{"text":"\\section{Introduction}\n\nQuantum metrology is an emerging cross-disciplinary field between precision measurement and quantum technology,\nand has now become one of the most promising fields in quantum technology due to the general belief that it\ncould step into the industrial-grade applications in a short time~\\cite{Giovannetti2004,Giovannetti2011,Degen2017,\nBraun2018,Pezze2018}. Meanwhile, its development not only benefits the applied technologies like the magnetometry,\nthermometry, and gravimetry, but also the studies in fundamental physics such as the detection of gravitational\nwaves~\\cite{LIGO2013} and the search of dark matters~\\cite{Backes2021,Jiang2021}. As the theoretical support of\nquantum metrology, quantum parameter estimation started from 1960th~\\cite{Helstrom1967}, and has become an\nindispensable component of quantum metrology nowadays~\\cite{Paris2009,Toth2014,Szczykulska2016,Liu2020,Rafal2020,\nSidhu2020,Albarelli2020,Liu2022,Reich2013a,Reich2013b,Goerz2014}.\n\nOne of the key challenges in quantum parameter estimation is to design optimal schemes with quantum apparatuses\nand quantum resources, leading to enhanced precision when compared with their classical counterparts. A typical\nscheme in quantum parameter estimation usually contains four steps: (1) preparation; (2) parameterization; (3)\nmeasurement; and (4) classical estimation. The first step is the preparation of the probe state. The parameters\nto be estimated are involved in the second step, which is also known as sensing in the field of quantum sensing.\nWith the parameterized state given in the second step, the third step is to perform the quantum measurement, which\nresults in a set of probability distributions. Estimating the unknown parameters from the obtained probability\ndistributions is finished in the last step. The design of an optimal scheme usually requires the optimizations of\nsome or all of the steps above.\n\nIn quantum parameter estimation, there exist various mathematical bounds to depict the theoretical precision\nlimit. Depending on the type of the bound considered, it will be more or less informative depending on the type\nof estimation scenario considered, be it: single-shot vs. many-repetition scenario, single vs. multiple-parameter\nscenario, etc. Moreover, by choosing different objective functions when optimizing quantum estimation schemes,\none may arrive at solutions with contrastingly different robustness properties, complexity of practical implementation\nand so on. Hence, the design of optimal schemes has to be performed case by case most of the time. This is the reason\nwhy a general quantum parameter estimation toolkit is needed. Developing such a toolkit is the major motivation of\nthis work.\n\nCurrently, there exist many useful toolkits based on various platforms in quantum information and quantum metrology.\nA famous one is the QuTiP developed by Johansson, Nation, and Nori~\\cite{Johansson2012,Johansson2013} in 2012,\nwhich can execute many basic calculations in quantum information. In the field of quantum control, Machnes et\nal.~\\cite{Machnes2011} developed DYNAMO and Hogben et al. developed Spinach~\\cite{Hogben2011} based on Matlab.\nGoerz et al. developed Krotov~\\cite{Goerz2019}, which owns three versions based on Fortran, Python, and Julia,\nrespectively. G\\\"{u}nther et al. developed Quandary~\\cite{Gunther2021} based on C++. 
Moreover, there exist other packages like Kwant~\\cite{Groth2014} for quantum transport and ProjectQ~\\cite{Steiger2018} for quantum computing. In quantum metrology, Chabuda and Demkowicz-Dobrza\\'{n}ski developed TNQMetro~\\cite{Chabuda2021}, a tensor-network based Python package to perform efficient quantum metrology computations.\n\nHere we present a new toolkit, QuanEstimation, based on both Python and Julia, for quantum parameter estimation, and provide some examples to demonstrate its usage and performance. QuanEstimation contains several widely used metrological tools, such as the asymptotic Fisher-information-based quantities as well as their Bayesian counterparts (including direct Bayesian cost minimization, Bayesian versions of the classical and quantum Cram\\'{e}r-Rao bounds, as well as the quantum Ziv-Zakai bound). For the sake of scheme design, QuanEstimation can execute the optimizations of the probe state, control, and measurement, as well as the simultaneous optimizations among them, with both gradient-based and gradient-free methods. Since adaptive measurement schemes are most of the time the best practical way to realize the asymptotic advantage indicated by the quantum Fisher information, QuanEstimation can also execute online adaptive measurement schemes, such as adaptive phase estimation, and provide the real-time values of the tunable parameters that can be directly used in an experiment.\n\n\\section{Overview}\n\n\\begin{figure*}[bt]\n\\centering\\includegraphics[width=17cm]{Fig_schematic.pdf}\n\\caption{Schematic of the package structure of QuanEstimation. The blue boxes, white boxes with blue edges, white boxes with orange edges, gray boxes, and gray boxes with dotted orange boundaries represent the folders, files, classes, functions or methods, and wrapped Julia methods which are executed in Julia scripts, respectively.\n\\label{fig:package_structure}}\n\\end{figure*}\n\nQuanEstimation is a scientific computing package focusing on the calculations and optimizations in quantum parameter estimation. It is based on both Python and Julia. The interface is written in Python due to the fact that nowadays Python is one of the most popular platforms for scientific computing. However, QuanEstimation contains many optimization processes which need to execute massive numbers of elementary operations such as loops. These elementary operations could be very time-consuming in Python, and thus strongly affect the efficiency of the optimizations. This is why Julia is involved in this package. Julia has many appealing features, such as optional typing and multiple dispatch, and these features make loops and other elementary calculations take significantly less time than in Python. Hence, the optimizations in QuanEstimation are all performed in Julia. Nevertheless, the Julia community is currently not comparable to that of Python, and the hybrid structure of this package allows people who are not familiar with Julia to use the package without any obstacle. In the meantime, QuanEstimation has a full Julia version for users experienced in Julia.\n\nThe package structure of QuanEstimation is illustrated in Fig.~\\ref{fig:package_structure}. The blue boxes and white boxes with blue edges represent the folders and files. The white boxes with orange edges and the gray boxes represent classes and functions\/methods. 
The gray boxes with dotted orange boundaries are wrapped Julia methods which are executed in Julia, namely, this part of the calculation is sent to Julia for execution.\n\nThe functions for the calculation of the parameterization process and dynamics are in the folder named \"Parameterization\". In this folder, the file \"GeneralDynamics.py\" contains the functions to solve the Lindblad-type master equation. Currently, the master equation is solved via the matrix exponential. To improve the efficiency, the calculation of the dynamics via the matrix exponential is executed in Julia, and when the calculation is finished, the data is sent back to Python for further use. The file \"NonDynamics.py\" contains the non-dynamical methods for the parameterization, which currently include the description via Kraus operators. Details and the usage of these functions will be thoroughly introduced in Sec.~\\ref{sec:para}.\n\nThe functions for the calculation of the metrological tools and bounds are distributed in two folders named \"AsymptoticBound\" and \"BayesianBound\". In the folder \"AsymptoticBound\", the file \"CramerRao.py\" contains the functions to calculate the quantities related to the quantum Cram\\'{e}r-Rao bounds, and the file \"Holevo.py\" contains those to calculate the Holevo-type quantum Cram\\'{e}r-Rao bound. In the folder \"BayesianBound\", the file \"BayesCramerRao.py\" contains the functions to calculate several versions of the Bayesian classical and quantum Cram\\'{e}r-Rao bounds, and \"ZivZakai.py\" contains the function to calculate the quantum Ziv-Zakai bound. The file \"BayesEstimation.py\" contains the functions to execute the Bayesian estimation and the maximum likelihood estimation. The aforementioned metrological tools and the corresponding rules to call them will be given in Sec.~\\ref{sec:tools}.\n\nThe functions for the calculation of metrological resources are placed in the folder named \"Resource\". In this folder, the file \"Resource.py\" currently contains two types of resources: the spin squeezing and the target time to reach a given value of an objective function, which will be thoroughly introduced in Sec.~\\ref{sec:resource}. The resources that can be readily calculated via QuTiP~\\cite{Johansson2012,Johansson2013} are not included at this moment.\n\nThe scripts for the control optimization, state optimization, measurement optimization, and comprehensive optimization are in the folders named \"ControlOpt\", \"StateOpt\", \"MeasurementOpt\", and \"ComprehensiveOpt\", respectively. The structures of these folders are basically the same, and here we only take the folder \"ControlOpt\" as a demonstration to explain the basic structure. In this folder, the file \"ControlStruct.py\" contains a function named {\\fontfamily{bch}\\selectfont\\small\\itshape ControlOpt()} and a class named {\\fontfamily{bch}\\selectfont\\small\\itshape ControlSystem()}. The function {\\fontfamily{bch}\\selectfont\\small\\itshape ControlOpt()} receives the initialization parameters given by the user and then delivers them to one of the classes in the files \"GRAPE\\_Copt.py\", \"PSO\\_Copt.py\", \"DE\\_Copt.py\", and \"DDPG\\_Copt.py\" according to the user's choice of the algorithm. These classes inherit the attributes in {\\fontfamily{bch}\\selectfont\\small\\itshape ControlSystem()}. 
Then based on\nthe choice of the objective function, the related parts in {\\fontfamily{bch}\\selectfont\\small\\itshape ControlSystem()} is called in these classes to\nfurther run the scripts in Julia. {\\fontfamily{bch}\\selectfont\\small\\itshape ControlSystem()} contains all the common parts that different algorithms\nwould use and the interface with the scripts in Julia. This design is to avoid the repetition codes in the algorithm\nfiles and let the extension neat and simple when more algorithms need to be included in the future. The usage of\nQuanEstimation for control optimization, state optimization, measurement optimization, and comprehensive optimization,\nas well as the corresponding illustrations will be thoroughly discussed in Secs.~\\ref{sec:control_opt}, \\ref{sec:state_opt},\n\\ref{sec:measurement_opt}, and~\\ref{sec:comprehensive_opt}, respectively.\n\nThe scripts for the adaptive measurement are in the folder named \"AdaptiveScheme\". In this folder, the file \"Adaptive.py\"\ncontains the class to execute the adaptive measurement scheme, and \"Adapt\\_MZI.py\" contains the class to generate\nonline and offline adaptive schemes in the Mach-Zehnder interferometer. The details of the adaptive scheme and how to\nperform it with QuanEstimation will be given in Sec.~\\ref{sec:adapt}.\n\nThe folder \"Common\" contains some common functions that are regularly called in QuanEstimation. Currently it\ncontains three functions. {\\fontfamily{bch}\\selectfont\\small\\itshape SIC()} is used to generate a set of rank-one symmetric informationally complete\npositive operator-valued measure. {\\fontfamily{bch}\\selectfont\\small\\itshape suN\\_generator()} is used to generate a set of su($N$) generators.\n{\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} is used to generate a legitimate form of Hamiltonian (or a set of Kraus operators) and its\nderivative, which can be used as the input in some functions in \"BayesEstimation.py\" and \"Adaptive.py\".\n\nAll the Julia scripts are located in the folder named \"JuliaSrc\". One design principle of QuanEstimation for the\noptimizations is that once the calculation goes into the parts in Julia, it will stay in Julia until all the calculations\nare finished and data generated. Hence, \"JuliaSrc\" also contains the scripts to calculate the metrological tools and\nresources for the sake of internal calling in Julia. To keep a high extendability, the optimizations are divided into\nfour elements in Julia, including the scenario of optimization, the algorithm, the parameterization process and the\nobjective function, which are distributed in the files \"OptScenario.jl\", \"Algorithm.jl\", \"Parameterization.jl\", and\n\"ObjectiveFunc.jl\" in the folders \"OptScenario\", \"Algorithm\", \"Parameterization\", and \"ObjectiveFunc\", respectively.\nOnce the information and parameter settings of all elements are input by the user, they are sent to the file \"run.jl\",\nwhich is further used to execute the program. As a matter of fact, \"JuliaSrc\" is also an independent package. If the\nusers are familiar with the language of Julia, they can directly use the full Julia package.\n\nSimilar to other packages, the usage of QuanEstimation requires the existence of some other packages in the environment.\nIn python it requires the pre-installation of numpy, scipy, sympy, cvxpy, and more$\\_$itertools. 
In Julia it requires\nthe pre-installation of LinearAlgebra, Zygote, Convex, SCS, ReinforcementLearning, SparseArrays, DelimitedFiles,\nStatsBase, BoundaryValueDiffEq, Random, Trapz, Interpolations, Printf, IntervalSets, StableRNGs, and Flux. The calling\nof the package in Python can be done with the following line of codes:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nfrom quanestimation import *\n\\end{lstlisting}\nAll the scripts demonstrated in the following are based on this calling form.\n\n\n\\section{Parameterization process}\n\\label{sec:para}\n\nThe parameterization process is a key step in the quantum parameter estimation, and in physical terms this process\ncorresponds to a parameter dependent quantum dynamics. Hence, the ability to solve the dynamics is an indispensable\nelement of numerical calculations in quantum parameter estimation. In QuanEstimation, we mainly focus on the dynamics\ngoverned by the quantum master equation\n\\begin{align}\n\\partial_t\\rho &=\\mathcal{L}\\rho \\nonumber \\\\\n&=-i[H,\\rho]+\\sum_i \\gamma_i\\left(\\Gamma_i\\rho\\Gamma^{\\dagger}_i\n-\\frac{1}{2}\\left\\{\\rho,\\Gamma^{\\dagger}_i \\Gamma_i \\right\\}\\right) \\label{eq:mastereq},\n\\end{align}\nwhere $\\rho$ is the evolved density matrix, $H$ is the Hamiltonian of the system, and $\\Gamma_i$ and $\\gamma_i$\nare the $i$th decay operator and decay rate, respectively. The total Hamiltonian $H$ includes two terms, the\nfree Hamiltonian $H_0(\\bold{x})$, which is a function of the parameters $\\bold{x}$ and control Hamiltonian\n$H_{\\mathrm{c}}$. In the quantum parameter estimation, most calculations require the dynamical information\nof $\\rho$ and its derivatives with respect to $\\bold{x}$, which is denoted by $\\partial_{\\bold{x}}\\rho:=\n(\\partial_{0}\\rho,\\partial_1\\rho,\\dots)$ with $\\partial_a$ short for $\\partial_{x_a}$. Hence, in the package\n$\\rho$ and $\\partial_{\\bold{x}}\\rho$ can be found simultaneously via the codes:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\ndynamics = Lindblad(tspan,rho0,H0,dH,\n decay=[],Hc=[],ctrl=[])\nrho,drho = dynamics.expm()\n\\end{lstlisting}\nHere the input {\\fontfamily{bch}\\selectfont\\small\\itshape tspan} is an array representing the time length for the evolution and {\\fontfamily{bch}\\selectfont\\small\\itshape rho0}\nis a matrix representing the initial (probe) state. {\\fontfamily{bch}\\selectfont\\small\\itshape H0} is a matrix or a list representing the free\nHamiltonian. It is a matrix when the free Hamiltonian is time-independent and a list (the length equals to\nthat of {\\fontfamily{bch}\\selectfont\\small\\itshape tspan}) when it is time-dependent. {\\fontfamily{bch}\\selectfont\\small\\itshape dH} is a list containing the derivatives of\n$H_0(\\bold{x})$ on $\\bold{x}$, i.e., $[\\partial_a H_0,\\partial_b H_0,\\dots]$. {\\fontfamily{bch}\\selectfont\\small\\itshape decay} is a list including\nboth decay operators and decay rates, and its input rule is {\\fontfamily{bch}\\selectfont\\small\\itshape decay=[[Gamma1,gamma1],[Gamma2,gamma2],\\dots]},\nwhere {\\fontfamily{bch}\\selectfont\\small\\itshape Gamma1} ({\\fontfamily{bch}\\selectfont\\small\\itshape Gamma2}) and {\\fontfamily{bch}\\selectfont\\small\\itshape gamma1} ({\\fontfamily{bch}\\selectfont\\small\\itshape gamma2}) represent $\\Gamma_1$\n($\\Gamma_2$) and $\\gamma_1$ ($\\gamma_2$), respectively. The default value is empty which means the dynamics is\nunitary. 
{\\fontfamily{bch}\\selectfont\\small\\itshape Hc} is a list of matrices representing the control Hamiltonians; when it is empty, the dynamics is governed only by the free Hamiltonian. {\\fontfamily{bch}\\selectfont\\small\\itshape ctrl} (default value is empty) is a list of arrays containing the control amplitudes with respect to the control Hamiltonians in {\\fontfamily{bch}\\selectfont\\small\\itshape Hc}. The output {\\fontfamily{bch}\\selectfont\\small\\itshape rho} is a list representing the density matrices in the dynamics. {\\fontfamily{bch}\\selectfont\\small\\itshape drho} is also a list, and its $i$th entry is a list containing all derivatives $\\partial_{\\bold{x}}\\rho$ at the $i$th time interval. The dynamics in the package is solved by the matrix exponential, i.e., the density matrix at the $j$th time interval is calculated via $\\rho_j=e^{\\Delta t_j\\mathcal{L}}\\rho_{j-1}$ with $\\Delta t_j$ a small time interval and $\\rho_{j-1}$ the density matrix at the previous time interval. $\\partial_{\\bold{x}}\\rho_j$ is solved by the iterative equation\n\\begin{align}\n\\partial_{\\bold{x}}\\rho_j &=\\Delta t_j(\\partial_{\\bold{x}}\\mathcal{L})\\rho_j\n+e^{\\Delta t_j \\mathcal{L}}(\\partial_{\\bold{x}}\\rho_{j-1}) \\nonumber \\\\\n&=-i\\Delta t_j[\\partial_{\\bold{x}}H_0, \\rho_j]+e^{\\Delta t_j \\mathcal{L}}(\\partial_{\\bold{x}}\\rho_{j-1}).\n\\end{align}\nIn the package $\\Delta t_j$ is automatically obtained by calculating the difference between the $j$th and $(j-1)$th entries in {\\fontfamily{bch}\\selectfont\\small\\itshape tspan}. The numerical accuracy of the equation above is limited by the set of $\\{\\Delta t_j\\}$, indicating that smaller values of $\\{\\Delta t_j\\}$ generally improve the accuracy. However, a smaller $\\{\\Delta t_j\\}$ also means a larger number of calculation steps for a fixed evolution time, resulting in a greater time consumption. Hence, in practice reasonable values of $\\{\\Delta t_j\\}$ should be chosen to balance accuracy and time consumption.\n\nThe calculation of metrological bounds, which will be discussed in the next section, does not rely on calling the intrinsic dynamics above, as these calculations only require the input of $\\rho$ and $\\partial_{\\bold{x}}\\rho$ (and other essential parameters), not any dynamical information. Hence, the dynamics can also be solved by other packages like QuTiP~\\cite{Johansson2012,Johansson2013}.\n\nIn certain cases, the parameterization process can be described by some non-dynamical methods, such as the Kraus operators. In this case, the parameterized density matrix can be expressed by\n\\begin{equation}\n\\rho(\\bold{x})=\\sum_i K_i(\\bold{x})\\rho_0 K_i^{\\dagger}(\\bold{x}),\n\\label{eq:kraus_opt}\n\\end{equation}\nwhere $K_i(\\bold{x})$ is a Kraus operator satisfying $\\sum_{i}K^{\\dagger}_i K_i=\\openone$ with $\\openone$ the identity operator, and $\\rho_0$ is the probe state, which is independent of the unknown parameters. 
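For instance (a purely illustrative case), a unitary parameterization corresponds to a single Kraus operator; for the phase channel $U(x)=e^{-ix\\sigma_3\/2}$ one has\n\\begin{equation}\nK_0(x)=e^{-ix\\sigma_3\/2},\\quad \\partial_x K_0(x)=-\\frac{i}{2}\\sigma_3 e^{-ix\\sigma_3\/2},\n\\end{equation}\nwhich are the quantities required as inputs below. 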
In QuanEstimation,\n$\\rho$ and $\\partial_{\\bold{x}}\\rho$ obtained from Kraus operators can be solved via the codes:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nrho,drho = Kraus(rho0,K,dK)\n\\end{lstlisting}\nHere {\\fontfamily{bch}\\selectfont\\small\\itshape rho0} is a matrix representing the probe state, {\\fontfamily{bch}\\selectfont\\small\\itshape K} is a list of matrices with each\nentry a Kraus operator, and {\\fontfamily{bch}\\selectfont\\small\\itshape dK} is a list with $i$th entry also a list representing the derivatives\n$\\partial_{\\bold{x}}K_i$.\n\nThe aforementioned functions only calculate $\\rho$ and $\\partial_{\\bold{x}}\\rho$ at a fixed point of $\\bold{x}$.\nHowever, in the Bayesian scenarios, the values of $\\rho$ and $\\partial_{\\bold{x}}\\rho$ with respect to a regime\nof $\\bold{x}$ may be in need. In this case, if the users can provide the specific functions of $H$ and\n$\\partial_{\\bold{x}}H$, or Kraus operators $\\{K_i\\}$ and derivatives $\\{\\partial_{\\bold{x}} K_i\\}$, the variables\n{\\fontfamily{bch}\\selectfont\\small\\itshape H}, {\\fontfamily{bch}\\selectfont\\small\\itshape dH} (or {\\fontfamily{bch}\\selectfont\\small\\itshape K}, {\\fontfamily{bch}\\selectfont\\small\\itshape dK}) can be generated by the function\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\nH0,dH = BayesInput(x,func,dfunc,\n channel=\"dynamics\")\n\\end{lstlisting}\nHere {\\fontfamily{bch}\\selectfont\\small\\itshape x} is a list of arrays representing the regime of $\\bold{x}$. {\\fontfamily{bch}\\selectfont\\small\\itshape H0} is a list of matrices\nrepresenting the free Hamiltonian with respect to the values in {\\fontfamily{bch}\\selectfont\\small\\itshape x}, and it is multidimensional in the\ncase that {\\fontfamily{bch}\\selectfont\\small\\itshape x} has more than one entry. {\\fontfamily{bch}\\selectfont\\small\\itshape dH} is a (multidimensional) list with each entry also a\nlist representing $\\partial_{\\bold{x}}H$ with respect to the values in {\\fontfamily{bch}\\selectfont\\small\\itshape x}. {\\fontfamily{bch}\\selectfont\\small\\itshape func} and\n{\\fontfamily{bch}\\selectfont\\small\\itshape dfunc} are the handles of the functions {\\fontfamily{bch}\\selectfont\\small\\itshape func()} and {\\fontfamily{bch}\\selectfont\\small\\itshape dfunc()}, which are defined\nby the users representing $H(\\bold{x})$ and $\\partial_{\\bold{x}}H(\\bold{x})$. Notice that the output of\n{\\fontfamily{bch}\\selectfont\\small\\itshape dfunc()} should also be a list representing $[\\partial_0 H,\\partial_1 H,\\dots]$. The output of\n{\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} can be switched between {\\fontfamily{bch}\\selectfont\\small\\itshape H}, {\\fontfamily{bch}\\selectfont\\small\\itshape dH} and {\\fontfamily{bch}\\selectfont\\small\\itshape K}, {\\fontfamily{bch}\\selectfont\\small\\itshape dK} by\nsetting {\\fontfamily{bch}\\selectfont\\small\\itshape channel=\"dynamics\"} or {\\fontfamily{bch}\\selectfont\\small\\itshape channel=\"Kraus\"}. 
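As a minimal usage sketch (the single-qubit Hamiltonian $H(x)=x\\sigma_3\/2$ and the sampled regime below are only illustrative assumptions), the two handles can be defined with NumPy and passed to {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} as follows:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nimport numpy as np\n\n# illustrative Hamiltonian H(x)=x*sigma_3\/2 and its derivative\nsigma3 = np.array([[1.+0.j, 0.+0.j], [0.+0.j, -1.+0.j]])\ndef func(x):\n    return 0.5*x*sigma3\ndef dfunc(x):\n    return [0.5*sigma3]\n\n# sampled regime of the unknown parameter x\nx = [np.linspace(0., np.pi\/2., 100)]\nH0,dH = BayesInput(x,func,dfunc,\n                   channel=\"dynamics\")\n\\end{lstlisting}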
After calling {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()},\n$\\rho$ and $\\partial_{\\bold{x}}\\rho$ can be further obtained via calling of {\\fontfamily{bch}\\selectfont\\small\\itshape Lindblad()} and {\\fontfamily{bch}\\selectfont\\small\\itshape Kraus()}.\n\n\\section{Quantum metrological tools}\n\\label{sec:tools}\n\nIn this section, we will briefly introduce the metrological tools that have been involved in QuanEstimation and\ndemonstrate how to calculate them with our package. Both asymptotic and Bayesian tools are included, such as the\nquantum Cram\\'{e}r-Rao bounds, Holevo Cram\\'{e}r-Rao bound, Bayesian estimation, and Bayesian type of Cram\\'{e}r-Rao\nbounds like Van Trees bound and Tsang-Wiseman-Caves bound.\n\n\\subsection{Quantum Cram\\'{e}r-Rao bounds}\n\\label{sec:QCRB}\n\nQuantum Cram\\'{e}r-Rao bounds~\\cite{Helstrom1976,Holevo1982} are the most renown metrological tools in quantum\nparameter estimation. Let $\\rho=\\rho(\\bold{x})$ be a parameterized density matrix and\n$\\{\\Pi_y\\}$ a set of positive operator-valued measure (POVM), then the covariance matrix\n$\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\}):=\\sum_y\\mathrm{Tr}(\\rho\\Pi_y)(\\hat{\\bold{x}}-\\bold{x})(\\hat{\\bold{x}}\n-\\bold{x})^{\\mathrm{T}}$ for the unknown parameters $\\bold{x}=(x_0,x_1,\\dots)^{\\mathrm{T}}$ and the corresponding\nunbiased estimators $\\hat{\\bold{x}}=(\\hat{x}_0,\\hat{x}_1,\\dots)^{\\mathrm{T}}$ satisfies the following\ninequalities~\\cite{Helstrom1976,Holevo1982}\n\\begin{equation}\n\\mathrm{cov}\\left(\\hat{\\bold{x}}, \\{\\Pi_y\\}\\right)\n\\geq \\frac{1}{n}\\mathcal{I}^{-1}\\left(\\{\\Pi_y\\}\\right)\n\\geq \\frac{1}{n} \\mathcal{F}^{-1},\n\\end{equation}\nwhere $n$ is the repetition of the experiment, $\\mathcal{I}$ is the classical Fisher information matrix (CFIM) and\n$\\mathcal{F}$ is the quantum Fisher information matrix (QFIM). Note that the estimators $\\hat{\\bold{x}}$ are in\nfact functions of the measurement outcomes $y$, and formally should always be written as $\\hat{{\\bold{x}}}(y)$.\nStill, we drop this explicit dependence on $y$ for conciseness of formulas. A thorough derivation\nof this bound can be found in a recent review~\\cite{Liu2020}.\n\nFor a set of discrete probability\ndistribution $\\{p(y|\\bold{x})=\\mathrm{Tr}(\\rho\\Pi_y)\\}$, the CFIM is defined by\n\\begin{equation}\n\\mathcal{I}_{ab}=\\sum_{y}\\frac{1}{p(y|\\bold{x})}[\\partial_a p(y|\\bold{x})][\\partial_b p(y|\\bold{x})].\n\\label{eq:CFIM}\n\\end{equation}\nHere $\\mathcal{I}_{ab}$ is short for $\\mathcal{I}_{x_a,x_b}$, the $ab$th entry of the CFIM. For a\ncontinuous probability density, the equation above becomes $\\mathcal{I}_{ab}=\\int \\frac{1}{p(y|\\bold{x})}\n[\\partial_a p(y|\\bold{x})][\\partial_b p(y|\\bold{x})]\\mathrm{d}y$. The diagonal entry $\\mathcal{I}_{aa}$\nis the classical Fisher information (CFI) for $x_a$.\n\nThe QFIM does not depend on the actual measurement performed, and one can encounter a few equivalent definitions\nof this quantity. The one the most often used reads:\n\\begin{equation}\n\\mathcal{F}_{ab}=\\frac{1}{2}\\mathrm{Tr}(\\rho\\{L_a, L_b\\})\n\\end{equation}\nwith $\\mathcal{F}_{ab}$ being the $ab$th entry of $\\mathcal{F}$ and $L_{a(b)}$ the symmetric logarithmic\nderivative (SLD) operator for $x_{a(b)}$. $\\{\\cdot,\\cdot\\}$ represents the anti-commutator. 
The\nSLD operator is Hermitian and determined by the equation\n\\begin{equation}\n\\partial_{a}\\rho=\\frac{1}{2}(\\rho L_{a}+L_{a}\\rho).\n\\end{equation}\nThe mathematical properties of the SLD operator and QFIM can be found in a recent review~\\cite{Liu2020}.\nThe diagonal entry of $\\mathcal{F}_{aa}$ is the quantum Fisher information (QFI) for $x_a$.\nUtilizing the spectral decomposition $\\rho=\\sum_{i}\\lambda_i |\\lambda_i\\rangle\\langle \\lambda_i|$, the\nSLD operator can be calculated via the equation\n\\begin{equation}\n\\langle\\lambda_i|L_{a}|\\lambda_j\\rangle=\\frac{2\\langle\\lambda_i| \\partial_{a}\\rho |\\lambda_j\\rangle}\n{\\lambda_i+\\lambda_j}, \\label{eq:SLD_eigen}\n\\end{equation}\nfor $\\lambda_i$ or $\\lambda_j$ not equal to zero. For $\\lambda_i=\\lambda_j=0$, the corresponding matrix entry of\n$L_a$ can be set to zero.\n\nIn QuanEstimation, the SLD operator can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nSLD(rho,drho,rep=\"original\",eps=1e-8)\n\\end{lstlisting}\nHere the input {\\fontfamily{bch}\\selectfont\\small\\itshape rho} is a matrix representing the parameterized density matrix, and {\\fontfamily{bch}\\selectfont\\small\\itshape drho}\nis a list of matrices representing the derivatives of the density matrix on $\\bold{x}$, i.e.,\n$[\\partial_0\\rho,\\partial_1\\rho,\\dots]$. When {\\fontfamily{bch}\\selectfont\\small\\itshape drho} only contains one entry ($[\\partial_0 \\rho]$),\nthe output of {\\fontfamily{bch}\\selectfont\\small\\itshape SLD()} is a matrix ($L_0$), and it is a list ($[L_0,L_1,\\dots]$) otherwise. The basis\nof the output SLD can be adjusted via the variable {\\fontfamily{bch}\\selectfont\\small\\itshape rep}. The default choice {\\fontfamily{bch}\\selectfont\\small\\itshape rep=\"original\"}\nmeans the basis is the same with that of the input density matrix. The other choice is {\\fontfamily{bch}\\selectfont\\small\\itshape rep=\"eigen\"},\nwhich means the SLD is written in the eigenspace of the density matrix. Due to the fact that the entries of SLD\nin the kernel are arbitrary, in the package they are just set to be zeros for simplicity. The default machine\nepsilon is {\\fontfamily{bch}\\selectfont\\small\\itshape eps=1e-8}, which can be modified as required. Here the machine epsilon means that if a\neigenvalue of the density matrix is less than the given number ($10^{-8}$ by default), it will be treated as\nzero in the calculation of SLD.\n\nApart from the SLD operator, the QFIM can also be defined via other types of logarithmic derivatives.\nSome well-used ones are the right and left logarithmic derivatives (RLD, LLD)~\\cite{Holevo1982,Yuen1973}. The RLD\nand LLD are determined by $\\partial_{a}\\rho=\\rho \\mathcal{R}_a$ and $\\partial_{a}\\rho=\\mathcal{R}_a^{\\dagger}\\rho$,\nrespectively. Utilizing the spectral decomposition, the entries of RLD and LLD can be calculated as\n\\begin{align}\n\\langle\\lambda_i| \\mathcal{R}_{a} |\\lambda_j\\rangle\n&= \\frac{1}{\\lambda_i}\\langle\\lambda_i| \\partial_{a}\\rho |\\lambda_j\\rangle,~~\\lambda_i\\neq 0; \\\\\n\\langle\\lambda_i| \\mathcal{R}_{a}^{\\dagger} |\\lambda_j\\rangle\n&= \\frac{1}{\\lambda_j}\\langle\\lambda_i| \\partial_{a}\\rho |\\lambda_j\\rangle,~~\\lambda_j\\neq 0.\n\\end{align}\nThe corresponding QFIM is $\\mathcal{F}_{ab}=\\mathrm{Tr}(\\rho \\mathcal{R}_a \\mathcal{R}^{\\dagger}_b)$. 
In QuanEstimation, the RLD and LLD can be calculated via the functions {\\fontfamily{bch}\\selectfont\\small\\itshape RLD()} and {\\fontfamily{bch}\\selectfont\\small\\itshape LLD()}, respectively. The inputs are the same as those of {\\fontfamily{bch}\\selectfont\\small\\itshape SLD()}. Notice that the RLD and LLD only exist when the support of $\\rho$ contains the support of $\\partial_a\\rho$. Hence, if this condition is not satisfied, the calculation will be terminated and a warning will be raised to remind the user that the RLD and LLD do not exist in this case.\n\nIn QuanEstimation, the QFIM and QFI can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nQFIM(rho,drho,LDtype=\"SLD\",exportLD=False,\n eps=1e-8)\n\\end{lstlisting}\nHere {\\fontfamily{bch}\\selectfont\\small\\itshape LDtype=\" \"} is the type of logarithmic derivatives, including {\\fontfamily{bch}\\selectfont\\small\\itshape \"SLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"RLD\"}, and {\\fontfamily{bch}\\selectfont\\small\\itshape \"LLD\"}. Notice that the values of the QFIM based on the RLD and LLD are actually the same whenever the RLD and LLD exist. If {\\fontfamily{bch}\\selectfont\\small\\itshape exportLD=True}, apart from the QFIM, the corresponding values of the logarithmic derivatives in the original basis will also be exported.\n\nIn the case that the parameterization is described via the Kraus operators, the QFIM can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nQFIM_Kraus(rho0,K,dK,LDtype=\"SLD\",\n exportLD=False,eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape rho0} is a matrix representing the density matrix of the initial state. {\\fontfamily{bch}\\selectfont\\small\\itshape K} is a list of matrices with each entry a Kraus operator, and {\\fontfamily{bch}\\selectfont\\small\\itshape dK} is a list with its $i$th entry also a list representing the derivatives $\\partial_{\\bold{x}}K_i$.\n\nThe CFIM and CFI for a fully classical scenario can be calculated by the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nFIM(p,dp,eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape p} is an array representing the probability distribution and {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is a list with its $i$th entry also a list containing the derivatives of $p_i$ with respect to $\\bold{x}$, i.e., $[\\partial_0 p_i,\\partial_1 p_i,\\dots]$. For a quantum scenario, the CFIM can be calculated by\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nCFIM(rho,drho,M=[],eps=1e-8)\n\\end{lstlisting}\nThe variable {\\fontfamily{bch}\\selectfont\\small\\itshape M} is a list containing a set of POVM. The default measurement is a set of rank-one symmetric informationally complete POVM (SIC-POVM)~\\cite{Gour2014,Fuchs2017,Renes2004}. A set of rank-one SIC-POVM $\\{\\frac{1}{d}|\\phi_j\\rangle\\langle\\phi_j|\\}^{d^2}_{j=1}$ satisfies $|\\langle\\phi_j|\\phi_k\\rangle|^2=(d\\delta_{jk}+1)\/(d+1)$ for any $j$ and $k$ with $|\\phi_j\\rangle$ a normalized quantum state and $d$ the dimension of the Hilbert space. One way to construct a set of SIC-POVM is to utilize the Weyl-Heisenberg operators~\\cite{Renes2004,Scott2010}, which are defined by $D_{ab}=(-e^{i\\pi\/d})^{ab}A^{a}B^{b}$. 
The operators $A$ and $B$ satisfy $A|k\\rangle=|k+1\\rangle$, $B|k\\rangle=e^{i2\\pi k\/d}|k\\rangle$ with $\\{|k\\rangle\\}^{d-1}_{k=0}$ an orthonormal basis in the Hilbert space. There exists a normalized fiducial vector $|\\psi\\rangle$ in the Hilbert space such that $\\{\\frac{1}{d}D_{ab}|\\psi\\rangle\\langle\\psi|D^{\\dagger}_{ab}\\}^d_{a,b=1}$ is a set of SIC-POVM. In the package, $|\\psi\\rangle$ is taken as the one numerically found by Fuchs et al. in Ref.~\\cite{Fuchs2017}. If the users want to see the specific form of the SIC-POVM, the function {\\fontfamily{bch}\\selectfont\\small\\itshape SIC(n)} can be called. The input {\\fontfamily{bch}\\selectfont\\small\\itshape n} is the dimension of the density matrix. Currently, the function {\\fontfamily{bch}\\selectfont\\small\\itshape SIC(n)} is only valid for $n\\leq 151$.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_QFI_illus.pdf}\n\\caption{The demonstration code for the calculation of the QFI and CFI with QuanEstimation. The inset shows the evolution of $\\mathcal{F}_{\\omega\\omega}\/t$ (solid blue line) and $\\mathcal{I}_{\\omega\\omega}\/t$ (dashed red line). The initial state is $|+\\rangle$. The true value of $\\omega$ ($\\omega_{\\mathrm{tr}}$) is set to be $1$, and the decay rates are set to be $\\gamma_{+}\/\\omega_{\\mathrm{tr}}=0$ and $\\gamma_{-}\/\\omega_{\\mathrm{tr}}=0.1$. Planck units are applied here.\n\\label{fig:QFI_code}}\n\\end{figure}\n\nIn both functions {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()} and {\\fontfamily{bch}\\selectfont\\small\\itshape CFIM()}, the outputs are real numbers ($\\mathcal{F}_{aa}$ and $\\mathcal{I}_{aa}$) in the single-parameter case, namely, when {\\fontfamily{bch}\\selectfont\\small\\itshape drho} only contains one entry, and they are real symmetric or Hermitian matrices in the multi-parameter scenarios. The basis of the QFIM and CFIM is determined by the order of entries in {\\fontfamily{bch}\\selectfont\\small\\itshape drho}. For example, when {\\fontfamily{bch}\\selectfont\\small\\itshape drho} is $[\\partial_0\\rho,\\partial_1\\rho,\\dots]$, the basis of the QFIM and CFIM is $\\{x_0,x_1,\\dots\\}$.\n\nFor some specific scenarios, the calculation method in {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()} may not be efficient enough. Therefore, we also provide the calculation of the QFIM for some specific scenarios. The first one is the calculation in the Bloch representation. In this case, the function for the calculation of the QFIM is of the form:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nQFIM_Bloch(r,dr,eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape r} is an array representing a Bloch vector and {\\fontfamily{bch}\\selectfont\\small\\itshape dr} is a list of arrays representing the derivatives of the Bloch vector with respect to $\\bold{x}$. 
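As a minimal sketch (the specific Bloch vector and parameter value below are only illustrative assumptions), the QFI of a single-qubit state whose Bloch vector rotates with an unknown phase $x$ can be obtained via:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nimport numpy as np\n\n# illustrative Bloch vector r(x) and its derivative dr\/dx\nx, eta = 0.5, 0.9\nr = eta*np.array([np.cos(x), np.sin(x), 0.])\ndr = [eta*np.array([-np.sin(x), np.cos(x), 0.])]\nF = QFIM_Bloch(r, dr)\n\\end{lstlisting}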
Gaussian states are very commonly used in quantum metrology, and the\ncorresponding QFIM can be calculated by the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nQFIM_Gauss(R,dR,D,dD)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape R} is an array representing the first-order moment, i.e., the expected value\n$\\langle\\bold{R}\\rangle:=\\mathrm{Tr}(\\rho\\bold{R})$ of the vector $\\bold{R}=(q_1,p_1,q_2,p_2,\\dots)^{\\mathrm{T}}$,\nwhere $q_i=(a_i+a^{\\dagger}_i)\/\\sqrt{2}$ and $p_i=(a_i-a^{\\dagger}_i)\/(i\\sqrt{2})$ are the quadrature operators\nwith $a_i$ ($a^{\\dagger}_i$) the annihilation (creation) operator of $i$th bosonic mode. {\\fontfamily{bch}\\selectfont\\small\\itshape dR} is a list\nwith $i$th entry also a list containing the derivatives $\\partial_{\\bold{x}}\\langle[\\bold{R}]_i\\rangle$. Here\n$[\\cdot]_i$ represents the $i$th entry of the vector. {\\fontfamily{bch}\\selectfont\\small\\itshape D} is a matrix representing the second-order\nmoment, $D_{ij}=\\langle [\\bold{R}]_i [\\bold{R}]_j+[\\bold{R}]_j[\\bold{R}]_i\\rangle\/2$, and {\\fontfamily{bch}\\selectfont\\small\\itshape dD} is a list\nof matrices representing the derivatives $\\partial_{\\bold{x}}D$. Notice that {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM\\_Bloch()} and\n{\\fontfamily{bch}\\selectfont\\small\\itshape QFIM\\_Gauss()} can only compute the SLD-based QFIM.\n\n\n\\emph{Example.} Now we present an example to show the usage of these functions. Consider a single qubit\nHamiltonian $H=\\omega\\sigma_3\/2$ with $\\sigma_3$ a Pauli matrix and $\\omega$ the frequency. Take $\\omega$\nas the parameter to be estimated and assume its true value (denoted by $\\omega_{\\mathrm{tr}}$) is 1.\nPlanck unit ($\\hbar=1$) is applied in the Hamiltonian. The dynamics is governed by the master equation\n\\begin{eqnarray}\n\\partial_t\\rho&=&-i\\left[H, \\rho\\right]+\\gamma_{+}\\left(\\sigma_+\\rho\\sigma_{-}-\\frac{1}{2}\n\\left\\{\\sigma_{-}\\sigma_{+}, \\rho\\right\\}\\right) \\nonumber\\\\\n& &+\\gamma_{-}\\left(\\sigma_-\\rho\\sigma_{+}-\\frac{1}{2}\\left\\{\\sigma_{+}\\sigma_{-}, \\rho\\right\\}\\right),\n\\label{eq:ME_spon}\n\\end{eqnarray}\nwhere $\\sigma_{\\pm}=(\\sigma_1\\pm\\sigma_2)\/2$ with $\\sigma_{1}$, $\\sigma_{2}$ also Pauli matrices.\n$\\gamma_{+}$ and $\\gamma_{-}$ are the decay rates. The measurement is taken as\n$\\{|+\\rangle\\langle+|,|-\\rangle\\langle-|\\}$ with\n\\begin{equation}\n|\\pm\\rangle:=\\frac{1}{\\sqrt{2}}(|0\\rangle\\pm|1\\rangle).\n\\end{equation}\nHere $|0\\rangle$ ($|1\\rangle$) is the eigenstate of $\\sigma_3$ with respect to the eigenvalue $1$ ($-1$). The\nspecific codes for the calculation of QFI\/CFI are given in Fig.~\\ref{fig:QFI_code}, and the corresponding evolution\nof $\\mathcal{F}_{\\omega\\omega}\/t$ (solid blue line) and $\\mathcal{I}_{\\omega\\omega}\/t$ (dashed red line)\nare shown in the inset.\n\n\n\\subsection{Holevo Cram\\'{e}r-Rao bound}\n\nHolevo Cram\\'{e}r-Rao bound (HCRB) is another useful asymptotic bound in quantum parameter\nestimation and tighter than the quantum Cram\\'{e}r-Rao bound in general. The HCRB can be expressed\nas~\\cite{Holevo1973,Rafal2020,Nagaoka1989,Hayashi2008}\n\\begin{equation}\n\\mathrm{Tr}(W\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\}))\\geq \\min_{\\bold{X},V} \\mathrm{Tr}(WV)\n\\end{equation}\nwith $W$ the weight matrix and $V$ a matrix satisfying $V\\geq Z(\\bold{X})$. 
Here $Z(\\bold{X})$ is a Hermitian matrix and its $ab$th entry is defined by $[Z(\\bold{X})]_{ab}:=\\mathrm{Tr}(\\rho X_a X_b)$, where $\\bold{X}=[X_0,X_1,\\cdots]$ is a vector of operators and its $a$th entry is defined by $X_a:=\\sum_y (\\hat{x}_a-x_a)\\Pi_y$ with $\\hat{x}_a$ the $a$th entry of $\\hat{\\bold{x}}$. To ensure that the local estimator $\\hat{\\bold{x}}$ is unbiased, $\\bold{X}$ needs to satisfy $\\mathrm{Tr}(X_a\\partial_b\\rho)=\\delta_{ab},\\,\\forall a, b$. Here $\\delta_{ab}$ is the Kronecker delta function. An equivalent formulation of the HCRB is~\\cite{Holevo1973,Rafal2020,Nagaoka1989,Hayashi2008}\n\\begin{equation}\n\\min_{\\bold{X},V}\\mathrm{Tr}(WV)\\!=\\!\\min_{\\bold{X}}~\\!\\mathrm{Tr}(W\\mathrm{Re}(Z))\n\\!+\\!\\Vert\\sqrt{W}\\mathrm{Im}(Z)\\sqrt{W}\\Vert,\n\\end{equation}\nwhere $\\mathrm{Re}(Z)$ and $\\mathrm{Im}(Z)$ represent the real and imaginary parts of $Z$, and $\\Vert\\cdot\\Vert$ is the trace norm, i.e., $\\Vert A\\Vert:=\\mathrm{Tr}\\sqrt{A^{\\dagger}A}$ for a matrix $A$. Numerically, in a specific matrix basis $\\{\\lambda_i\\}$ which satisfies $\\mathrm{Tr}(\\lambda_i\\lambda_j)=\\delta_{ij}$, the HCRB can be solved via semidefinite programming, as it can be reformulated into a linear semidefinite problem~\\cite{Albarelli2019}:\n\\begin{align}\n& \\min_{\\bold{X},V}~\\mathrm{Tr}(WV), \\nonumber \\\\\n& \\mathrm{subject}~\\mathrm{to}~\n\\begin{cases}\n\\left(\\begin{array}{cc}\nV & \\Lambda^{\\mathrm{T}}R^{\\dagger} \\\\\nR\\Lambda & \\openone\\\\\n\\end{array}\\right)\\geq 0, \\\\\n\\sum_i[\\Lambda]_{ai}\\mathrm{Tr}(\\lambda_i\\partial_b\\rho)=\\delta_{ab}.\n\\end{cases}\n\\end{align}\nHere the $ij$th entry of $\\Lambda$ is obtained by decomposing $\\bold{X}$ in the basis $\\{\\lambda_i\\}$, $X_i=\\sum_j [\\Lambda]_{ij}\\lambda_j$, and $R$ satisfies $Z=\\Lambda^{\\mathrm{T}}R^{\\dagger}R\\Lambda$. The semidefinite programming can be solved by the package CVXPY~\\cite{Diamond2016,Agrawal2018} in Python and Convex~\\cite{Udell2014} in Julia. In QuanEstimation, the HCRB can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nHCRB(rho,drho,W,eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape W} is the weight matrix, and {\\fontfamily{bch}\\selectfont\\small\\itshape rho}, {\\fontfamily{bch}\\selectfont\\small\\itshape drho} have been introduced previously. Since $Z_{aa}$ is equivalent to the variance of the unbiased observable $O:=\\sum_y\\hat{x}_a\\Pi_y$ [the unbiased condition is $\\mathrm{Tr}(\\rho O)=x_a$], i.e., $Z_{aa}=\\mathrm{Tr}(\\rho O^2)-[\\mathrm{Tr}(\\rho O)]^2$, in the case of single-parameter estimation the optimal $V$ is nothing but $Z_{aa}$ itself. Furthermore, it can be proved that $Z_{aa}\\geq 1\/\\mathcal{F}_{aa}$ and the equality is attainable asymptotically. Hence, one can see that $\\min_{X_a}Z_{aa}=1\/\\mathcal{F}_{aa}$, which means the HCRB is equivalent to the quantum Cram\\'{e}r-Rao bound in single-parameter estimation. Due to the better numerical efficiency of the QFI computation, whenever {\\fontfamily{bch}\\selectfont\\small\\itshape drho} has only one entry, the call to {\\fontfamily{bch}\\selectfont\\small\\itshape HCRB()} is automatically redirected to {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()} in the package. 
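As a brief usage sketch (the two-parameter qubit state, its derivatives, and the weight matrix below are arbitrary illustrative choices rather than a particular physical model), the bound can be evaluated as:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nimport numpy as np\n\n# illustrative two-parameter state and its derivatives\nrho = np.array([[0.6+0.j, 0.2-0.1j],\n                [0.2+0.1j, 0.4+0.j]])\ndrho = [np.array([[0.+0.j, -0.5j], [0.5j, 0.+0.j]]),\n        np.array([[0.5+0.j, 0.+0.j], [0.+0.j, -0.5+0.j]])]\nW = np.identity(2)\nf = HCRB(rho, drho, W)\n\\end{lstlisting}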
Similarly, if $W$ is a rank-one matrix, the HCRB also reduces to\n$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ and thus in this case the calculation of HCRB will also be replaced by\nthe calculation of QFIM.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.0cm]{Fig_HCRB.pdf}\n\\caption{Time evolution of $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ (solid red line),\n$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ (dashed black line) and HCRB (dash-dotted blue\nline) in the case of two-qubit system with the XX coupling. The probe state is\n$(|00\\rangle+|11\\rangle)\/\\sqrt{2}$. $W=\\openone$ and $\\omega_1=1$. The true values\nof $\\omega_2$ and $g$ are $1$ and $0.1$, respectively. The decay rates\n$\\gamma_1=\\gamma_2=0.05\\omega_1$. The POVM for $\\mathrm{Tr}(W\\mathcal{I}^{-1})$\nis $\\{\\Pi_1$, $\\Pi_2$, $\\openone-\\Pi_1-\\Pi_2\\}$ with $\\Pi_1=0.85|00\\rangle\\langle 00|$\nand $\\Pi_2=0.1|\\!+\\!+\\rangle\\langle+\\!+\\!|$. Planck units are applied here. }\n\\label{fig:HCRB}\n\\end{figure}\n\n\\emph{Example.} Now let us take a two-parameter estimation as an example to demonstrate the calculation of HCRB with\nQuanEstimation. Consider a two-qubit system with the XX coupling. The Hamiltonian of this system is\n\\begin{equation}\nH=\\omega_1\\sigma^{(1)}_3+\\omega_2\\sigma^{(2)}_3+g\\sigma^{(1)}_1\\sigma^{(2)}_1,\n\\end{equation}\nwhere $\\omega_{1}$, $\\omega_2$ are the frequencies of the first and second qubit,\n$\\sigma^{(1)}_{i}=\\sigma_{i}\\otimes\\openone$, and $\\sigma^{(2)}_{i}=\\openone\\otimes\\sigma_{i}$ for $i=1,2,3$.\n$\\openone$ is the identity matrix. Planck units are applied here ($\\hbar=1$). The parameters $\\omega_2$ and $g$ are\nthe ones to be estimated. The dynamics is governed by the master equation\n\\begin{equation}\n\\partial_t\\rho=-i\\left[H, \\rho\\right]+\\sum_{i=1,2}\\gamma_i\\left(\\sigma_3^{(i)}\\rho\\sigma_3^{(i)}-\\rho \\right)\n\\end{equation}\nwith $\\gamma_i$ the decay rate for $i$th qubit. The time evolutions of quantum Cram\\'{e}r-Rao bound\n[$\\mathrm{Tr}(W\\mathcal{F}^{-1})$], classical Cram\\'{e}r-Rao bound [$\\mathrm{Tr}(W\\mathcal{I}^{-1})$], and\nHCRB are shown in Fig.~\\ref{fig:HCRB}. The POVM for $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ is\n$\\{\\Pi_1$, $\\Pi_2$, $\\openone-\\Pi_1-\\Pi_2\\}$ with $\\Pi_1=0.85|00\\rangle\\langle 00|$ and\n$\\Pi_2=0.1|\\!++\\rangle\\langle++\\!|$. The probe state is $(|00\\rangle+|11\\rangle)\/\\sqrt{2}$ and the weight\nmatrix $W=\\openone$. As shown in this plot, HCRB (dash-dotted blue line) is tighter than $\\mathrm{Tr}(W\\mathcal{F}^{-1})$\n(solid red line), which is in agreement with the fact that the HCRB is in general tighter than the quantum Cram\\'{e}r-Rao\nbound, unless the quantum Cram\\'{e}r-Rao bound is attainable, in which case the two bounds coincide~\\cite{Rafal2020}.\nThe gap between $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ and HCRB indicates that the chosen measurement is not optimal.\n\n\\subsection{Bayesian estimation}\n\\label{sec:Bayesian}\n\nBayesian estimation is another well-used method in parameter estimation, in which the prior distribution is updated\nvia the posterior distribution obtained by the Bayes' rule\n\\begin{equation}\np(\\bold{x}|y)=\\frac{p(y|\\bold{x})p(\\bold{x})}{\\int p(y|\\bold{x})p(\\bold{x})\\mathrm{d}\\bold{x}},\n\\label{eq:Bayes_posterior}\n\\end{equation}\nwhere $p(\\bold{x})$ is the current prior distribution, $y$ is the result obtained in practice, and\n$\\int\\mathrm{d}\\bold{x}:=\\int\\mathrm{d}x_0\\int\\mathrm{d}x_1\\cdots$. 
The prior distribution is then updated with\n$p(\\bold{x}|y)$, and the estimated value of $\\bold{x}$ is obtained via a reasonable estimator, such as the\nexpected value $\\hat{\\bold{x}}=\\int\\bold{x} p(\\bold{x}|y)\\mathrm{d}\\bold{x}$ or the maximum a posteriori\nestimation (MAP), $\\hat{\\bold{x}}=\\mathrm{argmax}_{\\bold{x}}\\,p(\\bold{x}|y)$.\n\nIn QuanEstimation, the Bayesian estimation can be performed via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\npout,xout = Bayes(x,p,rho,y,M=[],\n estimator=\"mean\",savefile=False)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape x} is a list of arrays representing the regimes of $\\bold{x}$, which is the same with the\nfunction {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} discussed in Sec.~\\ref{sec:para}. Notice that in the package all the\ncalculations of the integrals over the prior distributions are performed discretely. Hence, for now the input prior\ndistribution is required to be an array, instead of a continuous function. {\\fontfamily{bch}\\selectfont\\small\\itshape p} is an array representing the\nvalues of $p(\\bold{x})$ with respect to $\\bold{x}$. It is multidimensional in the case of multiparameter estimation, i.e.,\nthe entry number of {\\fontfamily{bch}\\selectfont\\small\\itshape x} are at least two. The input {\\fontfamily{bch}\\selectfont\\small\\itshape rho} is a (multidimensional) list of matrices\nrepresenting the values of density matrix with respect to all values of $\\bold{x}$, which can be alternatively generated\nvia the function {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} if specific functions of $H$ and $\\partial_{\\bold{x}}H$ on $\\bold{x}$ can be\nprovided. {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]} is a list of matrices representing a set of POVM and its default setting is a SIC-POVM.\n{\\fontfamily{bch}\\selectfont\\small\\itshape y} is an array representing the results obtained in an experiment. The result corresponds to the POVM operator\ninput in {\\fontfamily{bch}\\selectfont\\small\\itshape M}, which means it is an integer between 0 and $d-1$ with $d$ the entry number of the set of\nPOVM. The type of estimator can be set via {\\fontfamily{bch}\\selectfont\\small\\itshape estimator=\" \"} and currently it has two choices. When\n{\\fontfamily{bch}\\selectfont\\small\\itshape estimator=\"mean\"} the estimator is the expected value, and when {\\fontfamily{bch}\\selectfont\\small\\itshape estimator=\"MAP\"} the estimator\nis the MAP. The output {\\fontfamily{bch}\\selectfont\\small\\itshape pout} (a multidimensional array) and {\\fontfamily{bch}\\selectfont\\small\\itshape xout} (an array) are the final\nposterior distribution and estimated value of $\\bold{x}$ obtained via the chosen estimator. When {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True},\ntwo files \"pout.npy\" and \"xout.npy\" will be generated, which include the updated $p(\\bold{x})$ and the corresponding optimal\n$\\bold{x}$ in all rounds. If the users call this function in the full-Julia package, the output files are \"pout.csv\"\nand \"xout.csv\".\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_Bayes_est.pdf}\n\\caption{Iteration of posterior distribution by the Bayes' rule. The inset shows the change\nof estimated value as a function of iteration for MAP (solid red line), MLE (dashed blue line),\nand expectation (dash-dotted green line). 
The dotted black line represents the true value.}\n\\label{fig:bayes_mle}\n\\end{figure}\n\n\\emph{Example.} Now let us consider a simple example with the Hamiltonian\n\\begin{equation}\nH = \\frac{\\kappa\\omega_0}{2}(\\sigma_1\\cos x + \\sigma_3\\sin x),\n\\label{eq:Bayes_demo}\n\\end{equation}\nwhere $x$, $\\kappa$ are two dimensionless parameters and $x$ is taken as the unknown one. Planck units are applied here ($\\hbar=1$) and $\\omega_0$ is set to be 1. The initial state is taken as $|+\\rangle$ and the target time $\\omega_0 T=1$. The prior distribution is assumed to be uniform in the regime $[0,\\pi\/2]$. The measurement is $\\{|+\\rangle\\langle +|,|-\\rangle\\langle-|\\}$. The experimental results are simulated by random sampling according to the probabilities $p(\\pm|x)=\\langle\\pm|\\rho|\\pm\\rangle$ with respect to the value $x=\\pi\/4$. As shown in Fig.~\\ref{fig:bayes_mle}, as the number of iterations grows, the deviation decreases monotonically and the estimated value (the center value of the distribution) approaches $\\pi\/4$, which can also be confirmed by the convergence of the estimated value (solid red line) shown in the inset. As a matter of fact, here maximum likelihood estimation (MLE) can also provide similar performance by taking the estimator that maximizes the likelihood function, $\\hat{\\bold{x}}=\\mathrm{argmax}_{\\bold{x}}\\,\\prod_i p(y_i|\\bold{x})$ (dashed blue line in the inset). In QuanEstimation, this MLE can be calculated by the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nLout,xout = MLE(x,rho,y,M=[],savefile=False)\n\\end{lstlisting}\nWhen {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True}, two files \"Lout.npy\" and \"xout.npy\" will be generated, which include all the data in the iterations.\n\nIn Bayesian estimation, another useful tool is the average Bayesian cost~\\cite{Robert2007} for the quadratic cost, which is defined by\n\\begin{equation}\n\\bar{C}:=\\int p(\\bold{x})\\sum_y p(y|\\bold{x})(\\bold{x}-\\hat{\\bold{x}})^{\\mathrm{T}}\nW(\\bold{x}-\\hat{\\bold{x}})\\,\\mathrm{d}\\bold{x}\n\\end{equation}\nwith $W$ the weight matrix. In QuanEstimation, this average Bayesian cost can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBayesCost(x,p,xest,rho,M,W=[],eps=1e-8)\n\\end{lstlisting}\nHere {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the same as those in {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()}. {\\fontfamily{bch}\\selectfont\\small\\itshape xest} is a list of arrays representing the estimator $\\hat{\\bold{x}}$. The $i$th entry of each array in {\\fontfamily{bch}\\selectfont\\small\\itshape xest} represents the estimator with respect to the $i$th result. In the case of the single-parameter scenario, $W$ is chosen to be 1 regardless of the input. The average Bayesian cost satisfies the inequality~\\cite{Rafal2020}\n\\begin{equation}\n\\bar{C}\\geq\\int p(\\bold{x})\\left(\\bold{x}^{\\mathrm{T}}W\\bold{x}\\right)\\mathrm{d}\\bold{x}\n-\\sum_{ab}W_{ab}\\mathrm{Tr}\\left(\\bar{\\rho}\\bar{L}_a \\bar{L}_b\\right),\n\\label{eq:BCB}\n\\end{equation}\nwhere $\\bar{\\rho}:=\\int p(\\bold{x})\\rho\\,\\mathrm{d}\\bold{x}$ and the operator $\\bar{L}_a$ is determined by the equation $\\int x_a p(\\bold{x})\\rho\\,\\mathrm{d}\\bold{x}=(\\bar{L}_a\\bar{\\rho}+\\bar{\\rho}\\bar{L}_a)\/2$. 
In the case of the single-parameter scenario, the inequality above reduces to\n\\begin{equation}\n\\bar{C}\\geq \\int p(x) x^2\\,\\mathrm{d}x-\\mathrm{Tr}(\\bar{\\rho}\\bar{L}^2)\n\\end{equation}\nand represents a bound which is always saturable: the optimal measurement corresponds to a projective measurement in the eigenbasis of $\\bar{L}$, while the corresponding eigenvalues represent the estimated values of the parameter. If the mean value $\\int p(x) x\\,\\mathrm{d}x$ is subtracted so that it vanishes, then the inequality above can be rewritten as $\\bar{C}\\geq \\delta^2 x-\\mathrm{Tr}(\\bar{\\rho}\\bar{L}^2)$ with $\\delta^2 x:=\\int p(x) x^2\\,\\mathrm{d}x-\\left(\\int p(x) x\\,\\mathrm{d}x\\right)^2$ the variance of $x$ under the prior distribution. In QuanEstimation, the bound given in Eq.~(\\ref{eq:BCB}) can be calculated via the following function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBCB(x,p,rho,W=[],eps=1e-8)\n\\end{lstlisting}\nHere the inputs {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the same as those in {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} and {\\fontfamily{bch}\\selectfont\\small\\itshape BayesCost()}. {\\fontfamily{bch}\\selectfont\\small\\itshape W} represents the weight matrix and the default value is the identity matrix.\n\n\\subsection{Bayesian Cram\\'{e}r-Rao bounds}\n\nIn the Bayesian scenarios, the quantum Cram\\'{e}r-Rao bounds and the Holevo Cram\\'{e}r-Rao bound are not appropriate for grasping the ultimate precision limits, as they are ignorant of the prior information. Instead, Bayesian Cram\\'{e}r-Rao bounds can be used. In these scenarios, the covariance matrix is redefined as\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\!=\\!\\int \\!p(\\bold{x})\\sum_y\\mathrm{Tr}(\\rho\\Pi_y)\n(\\hat{\\bold{x}}\\!-\\!\\bold{x})(\\hat{\\bold{x}}\\!-\\!\\bold{x})^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\end{equation}\nwhere the integral $\\int\\mathrm{d}\\bold{x}:=\\iiint\\mathrm{d}x_0\\mathrm{d}x_1\\cdots$. In such cases, one version of the Bayesian Cram\\'{e}r-Rao bound (BCRB) is of the form\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\int p(\\bold{x})\n\\left(B\\mathcal{I}^{-1}B+\\bold{b}\\bold{b}^{\\mathrm{T}}\\right)\\mathrm{d}\\bold{x},\n\\label{eq:BCRB_type1}\n\\end{equation}\nwhere $\\mathcal{I}$ is the CFIM, and $\\bold{b}=(b(x_0),b(x_1),\\dots)^{\\mathrm{T}}$ is the vector of biases, i.e., $b(x_a)=\\sum_y\\hat{x}_a p(y|\\bold{x})-x_a$ for each $x_a$ with $p(y|\\bold{x})$ the conditional probability. $B$ is a diagonal matrix with the $a$th entry $B_{aa}=1+[\\bold{b}']_{a}$. Here $\\bold{b}':=(\\partial_0 b(x_0),\\partial_1 b(x_1),\\dots)^{\\mathrm{T}}$. The quantum correspondence of this bound (BQCRB) reads\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq\\int p(\\bold{x})\n\\left(B\\mathcal{F}^{-1}B+\\bold{b}\\bold{b}^{\\mathrm{T}}\\right)\\mathrm{d}\\bold{x},\n\\label{eq:BQCRB_type1}\n\\end{equation}\nwhere $\\mathcal{F}$ is the QFIM of any type. 
As a matter of fact, there exists a similar version of\nEq.~(\\ref{eq:BCRB_type1}), which can be expressed by\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\mathcal{B}\\,\\mathcal{I}_{\\mathrm{Bayes}}^{-1}\n\\,\\mathcal{B}+\\int p(\\bold{x})\\bold{b}\\bold{b}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BCRB_type2}\n\\end{equation}\nwhere $\\mathcal{I}_{\\mathrm{Bayes}}=\\int p(\\bold{x})\\mathcal{I}\\mathrm{d}\\bold{x}$ is the average CFIM with\n$\\mathcal{I}$ the CFIM defined in Eq.~(\\ref{eq:CFIM}). $\\mathcal{B}=\\int p(\\bold{x})B\\mathrm{d}\\bold{x}$ is the\naverage of $B$. Its quantum correspondence reads\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\mathcal{B}\\,\\mathcal{F}_{\\mathrm{Bayes}}^{-1}\n\\,\\mathcal{B}+\\int p(\\bold{x})\\bold{b}\\bold{b}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BQCRB_type2}\n\\end{equation}\nwhere $\\mathcal{F}_{\\mathrm{Bayes}}=\\int p(\\bold{x})\\mathcal{F}\\mathrm{d}\\bold{x}$ is average QFIM with $\\mathcal{F}$\nthe QFIM of all types.\n\nAnother version of the Bayesian Cram\\'{e}r-Rao bound is of the form\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\int p(\\bold{x})\n\\mathcal{G}\\left(\\mathcal{I}_p+\\mathcal{I}\\right)^{-1}\\mathcal{G}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BCRB_type3}\n\\end{equation}\nand its quantum correspondence can be expressed by\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\int p(\\bold{x})\n\\mathcal{G}\\left(\\mathcal{I}_p+\\mathcal{F}\\right)^{-1}\\mathcal{G}^{\\mathrm{T}}\\mathrm{d}\\bold{x},\n\\label{eq:BQCRB_type3}\n\\end{equation}\nwhere the entries of $\\mathcal{I}_{p}$ and $\\mathcal{G}$ are defined by\n\\begin{equation}\n[\\mathcal{I}_{p}]_{ab}:=[\\partial_a \\ln p(\\bold{x})][\\partial_b \\ln p(\\bold{x})],\n\\label{eq:BayesIp}\n\\end{equation}\nand $\\mathcal{G}_{ab}:=[\\partial_b\\ln p(\\bold{x})][\\bold{b}]_a+B_{aa}\\delta_{ab}$. The derivations and thorough\ndiscussions of these bounds will be further discussed in an independent paper, which will be announced in a short time.\n\nThe functions in QuanEstimation to calculate $\\mathcal{I}_{\\mathrm{Bayes}}$ and $\\mathcal{F}_{\\mathrm{Bayes}}$ are:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBCFIM(x,p,rho,drho,M=[],eps=1e-8)\nBQFIM(x,p,rho,drho,LDtype=\"SLD\",eps=1e-8)\n\\end{lstlisting}\nAnd the functions for the calculations of BCRBs and BQCRBs are:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nBCRB(x,p,dp,rho,drho,M=[],b=[],db=[],\n btype=1,eps=1e-8)\nBQCRB(x,p,dp,rho,drho,b=[],db=[],btype=1,\n LDtype=\"SLD\",eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the same with those in the function {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()}. {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is\na (multidimensional) list of arrays representing the derivatives of the prior distribution, which is only essential when\n{\\fontfamily{bch}\\selectfont\\small\\itshape btype=3}. In the case that {\\fontfamily{bch}\\selectfont\\small\\itshape btype=1} and {\\fontfamily{bch}\\selectfont\\small\\itshape btype=2}, it could be set as {\\fontfamily{bch}\\selectfont\\small\\itshape []}.\n{\\fontfamily{bch}\\selectfont\\small\\itshape rho} and {\\fontfamily{bch}\\selectfont\\small\\itshape drho} are (multidimensional) lists representing the values of $\\rho$ and\n$\\partial_{\\bold{x}}\\rho$. 
For example, if the input {\\fontfamily{bch}\\selectfont\\small\\itshape x} includes three arrays, which are the values of $x_0$,\n$x_1$ and $x_2$ for the integral, then the $ijk$th entry of {\\fontfamily{bch}\\selectfont\\small\\itshape rho} and {\\fontfamily{bch}\\selectfont\\small\\itshape drho} are a matrix $\\rho$ and\na list $[\\partial_{0}\\rho,\\partial_{1}\\rho,\\partial_{2}\\rho]$ with respect to the values $[x_0]_i$, $[x_1]_j$, and $[x_2]_k$.\nHere $[x_0]_i$, $[x_1]_j$, and $[x_2]_k$ represent the $i$th, $j$th, and $k$th value in the first, second, and\nthird array in {\\fontfamily{bch}\\selectfont\\small\\itshape x}. As a matter of fact, if the users can provide specific functions of $H$ and\n$\\partial_{\\bold{x}}H$ on $\\bold{x}$, {\\fontfamily{bch}\\selectfont\\small\\itshape rho} and {\\fontfamily{bch}\\selectfont\\small\\itshape drho} can be alternatively generated via the\nfunctions {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} and {\\fontfamily{bch}\\selectfont\\small\\itshape Lindblad()} [or {\\fontfamily{bch}\\selectfont\\small\\itshape Kraus()}]. {\\fontfamily{bch}\\selectfont\\small\\itshape b} and {\\fontfamily{bch}\\selectfont\\small\\itshape db}\nare two lists of arrays representing $\\bold{b}$ and $\\bold{b}'$, and the default settings for both of them are zero vectors\n(unbiased). In {\\fontfamily{bch}\\selectfont\\small\\itshape BCRB()} the measurement is input via {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]}, and if it is empty, a set of rank-one\nSIC-POVM will be automatically applied, similar to that in {\\fontfamily{bch}\\selectfont\\small\\itshape CFIM()}. Moreover, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=1}, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=2},\nand {\\fontfamily{bch}\\selectfont\\small\\itshape btype=3} represent the calculation of Eqs.~(\\ref{eq:BCRB_type1}), (\\ref{eq:BCRB_type2}), and (\\ref{eq:BCRB_type3}).\nIn the meantime, in {\\fontfamily{bch}\\selectfont\\small\\itshape BQCRB()}, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=1}, {\\fontfamily{bch}\\selectfont\\small\\itshape btype=2}, and {\\fontfamily{bch}\\selectfont\\small\\itshape btype=3} represent the\ncalculation of Eqs.~(\\ref{eq:BQCRB_type1}), (\\ref{eq:BQCRB_type2}) and (\\ref{eq:BQCRB_type3}). Similar to {\\fontfamily{bch}\\selectfont\\small\\itshape QFIM()},\n{\\fontfamily{bch}\\selectfont\\small\\itshape LDtype=\" \"} here is the type of logarithmic derivatives, including three choices: {\\fontfamily{bch}\\selectfont\\small\\itshape \"SLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"RLD\"},\nand {\\fontfamily{bch}\\selectfont\\small\\itshape \"LLD\"}. Recently, Ref.~\\cite{Liu2016} provide an optimal biased bound based on the type-1 BQCRB in the case of\nsingle-parameter estimation, which can be calculated in QuanEstimation via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nOBB(x,p,dp,rho,drho,d2rho,LDtype=\"SLD\",\n eps=1e-8)\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is an array containing the derivatives $\\partial_x p$. {\\fontfamily{bch}\\selectfont\\small\\itshape d2rho} is a list\ncontaining the second order derivative of the density matrix on the unknown parameter.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8cm]{Fig_bayes.pdf}\n\\caption{(a) The performance of classical Bayesian bounds, including BCRB of type 1 (solid\nred line), type 2 (dashed green line), type 3 (dotted blue line), and VTB (dash-dotted\nblack line). 
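To illustrate the input conventions described above, the following is a minimal sketch of calling {\fontfamily{bch}\selectfont\small\itshape BCRB()} and {\fontfamily{bch}\selectfont\small\itshape BQCRB()} for a single unknown parameter. The single-qubit model used here (a relative phase $xT$ accumulated on $|+\rangle$) is purely illustrative and is not the Hamiltonian of Eq.~(\ref{eq:Bayes_demo}); the import statement and the container conventions (a list with one array for {\fontfamily{bch}\selectfont\small\itshape x}, an array for {\fontfamily{bch}\selectfont\small\itshape p}, and lists for {\fontfamily{bch}\selectfont\small\itshape rho} and {\fontfamily{bch}\selectfont\small\itshape drho}) are assumptions based on the description above:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
import numpy as np
from quanestimation import BCRB, BQCRB  # assumed import path

# values of x for the integral and a Gaussian prior on the grid
x = np.linspace(-np.pi/2, np.pi/2, 200)
mu, eta = 0.0, 0.2
p = np.exp(-(x - mu)**2/(2*eta**2))
p = p/np.trapz(p, x)

# illustrative parameterization: rho(x) = |psi><psi|, |psi> = (|0> + e^{-ixT}|1>)/sqrt(2)
T = 1.0
rho, drho = [], []
for xi in x:
    r = 0.5*np.array([[1.0, np.exp(1j*xi*T)], [np.exp(-1j*xi*T), 1.0]])
    dr = 0.5*np.array([[0.0, 1j*T*np.exp(1j*xi*T)], [-1j*T*np.exp(-1j*xi*T), 0.0]])
    rho.append(r)
    drho.append([dr])  # one derivative per unknown parameter

f_BCRB = BCRB([x], p, [], rho, drho, M=[], btype=1)  # SIC-POVM used by default
f_BQCRB = BQCRB([x], p, [], rho, drho, btype=1)
\end{lstlisting}
Here {\fontfamily{bch}\selectfont\small\itshape dp} is left empty since {\fontfamily{bch}\selectfont\small\itshape btype=1} is used; for {\fontfamily{bch}\selectfont\small\itshape btype=3} the derivatives of the prior would additionally be provided.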
(b) The performance of quantum Bayesian bounds, including BQCRB\nof type 1 (solid red line), type 2 (dashed green line), type 3 (dotted blue line),\nQVTB (dash-dotted black line), and QZZB (solid cyan pentagram line). The parameters\n$\\mu=0$ and $\\kappa=\\pi\/2$ in the plots. Planck units are applied here. }\n\\label{fig:bayes}\n\\end{figure}\n\nAnother famous Bayesian version of Cram\\'{e}r-Rao bound is introduced by Van Trees in 1968~\\cite{vanTrees1968},\nwhich is known as the Van Trees bound (VTB). The VTB is expressed by\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\left(\\mathcal{I}_{\\mathrm{prior}}\n+\\mathcal{I}_{\\mathrm{Bayes}}\\right)^{-1},\n\\end{equation}\nwhere $\\mathcal{I}_{\\mathrm{prior}}=\\int p(\\bold{x})\\mathcal{I}_{p}\\mathrm{d}\\bold{x}$ is the CFIM for $p(\\bold{x})$\nwith $\\mathcal{I}_p$ defined in Eq.~(\\ref{eq:BayesIp}). In the derivation, the assumption\n\\begin{equation}\n\\int\\partial_{a}\\left[b(x_b)p(\\bold{x})\\right]\\mathrm{d}\\bold{x}=0\n\\end{equation}\nis applied for all subscripts $a$ and $b$. In 2011, Tsang, Wiseman and Caves~\\cite{Tsang2011} provided a quantum\ncorrespondence of the VTB (QVTB). The Tsang-Wiseman-Caves bound is of the form\n\\begin{equation}\n\\mathrm{cov}(\\hat{\\bold{x}},\\{\\Pi_y\\})\\geq \\left(\\mathcal{I}_{\\mathrm{prior}}\n+\\mathcal{F}_{\\mathrm{Bayes}}\\right)^{-1}.\n\\end{equation}\nThe functions in QuanEstimation for the calculation of VTB and QVTB are:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nVTB(x,p,dp,rho,drho,M=[],eps=1e-8)\nQVTB(x,p,dp,rho,drho,LDtype=\"SLD\",eps=1e-8)\n\\end{lstlisting}\nHere {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is a (multidimensional) list of arrays representing the derivatives of the prior distribution.\nFor example, if {\\fontfamily{bch}\\selectfont\\small\\itshape x} includes 3 arrays, which are the values of $x_0$, $x_1$ and $x_2$ for the integral,\nthen the $ijk$th entry of {\\fontfamily{bch}\\selectfont\\small\\itshape dp} is an array $(\\partial_0 p,\\partial_1 p,\\partial_2 p)$ with respect to\nvalues $[x_0]_i$, $[x_1]_j$ and $[x_2]_k$.\n\n\\emph{Example.} Let us still take the Hamiltonian in Eq.~(\\ref{eq:Bayes_demo}) and initial state $|+\\rangle$ as\nan example. $x$ is still the parameter to be estimated. The prior distribution is taken as a Gaussian distribution\n\\begin{equation}\np(x)=\\frac{1}{c\\eta\\sqrt{2\\pi}}e^{-\\frac{(x-\\mu)^2}{2\\eta^2}}\n\\label{eq:Bayes_prior}\n\\end{equation}\nin a finite regime $[-\\pi\/2, \\pi\/2]$, where $\\mu$ is the expectation, $\\eta$ is the standard deviation, and\n$c=\\frac{1}{2}\\big[\\mathrm{erf}(\\frac{\\pi-2\\mu}{2\\sqrt{2}\\eta})+\\mathrm{erf}(\\frac{\\pi+2\\mu}{2\\sqrt{2}\\eta})\\big]$\nis the normalized coefficient. Here $\\mathrm{erf}(x):=\\frac{2}{\\sqrt{\\pi}}\\int^x_0 e^{-t^2}\\mathrm{d}t$ is the error\nfunction. The measurement in the classical bounds is taken as a set of SIC-POVM. The performance of the classical and\nquantum Bayesian bounds are given in Figs.~\\ref{fig:bayes}(a) and \\ref{fig:bayes}(b). As shown in Fig.~\\ref{fig:bayes}(a),\nin this case BCRB of type 1 (solid red line) and type 2 (dashed green line) are tighter than type 3 (dotted blue\nline) and VTB (dash-dotted black line) when the deviation $\\eta$ is small. With the increase of $\\eta$, BCRB of type 1\nand type 3 coincide with each other, so do BCRB of type 2 and VTB. Furthermore, BCRB of type 1 and type 3 are always\ntighter than type 2 and VTB in this example. 
The performance of the quantum Bayesian bounds is similar, as shown in
Fig.~\ref{fig:bayes}(b). The BQCRBs of type 1 (solid red line) and type 2 (dashed green line) are tighter than that of type 3
(dotted blue line) and the QVTB (dash-dotted black line) when $\eta$ is small, and the BQCRB of type 1 (type 2) and that of type 3
(QVTB) coincide with each other for a large $\eta$.

\subsection{Quantum Ziv-Zakai bound}

Apart from the Cram\'{e}r-Rao bounds, the Ziv-Zakai bound is another useful bound in Bayesian scenarios. It was
first provided by Ziv and Zakai in 1969~\cite{Ziv1969} for single-parameter estimation and then extended to
linear combinations of multiple parameters by Bell et al.~\cite{Bell1997}, which is also referred to
as the Bell-Ziv-Zakai bound. In 2012, Tsang provided a quantum correspondence of the Ziv-Zakai bound~\cite{Tsang2012}
(QZZB), and in 2015 Berry et al.~\cite{Berry2015} provided a quantum correspondence of the Bell-Ziv-Zakai bound.
In the QZZB, the variance $\mathrm{var}(\hat{x},\{\Pi_y\})$, a diagonal entry of the covariance matrix, satisfies the
following inequality
\begin{eqnarray}
\mathrm{var}(\hat{x},\{\Pi_y\}) &\geq & \frac{1}{2}\int_0^\infty \mathrm{d}\tau\tau
\mathcal{V}\int_{-\infty}^{\infty} \mathrm{d}x\min\!\left\{p(x), p(x+\tau)\right\} \nonumber \\
& & \times\left(1-\frac{1}{2}||\rho(x)-\rho(x+\tau)||\right),
\end{eqnarray}
where $||\cdot||$ is the trace norm. $\mathcal{V}$ is the ``valley-filling'' operator
satisfying $\mathcal{V}f(\tau)=\max_{h\geq 0}f(\tau+h)$. In the numerical calculations, the prior distribution has
to be limited or truncated to a finite regime $[\alpha,\beta]$, i.e., $p(x)=0$ when $x>\beta$ or $x<\alpha$, and
then the QZZB reduces to
\begin{eqnarray}
\mathrm{var}(\hat{x},\{\Pi_y\}) &\geq & \frac{1}{2}\int_0^{\beta-\alpha}\mathrm{d}\tau\tau
\mathcal{V}\int_{\alpha}^{\beta}\mathrm{d}x\min\left\{p(x), p(x+\tau)\right\} \nonumber \\
& & \times\left(1-\frac{1}{2}||\rho(x)-\rho(x+\tau)||\right).
\end{eqnarray}
The function in QuanEstimation for the calculation of the QZZB is:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
QZZB(x,p,rho,eps=1e-8)
\end{lstlisting}
The performance of the QZZB is also demonstrated with the Hamiltonian in Eq.~(\ref{eq:Bayes_demo}) and the prior
distribution in Eq.~(\ref{eq:Bayes_prior}), as shown in Fig.~\ref{fig:bayes}(b). In this example, its performance
(solid cyan pentagram line) is worse than those of the BQCRB and QVTB. However, this tightness relation may dramatically change
in other systems or with other prior distributions. Hence, in a specific scenario, using QuanEstimation to perform
a thorough comparison is a good way to find the tightest tool for the scheme design.

\section{Metrological resources}
\label{sec:resource}

The improvement of precision usually means a higher consumption of resources. For example, repeating an experiment
makes the deviation of the unknown parameter scale as $1/\sqrt{n}$ (with $n$ the repetition
number) in theory. The repetition number or the total time is thus the resource responsible for this improvement.
Constraints on quantum resources are an important aspect in the study of quantum parameter estimation, and are crucial
to reveal the quantum advantage achievable in practical protocols. The numerical calculations of some typical resources,
such as various types of entropy and the concurrence, are already available in QuTiP.
Hence, we do not need to rewrite\nthem in QuanEstimation. Currently, two additional metrological resources, spin squeezing and the time to reach\na given precision limit are provided in the package. The spin squeezing can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nSpinSqueezing(rho,basis=\"Dicke\",output=\"KU\")\n\\end{lstlisting}\nHere the input {\\fontfamily{bch}\\selectfont\\small\\itshape rho} is a matrix representing the state. The basis of the state can be adjusted via\n{\\fontfamily{bch}\\selectfont\\small\\itshape basis=\" \"}. Two options {\\fontfamily{bch}\\selectfont\\small\\itshape \"Dicke\"} and {\\fontfamily{bch}\\selectfont\\small\\itshape \"Pauli\"} represent the Dicke basis\nand the original basis of each spin. {\\fontfamily{bch}\\selectfont\\small\\itshape basis=\"Pauli\"} here is equivalent to choose {\\fontfamily{bch}\\selectfont\\small\\itshape basis=\"uncoupled\"}\nin the function {\\fontfamily{bch}\\selectfont\\small\\itshape jspin()} in QuTiP. Two types of spin squeezing can be calculated in this function.\n{\\fontfamily{bch}\\selectfont\\small\\itshape output=\"KU\"} means the output is the one given by Kitagawa and Ueda~\\cite{Kitagawa1993}, and\n{\\fontfamily{bch}\\selectfont\\small\\itshape output=\"WBIMH\"} means the output is the one given by Wineland et al.~\\cite{Wineland1992}.\n\nThe time to reach a given precision limit can be calculated via the function:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]\nTargetTime(f,tspan,func,*args,**kwargs)\n\\end{lstlisting}\nNotice that {\\fontfamily{bch}\\selectfont\\small\\itshape Lindblad()} and {\\fontfamily{bch}\\selectfont\\small\\itshape Lindblad.expm()} should be first called before using this function.\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape f} is a float number representing the given value of the precision limit. The time is searched\nwithin the regime defined by the input {\\fontfamily{bch}\\selectfont\\small\\itshape tspan} (an array). {\\fontfamily{bch}\\selectfont\\small\\itshape func} is the handle of a function\n{\\fontfamily{bch}\\selectfont\\small\\itshape func()} depicting the precision limit. {\\fontfamily{bch}\\selectfont\\small\\itshape *args} is the corresponding input parameters, in\nwhich {\\fontfamily{bch}\\selectfont\\small\\itshape rho} and {\\fontfamily{bch}\\selectfont\\small\\itshape drho} should be the output of {\\fontfamily{bch}\\selectfont\\small\\itshape Lindblad.expm()}. {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}\nis the keyword arguments in {\\fontfamily{bch}\\selectfont\\small\\itshape func()}. The difference between input parameters and keyword arguments in\nQuanEstimation is that the keyword arguments have default values and thus one does not have to assign values to\nthem when calling the function. Currently, all the asymptotic bounds discussed in Sec.~\\ref{sec:tools} are available\nto be called here.\n\n\n\\section{Control optimization}\n\\label{sec:control_opt}\n\nQuantum control is a leading approach in quantum metrology to achieve the improvement of measurement\nprecision and boost the resistance to decoherence. This is possible thanks to high controllability of typical\nquantum metrological setups. 
A paradigmatic controllable Hamiltonian is of the form
\begin{equation}
H=H_0(\bold{x})+\sum^K_{k=1}u_k(t) H_k,
\end{equation}
where $H_0(\bold{x})$ is the free Hamiltonian containing the unknown parameters $\bold{x}$ and $H_k$ is the
$k$th control Hamiltonian with the corresponding control amplitude $u_k(t)$. In quantum parameter estimation,
the aim of control is to improve the precision of the unknown parameters. Hence, natural choices for the
objective function $f$ are the various metrological bounds. The quantum Cram\'{e}r-Rao bounds are the easiest to calculate
and hence will typically be the first choice. In single-parameter estimation, the QFI or CFI can be taken as
the objective function, depending on whether the measurement can be optimized or is fixed. In the multiparameter scenario,
the target function can be $\mathrm{Tr}(W\mathcal{F}^{-1})$ or $\mathrm{Tr}(W\mathcal{I}^{-1})$. In QuanEstimation,
$1/\mathrm{Tr}(W\mathcal{F}^{-1})$ and $1/\mathrm{Tr}(W\mathcal{I}^{-1})$ are used as the objective functions
instead since the maximization is more precise than the minimization in practice. In the following, this technical aspect
will not be brought up again for conciseness.

Searching the optimal controls in order to achieve the maximum or minimum values of an objective function is the core
task in quantum control. Most existing optimization algorithms are capable of providing useful control strategies in
quantum parameter estimation. The gradient-based algorithms usually perform well in small-scale systems. For complex
problems where the gradient-based methods are more challenging or even fail to work at all, gradient-free algorithms
are a good alternative. Here we introduce several control algorithms in quantum parameter estimation that have been
added to our package and give some illustrations.

First, we present the specific codes in QuanEstimation for the execution of the control optimization:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
control = ControlOpt(savefile=False,
    method="auto-GRAPE",**kwargs)
control.dynamics(tspan,rho0,H0,dH,Hc,
    decay=[],ctrl_bound=[])
control.QFIM(W=[],LDtype="SLD")
control.CFIM(M=[],W=[])
control.HCRB(W=[])
\end{lstlisting}
The input {\fontfamily{bch}\selectfont\small\itshape tspan} is an array representing the time for the evolution. {\fontfamily{bch}\selectfont\small\itshape rho0} is a matrix
representing the density matrix of the initial state. {\fontfamily{bch}\selectfont\small\itshape H0} is a matrix representing the free Hamiltonian
$H_0(\bold{x})$ and {\fontfamily{bch}\selectfont\small\itshape Hc} is a list containing the control Hamiltonians, i.e., $[H_1,H_2,\dots]$.
{\fontfamily{bch}\selectfont\small\itshape dH} is a list of matrices representing $\partial_{\bold{x}}H_0$. In the case that only one entry
exists in {\fontfamily{bch}\selectfont\small\itshape dH}, the objective functions in {\fontfamily{bch}\selectfont\small\itshape control.QFIM()} and {\fontfamily{bch}\selectfont\small\itshape control.CFIM()}
are the QFI and CFI, and if more than one entry is input, the objective functions are $\mathrm{Tr}(W\mathcal{F}^{-1})$
and $\mathrm{Tr}(W\mathcal{I}^{-1})$.
Different types of QFIM can be selected as the objective function via the
variable {\fontfamily{bch}\selectfont\small\itshape LDtype=" "}, which includes three options {\fontfamily{bch}\selectfont\small\itshape "SLD"}, {\fontfamily{bch}\selectfont\small\itshape "RLD"}, and
{\fontfamily{bch}\selectfont\small\itshape "LLD"}. The measurement for CFI/CFIM is input via {\fontfamily{bch}\selectfont\small\itshape M=[]} in {\fontfamily{bch}\selectfont\small\itshape control.CFIM()} and
the default value is a SIC-POVM. The weight matrix $W$ can be manually input via {\fontfamily{bch}\selectfont\small\itshape W=[]}, and the default
value is the identity matrix.

In some cases, the control amplitudes have to be limited to a regime, for example $[a,b]$, which
can be realized by setting {\fontfamily{bch}\selectfont\small\itshape ctrl\_bound=[a,b]}. If no value is input, the default regime is $[-\infty,\infty]$.
{\fontfamily{bch}\selectfont\small\itshape decay=[]} is a list of decay operators and corresponding decay rates for the master equation in
Eq.~(\ref{eq:mastereq}) and its input rule is {\fontfamily{bch}\selectfont\small\itshape decay=[[Gamma\_1,gamma\_1],...]}. The default value
for {\fontfamily{bch}\selectfont\small\itshape savefile} is {\fontfamily{bch}\selectfont\small\itshape False}, which means only the controls obtained in the final episode
will be saved in the file named ``controls.csv'', and if it is set to be {\fontfamily{bch}\selectfont\small\itshape True}, the controls obtained
in all episodes will be saved in this file. The values of QFI, CFI, $\mathrm{Tr}(W\mathcal{F}^{-1})$ or
$\mathrm{Tr}(W\mathcal{I}^{-1})$ in all episodes will be saved regardless of this setting in the file named
``f.csv''. Another file named ``total\_reward.csv'' will also be generated to record the total rewards in all episodes
when DDPG is chosen as the optimization method. Here the word ``episode'' refers to a round of
update of the objective function during the optimization.

\begin{table}[tp]
\begin{tabular}{c|c|c|c}
\hline
\hline
Algorithms & method= & \multicolumn{2}{c}{~**kwargs and default values~}\\
\hline
\multirow{6}{*}{auto-GRAPE} & \multirow{6}{*}{"auto-GRAPE"} & "Adam" & True \\
\multirow{6}{*}{(GRAPE)} & \multirow{6}{*}{("GRAPE")} & "ctrl0" & [] \\
 & & "max\_episode" & 300 \\
 & & "epsilon" & 0.01 \\
 & & "beta1" & 0.90 \\
 & & "beta2" & 0.99 \\
\hline
\multirow{7}{*}{PSO} & \multirow{7}{*}{"PSO"} & "p\_num" & 10 \\
 & & "ctrl0" & [] \\
 & & "max\_episode" & [1000,100] \\
 & & "c0" & 1.0 \\
 & & "c1" & 2.0 \\
 & & "c2" & 2.0 \\
 & & "seed" & 1234 \\
\hline
\multirow{6}{*}{DE} & \multirow{6}{*}{"DE"} & "p\_num" & 10 \\
 & & "ctrl0" & [] \\
 & & "max\_episode" & 1000 \\
 & & "c" & 1.0 \\
 & & "cr" & 0.5 \\
 & & "seed" & 1234 \\
\hline
\multirow{5}{*}{DDPG} & \multirow{5}{*}{"DDPG"} & "ctrl0" & [] \\
 & & "max\_episode" & 500 \\
 & & "layer\_num" & 3 \\
 & & "layer\_dim" & 200 \\
 & & "seed" & 1234 \\
\hline
\hline
\end{tabular}
\caption{Available control methods in QuanEstimation and corresponding
default parameter settings.
Notice that auto-GRAPE and GRAPE are not\navailable when {\\fontfamily{bch}\\selectfont\\small\\itshape control.HCRB()} is called.}\n\\label{table:ctrl_paras}\n\\end{table}\n\nThe switch of optimization algorithms can be realized by {\\fontfamily{bch}\\selectfont\\small\\itshape method=\" \"}, and the corresponding parameters\ncan be set via {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}. All available algorithms in QuanEstimation are given in Table~\\ref{table:ctrl_paras}\ntogether with the corresponding default parameter settings. Notice that in some algorithms maybe more than one set\nof guessed controls are needed, and if not enough sets are input then random-value controls will be generated\nautomatically to fit the number. In the meantime, if excessive number of sets are input, only the suitable number\nof controls will be used. {\\fontfamily{bch}\\selectfont\\small\\itshape LDtype=\"SLD\"} is the only choice when {\\fontfamily{bch}\\selectfont\\small\\itshape method=\"GRAPE\"} as the QFIMs\nbased on RLD and LLD are unavailable to be the objective function for GRAPE in the package. All the aforementioned\nalgorithms will be thoroughly introduced and discussed with examples in the following subsections.\n\n\\begin{algorithm*}[tp]\n\\SetArgSty{}\n\\caption{GRAPE} \\label{algorithm:grape}\nInitialize the control amplitude $u_k(t)$ for all $t$ and $k$; \\\\\n\\For {episode=1, $M$}{\nReceive initial state $\\rho_{0}$ ($\\rho_{\\mathrm{in}}$); \\\\\n\\For {$t=1, T$}{\nEvolve with the control $\\rho_t=e^{\\Delta t\\mathcal{L}_t} \\rho_{t-1}$; \\\\\nCalculate the derivatives $\\partial_\\bold{x}\\rho_t=-i\\Delta t [\\partial_\\bold{x} H_0(\\bold{x})]^{\\times}\\rho_t\n+e^{\\Delta t\\mathcal{L}_t} \\partial_\\bold{x} \\rho_{t-1}$; \\\\\nSave $\\rho_t$ and $\\partial_\\bold{x}\\rho_t$; \\\\\n\\For {$k=1, K$}{\nCalculate $\\frac{\\delta \\rho_t}{\\delta u_k(t)}=-i\\Delta t H^{\\times}_k\\rho_t$,\n$\\partial_\\bold{x}\\!\\left(\\!\\frac{\\delta\\rho_t}{\\delta u_k(t)}\\!\\right)=\n-i\\Delta t H^{\\times}_k(\\partial_{\\bold{x}}\\rho_t)$; \\\\\n\\For {$j=t-1, 1$}{\nCalculate $\\frac{\\delta \\rho_t}{\\delta u_k(j)}=e^{\\Delta t\\mathcal{L}_t}\n\\frac{\\delta \\rho_{t-1}}{\\delta u_k(j)}$,\n$\\partial_\\bold{x}\\!\\left(\\frac{\\delta\\rho_t}{\\delta u_k(j)}\\right)=\\left(\\partial_\\bold{x}\ne^{\\Delta t\\mathcal{L}_t}\\right)\\frac{\\delta\\rho_{t-1}}{\\delta u_k(j)}\n+e^{\\Delta t\\mathcal{L}_t}\\partial_\\bold{x}\\!\\left(\\frac{\\delta \\rho_{t-1}}\n{\\delta u_k(j)}\\right)$;}\nSave $\\frac{\\delta \\rho_t}{\\delta u_k(t)}$, $\\partial_\\bold{x}\\!\\left(\\frac{\\delta\\rho_t}\n{\\delta u_k(t)}\\right)$\nand all $\\frac{\\delta \\rho_t}{\\delta u_k(j)}$,\n$\\partial_\\bold{x}\\!\\left(\\frac{\\delta\\rho_t}{\\delta u_k(j)}\\right)$;\n}}\nCalculate the SLDs for all $\\bold{x}$ and the objective function $f(T)$; \\\\\n{\\For {$t=1, T$}{\n\\For {$k=1, K$}{\nCalculate the gradient $\\frac{\\delta f(T)}{\\delta u_k(t)}$ with $\\frac{\\delta\\rho_T}\n{\\delta u_k(t)}$ and $\\partial_\\bold{x}\\!\\left(\\frac{\\delta \\rho_T}{\\delta u_k(t)}\\right)$; \\\\\nUpdate control $u_k(t)\\!\\leftarrow\\! u_k(t)\\!+\\!\\epsilon\\frac{\\delta f(T)}{\\delta u_k(t)}$.\n}}}\n}\nSave the controls $\\{u_k\\}$ and corresponding $f(T)$.\n\\end{algorithm*}\n\nApart from the QFIM and CFIM, the HCRB can also be taken as the objective function in the case of multiparameter\nestimation, which can be realized by calling {\\fontfamily{bch}\\selectfont\\small\\itshape control.HCRB()}. 
Notice that auto-GRAPE and GRAPE are
not available in {\fontfamily{bch}\selectfont\small\itshape method=" "} here as the calculation of the HCRB is performed via optimization (semidefinite
programming), not direct calculation. Due to the equivalence between the HCRB and the quantum Cram\'{e}r-Rao bound in
single-parameter estimation, if {\fontfamily{bch}\selectfont\small\itshape control.HCRB()} is called in this case, the entire program will
be terminated and a reminder will be printed to prompt the users to invoke {\fontfamily{bch}\selectfont\small\itshape control.QFIM()} instead.


\subsection{Gradient ascent pulse engineering}

The gradient ascent pulse engineering algorithm (GRAPE) was developed by Khaneja et al.~\cite{Khaneja2005}
in 2005 for the design of pulse sequences in nuclear magnetic resonance systems, and was then applied to
quantum parameter estimation for the generation of optimal controls~\cite{Liu2017a,Liu2017b}, in which the
gradients of the objective function $f(T)$ at a fixed time $T$ were obtained analytically. In the pseudocode given
in Ref.~\cite{Liu2022}, the propagators between any two time points have to be saved, which would occupy a large
amount of memory during the computation and make it difficult to deal with high-dimensional Hamiltonians or long-time
evolutions. To solve this problem, a modified pseudocode is provided in Algorithm~\ref{algorithm:grape}.
In this modified version, after obtaining the evolved state $\rho_t$ and $\partial_{\bold{x}}\rho_t$, the gradient
$\delta\rho_t/\delta u_k(t)$ and its derivatives with respect to $\bold{x}$ are calculated via the equations
\begin{equation}
\frac{\delta\rho_t}{\delta u_k(t)}=-i\Delta t H^{\times}_k(\rho_t)
\end{equation}
with $\Delta t$ a small time interval, $H^{\times}_k(
\cdot)=[H_k,\cdot]$ the commutator between $H_{k}$ and other
operators, and
\begin{equation}
\partial_\bold{x}\!\left(\frac{\delta\rho_t}{\delta u_k(t)}\!\right)
=-i\Delta t H^{\times}_k(\partial_\bold{x}\rho_t).
\end{equation}
The gradients $\delta\rho_t/\delta u_k(j)$ ($j<t$) and their derivatives with respect to $\bold{x}$ are then
obtained iteratively from the quantities at the previous time step via
$\frac{\delta\rho_t}{\delta u_k(j)}=e^{\Delta t\mathcal{L}_t}\frac{\delta\rho_{t-1}}{\delta u_k(j)}$ and
$\partial_\bold{x}\!\left(\frac{\delta\rho_t}{\delta u_k(j)}\right)=\left(\partial_\bold{x}
e^{\Delta t\mathcal{L}_t}\right)\frac{\delta\rho_{t-1}}{\delta u_k(j)}
+e^{\Delta t\mathcal{L}_t}\partial_\bold{x}\!\left(\frac{\delta\rho_{t-1}}{\delta u_k(j)}\right)$,
so that only the quantities at the previous time step need to be stored.

Apart from GRAPE, in which the gradients are derived analytically, QuanEstimation also provides auto-GRAPE,
where the gradients are evaluated via automatic differentiation (AD). The AD in the package is realized with
the Julia package Zygote, and the corresponding pseudocode is given in Algorithm~\ref{algorithm:autogrape}.
Two realizations of AD for the logarithmic-derivative related objective functions (denoted as M1 and M2 in
Table~\ref{table:auto}) are implemented and compared; the second one, described in the following, is taken
as the default choice in the package.

\begin{algorithm}[tp]
\SetArgSty{}
\caption{auto-GRAPE} \label{algorithm:autogrape}
Initialize the control amplitude $u_k(t)$ for all $t$ and $k$; \\
\For {episode=1, $M$}{
Receive initial state $\rho_{0}$ ($\rho_{\mathrm{in}}$); \\
\For {$t=1, T$}{
Evolve with the control $\rho_t=e^{\Delta t\mathcal{L}_t} \rho_{t-1}$; \\
Calculate the derivatives $\partial_\bold{x} \rho_t=-i\Delta t [\partial_\bold{x} H_0(\bold{x})]^{\times}\rho_t
+e^{\Delta t\mathcal{L}_t} \partial_\bold{x} \rho_{t-1}$; \\
Save $\rho_t$ and $\partial_\bold{x} \rho_t$;\\
}
Calculate the SLD and objective function $f(T)$. \\
Calculate the gradient $\frac{\delta f(T)}{\delta u_k(t)}$ with the auto-differential method
for all $t$ and $k$.\\
{\For {$t=1, T$}{
\For {$k=1, K$}{
Update control $u_k(t)\!\leftarrow\!
u_k(t)\\!+\\!\\epsilon\\frac{\\delta f(T)}{\\delta u_k(t)}$.\n}}}\n}\nSave the controls $\\{u_k\\}$ and corresponding $f(T)$.\n\\end{algorithm}\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_ADillus.pdf}\n\\caption{The schematic of chain rules in automatic differentiation with the logarithmic\nderivative related functions as the objective function.}\n\\label{fig:AD_illus}\n\\end{figure}\n\n\\begin{table}[tp]\n\\def1.15{1.15}\n\\begin{tabular}{c|c|c|c|c}\n\\hline\n\\hline\n\\multirow{3}{*}{$N$} & \\multicolumn{2}{c|}{M1} & \\multicolumn{2}{c}{M2}\\\\\n\\cline{2-5}\n& ~~computing~~& ~~~~memory~~~~& ~~computing~~ &~~~~memory~~~~\\\\\n& time & allocation & time & allocation \\\\\n\\hline\n$2$ & 4.46\\,$\\mu$s & 2.99\\,KB & 5.14\\,$\\mu$s & 2.24\\,KB \\\\\n$2^{2}$ & 18.09\\,$\\mu$s & 17.01\\,KB & 11.17\\,$\\mu$s & 5.46 \\,KB \\\\\n$2^{3}$ & 257.65\\,$\\mu$s & 217.63\\,KB & 35.84\\,$\\mu$s & 18.79\\,KB \\\\\n$2^{4}$ & 4.55\\,ms & 3.34\\,MB & 151.51\\,$\\mu$s & 90.18\\,KB \\\\\n$2^{5}$ & 174.61\\,ms & 53.01\\,MB & 962.17\\,$\\mu$s & 501.85\\,KB \\\\\n$2^{6}$ & 9.45\\,s & 846.18\\,MB & 11.05\\,ms & 3.31 \\,MB \\\\\n$2^{7}$ & 6151.51\\,s & 137.95\\,GB & 45.70\\,ms & 230.98\\,MB \\\\\n$2^{8}$ & - & - & 347.50\\,ms & 1.73\\,GB \\\\\n$2^{9}$ & - & - & 3.29\\,s & 13.36\\,GB \\\\\n$2^{10}$ & - & - & 41.51\\,s & 105.08\\,GB \\\\\n\\end{tabular}\n\\begin{ruledtabular}\n\\begin{tabular}{ccccccc}\n$\\omega T$ & 5 & 10 & 15 & 20 & 30 & 40\\\\\\specialrule{0.05em}{0pt}{3pt}\nGRAPE & 5.23\\,s & 21.75\\,s & 44.95\\,s & 71.00\\,s &178.56\\,s &373.89\\,s \\\\\nauto-GRAPE & 0.32\\,s & 0.77\\,s & 1.45\\,s & 2.19\\,s & 4.14\\,s & 7.00\\,s \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\caption{Upper table: comparison of the average computing time and memory\nallocation for the calculation of the gradient of QFI between two realization\nmethods of AD. M1 and M2 represent the first and second methods. $N$ is the\ndimension of the density matrix. The density matrix and its derivative are\ngenerated randomly in the test. Lower table: comparison of the average\ncomputing time per episode between GRAPE and auto-GRAPE with different target\ntime $T$. Parallel computing is not applied here. KB, MB, and GB represent\nKilobyte, Megabyte, and Gigabyte, respectively.}\n\\label{table:auto}\n\\end{table}\n\nThe core of AD is utilizing the chain rules to evaluate the derivatives of the objective function. As illustrated in\nFig.~\\ref{fig:AD_illus}, in AD the value of the objective function $f$ is evaluated from left to right (red arrows),\nand the derivatives are calculated backwards (blue arrows), which is also called pullback in the language of\nAD. In our case, the differentiation of $f$ on a control amplitude $u_k$ needs to be evaluated through all three\npaths, from $f$ to $\\rho$, from $f$ to $\\partial_{\\bold{x}}\\rho$ (if $f$ is a function of $\\partial_{\\bold{x}}\\rho$)\nand from $f$ to $G$ to $L$. Here $L$ represents the SLDs of all parameters and $G:=G(L)=G(\\rho,\\partial_{\\bold{x}}\\rho)$\ncould be any intermediate function. For example, the contribution of the path from $f$ to $\\rho$ to the derivative\n$\\mathrm{d}f\/\\mathrm{d}u_k$ is $\\frac{\\partial f}{\\partial \\rho}\\frac{\\partial \\rho}{\\partial u_k}$. Notice that\nhere $\\partial f\/\\partial \\rho$ is a formal derivative. 
The paths to $\\rho$ and $\\partial_{\\bold{x}}\\rho$ can be\nroutinely solved in Zygote, however, the path to $L$ cannot be solved due to the entry-by-entry calculation of\nSLD in Eq.~(\\ref{eq:SLD_eigen}), which causes the difficulty to generate $\\partial L\/\\partial \\rho$\nand $\\partial L\/\\partial(\\partial_{\\bold{x}}\\rho)$, and therefore $\\partial G\/\\partial\\rho$\nand $\\partial G\/\\partial(\\partial_{\\bold{x}}\\rho)$ cannot be obtained. The chain rules in AD cannot be applied then.\nHence, we need to manually provide $\\partial G\/\\partial \\rho$ and $\\partial G\/\\partial (\\partial_{\\bold{x}}\\rho)$ to\nlet AD work in our case. To do it, one should first know that the total differentiation $\\mathrm{d}G_{\\alpha\\beta}$\n(the $\\alpha\\beta$th entry of $\\mathrm{d}G$) can be evaluated via the equation\n\\begin{equation}\n\\mathrm{d}G_{\\alpha\\beta}=\\sum_{ij}\\frac{\\partial G_{\\alpha\\beta}}{\\partial L_{ij}}\\mathrm{d} L_{ij}\n+\\frac{\\partial G_{\\alpha\\beta}}{\\partial (L_{ij})^{*}}\\mathrm{d} (L_{ij})^{*},\n\\end{equation}\nwhich can be written into a more compact matrix form\n\\begin{equation}\n\\mathrm{d}G_{\\alpha\\beta}=\\mathrm{Tr}\\!\\left(\\left(\\frac{\\partial G_{\\alpha\\beta}}\n{\\partial L}\\right)^{\\mathrm{T}}\\mathrm{d}L+\\left(\\frac{\\partial G_{\\alpha\\beta}}\n{\\partial L^{*}}\\right)^{\\mathrm{T}}\\mathrm{d}L^{*}\\right).\n\\end{equation}\nDue to the fact that the SLD is a Hermitian matrix, one can have $dL^{*}=dL^{\\mathrm{T}}$, and the equation above\nreduces to\n\\begin{align}\n\\mathrm{d}G_{\\alpha\\beta}&=\\mathrm{Tr}\\!\\left(\\left(\\frac{\\partial G_{\\alpha\\beta}}\n{\\partial L}\\right)^{\\mathrm{T}}\\mathrm{d}L+\\frac{\\partial G_{\\alpha\\beta}}\n{\\partial L^{\\mathrm{T}}}\\mathrm{d}L\\right) \\nonumber \\\\\n&= 2\\mathrm{Tr}\\!\\left(\\left(\\frac{\\partial G_{\\alpha\\beta}}{\\partial L}\\right)^{\\mathrm{T}}\\mathrm{d}L\\right).\n\\label{eq:dG}\n\\end{align}\nNow we introduce an auxiliary function $h$ which satisfies\n\\begin{equation}\n\\left(\\frac{\\partial G_{\\alpha\\beta}}{\\partial L}\\right)^{\\mathrm{T}}=\\rho h^{\\mathrm{T}}+h^{\\mathrm{T}}\\rho.\n\\end{equation}\nThis equation is a typical Lyapunov equation and can be numerically solved. 
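As an aside, this Lyapunov-type equation can be handled with standard linear-algebra routines. The following is a minimal numerical sketch using SciPy's {\fontfamily{bch}\selectfont\small\itshape solve\_sylvester}; the function and variable names are illustrative and not part of QuanEstimation:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
import numpy as np
from scipy.linalg import solve_sylvester

def solve_auxiliary_h(rho, dG_dL):
    # Solve rho @ h.T + h.T @ rho = dG_dL.T for the auxiliary function h,
    # where dG_dL is (partial G_{alpha beta} / partial L) for one fixed (alpha, beta).
    hT = solve_sylvester(rho, rho, dG_dL.T)
    return hT.T

# toy check with a random state and a random right-hand side
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho = rho/np.trace(rho)
dG_dL = rng.normal(size=(4, 4))
h = solve_auxiliary_h(rho, dG_dL)
print(np.allclose(rho @ h.T + h.T @ rho, dG_dL.T))  # True
\end{lstlisting}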
Substituting the definition of $h$ into
the expression of $\mathrm{d}G_{\alpha\beta}$, one can find that
\begin{equation}
\mathrm{d}G_{\alpha\beta}=2\mathrm{Tr}\left(h^{\mathrm{T}}\mathrm{d}L\rho+h^{\mathrm{T}}\rho\mathrm{d}L\right).
\end{equation}
Due to the fact that $\partial_{\bold{x}}\rho=(\rho L+L\rho)/2$, we have
$\rho\mathrm{d}L+(\mathrm{d}L)\rho=2\mathrm{d}(\partial_{\bold{x}}\rho)-(\mathrm{d}\rho) L-L\mathrm{d}\rho$,
which means
\begin{equation}
\mathrm{d}G_{\alpha\beta}=2\mathrm{Tr}\!\left(2h^{\mathrm{T}}\mathrm{d}(\partial_{\bold{x}}\rho)\right)
-2\mathrm{Tr}\!\left(\left(Lh^{\mathrm{T}}+h^{\mathrm{T}}L\right)\mathrm{d}\rho\right).
\label{eq:dG_h}
\end{equation}
Next, since $G=G(\rho,\partial_{\bold{x}}\rho)$, $\mathrm{d}G_{\alpha\beta}$ can also be expressed by
\begin{equation}
\mathrm{d}G_{\alpha\beta}=2\mathrm{Tr}\!\left(\!\left(
\frac{\partial G_{\alpha\beta}}{\partial\rho}\right)^{\!\mathrm{T}}
\!\mathrm{d}\rho\!+\!\left(\frac{\partial G_{\alpha\beta}}
{\partial(\partial_{\bold{x}}\rho)}\right)^{\!\mathrm{T}}
\!\mathrm{d}(\partial_{\bold{x}}\rho)\!\right),
\end{equation}
which is derived through a calculation procedure similar to that for Eq.~(\ref{eq:dG}). Comparing this equation
with Eq.~(\ref{eq:dG_h}), one can see that
\begin{align}
\frac{\partial G_{\alpha\beta}}{\partial\rho}&=-hL^{\mathrm{T}}-L^{\mathrm{T}}h, \\
\frac{\partial G_{\alpha\beta}}{\partial (\partial_{\bold{x}}\rho)} &=2h.
\end{align}
With these expressions, $\partial G/\partial\rho$ and $\partial G/\partial(\partial_{\bold{x}}\rho)$ can be obtained
correspondingly. In this way, the entire path from $f$ to $L$ is connected. Together with the other two paths, AD can
be fully applied in our case. The computing time and memory allocation for the calculation of the
gradient of the QFI are compared between the two realization methods of AD for density matrices of different dimensions.
The dimension is denoted by $N$. As shown in the upper table in Table~\ref{table:auto}, the computing time and memory
allocation of the second method are better than those of the first one except for the case of $N=2$, and this advantage becomes
very significant when $N$ is large. Moreover, the computing time and memory allocation of the first method grow fast
with the increase of the dimension, which is reasonable as the calculations, especially the diagonalization, in the first
method are performed in the $N^2$-dimensional space. There are no data for the first method when $N$ is larger than $2^7$
as the required memory exceeds that of our computer. From this comparison, one can see that the second method
performs better than the first one in basically all aspects and hence is chosen as the default auto-GRAPE method in
QuanEstimation.

\emph{Example.} Consider the dynamics in Eq.~(\ref{eq:ME_spon}) and the control Hamiltonian in Eq.~(\ref{eq:ctrl_demo}).
Now define
\begin{eqnarray}
\delta_{\mathrm{c}}\omega &:=& 1/\sqrt{\mathcal{I}_{\omega\omega}}, \label{eq:c_deviation} \\
\delta_{\mathrm{q}}\omega &:=& 1/\sqrt{\mathcal{F}_{\omega\omega}} \label{eq:q_deviation}
\end{eqnarray}
as the theoretical optimal deviations with and without fixed measurement.
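A minimal sketch of how such a control optimization could be set up with the interface described above is given below. The free Hamiltonian, the decay channel, and the numerical values are illustrative assumptions standing in for Eqs.~(\ref{eq:ME_spon}) and (\ref{eq:ctrl_demo}), and the import statement assumes the Python interface of the package:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
import numpy as np
from quanestimation import ControlOpt  # assumed import path

omega_tr = 1.0                          # true value of omega
sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1.j], [1.j, 0.]])
sz = np.array([[1., 0.], [0., -1.]])
sm = np.array([[0., 0.], [1., 0.]])     # lowering operator (assumed decay channel)

H0 = 0.5*omega_tr*sz                    # assumed free Hamiltonian
dH = [0.5*sz]                           # derivative with respect to omega
Hc = [sx, sy, sz]                       # assumed control Hamiltonians
decay = [[sm, 0.1]]                     # assumed decay operator and rate
rho0 = 0.5*np.ones((2, 2))              # the probe state |+><+|
tspan = np.linspace(0., 20., 2000)      # omega_tr*T = 20

control = ControlOpt(savefile=False, method="auto-GRAPE", max_episode=300)
control.dynamics(tspan, rho0, H0, dH, Hc, decay=decay,
                 ctrl_bound=[-0.5, 0.5])  # assumed bound on the amplitudes
control.QFIM()                            # maximize the QFI at the target time
\end{lstlisting}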
The performance of controls generated via
GRAPE and auto-GRAPE is shown in Figs.~\ref{fig:SPara_ctrl}(a) and \ref{fig:SPara_ctrl}(b), which are in general obtained
with 300 episodes. In QuanEstimation, the number of episodes can be set via the variable {\fontfamily{bch}\selectfont\small\itshape max\_episode=300}
in {\fontfamily{bch}\selectfont\small\itshape **kwargs} in Table~\ref{table:ctrl_paras}. As shown in these plots, the values of
$\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$ in (a) and $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{c}}\omega$
in (b) obtained via GRAPE (red pentagrams) and auto-GRAPE (blue circles) basically coincide with each other, which is
reasonable as they are intrinsically the same algorithm, just with different gradient calculation methods. However,
auto-GRAPE shows a significant reduction of the computing time, as given in the lower table in
Table~\ref{table:auto}, especially for a large target time $T$. The growth of the average computing time per episode with
the increase of $T$ in auto-GRAPE is quite insignificant compared to that in GRAPE. Adam can be applied by setting
{\fontfamily{bch}\selectfont\small\itshape Adam=True} in {\fontfamily{bch}\selectfont\small\itshape **kwargs}. For the sake of a good performance, one can set appropriate Adam parameters
in {\fontfamily{bch}\selectfont\small\itshape **kwargs}, including the learning rate {\fontfamily{bch}\selectfont\small\itshape epsilon} and the exponential decay rates for the first and second
moment estimates, {\fontfamily{bch}\selectfont\small\itshape beta1} and {\fontfamily{bch}\selectfont\small\itshape beta2}. The default values of these parameters in the package are 0.01,
0.90, and 0.99, respectively. If {\fontfamily{bch}\selectfont\small\itshape Adam=False}, the controls are updated with the constant step {\fontfamily{bch}\selectfont\small\itshape epsilon}. Due to the
convergence problem of Adam in some cases, several points in the figure are obtained by a second run of the code with
a constant step, which takes the optimal control obtained in the first round (with Adam) as the initial guess.

In some scenarios, the time resolution of the control amplitude could be limited if the dynamics is
too fast or the target time is too short. Hence, in the numerical optimization in such cases, the time step
of the control cannot be equal to that of the dynamics. Here we use the total control amplitude number $N_{\mathrm{c}}
=T/\Delta t_{\mathrm{c}}$, with $\Delta t_{\mathrm{c}}$ the control time step, to represent the time resolution
of the control, and we assume $\Delta t_{\mathrm{c}}$ is fixed during the dynamics. A full $N_{\mathrm{c}}$ in
Figs.~\ref{fig:SPara_ctrl}(a) and \ref{fig:SPara_ctrl}(b) means that $\Delta t_{\mathrm{c}}$ equals the dynamical
time step $\Delta t$. In the numerical calculation, it is possible that the quotient of $\Delta t_{\mathrm{c}}$ by
$\Delta t$ is not an integer, meaning that the durations of the control amplitudes cannot all be equal.
To avoid this problem, in QuanEstimation the input number ($N_{\mathrm{t}}$) of dynamical time steps is automatically
adjusted to $kN_{\mathrm{c}}$, with $k$ the smallest integer such that $kN_{\mathrm{c}}>N_{\mathrm{t}}$, if it is not already
an integer multiple of $N_{\mathrm{c}}$. For example, if $N_{\mathrm{c}}=3$ and $N_{\mathrm{t}}=100$,
then $N_{\mathrm{t}}$ is adjusted to 102.
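This adjustment rule can be summarized by the following short sketch (purely illustrative; the function name is not part of the package):
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL]
def adjusted_time_steps(N_t, N_c):
    # Adjust N_t so that it becomes an integer multiple of N_c,
    # using the smallest k with k*N_c > N_t when needed.
    if N_t % N_c == 0:
        return N_t
    k = N_t // N_c + 1
    return k * N_c

print(adjusted_time_steps(100, 3))  # 102, as in the example above
\end{lstlisting}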
Notice that in the package GRAPE is not available to deal with a non-full\n$N_{\\mathrm{c}}$ scenario for a technical reason. If GRAPE is invoked in this case, it would automatically go\nback to auto-GRAPE. As a matter of fact, auto-GRAPE outperforms GRAPE in most aspects, therefore, we strongly\nsuggest the users choose auto-GRAPE, instead of GRAPE, in practice.\n\nThe performance of controls with limited $N_{\\mathrm{c}}$ is also demonstrated in Figs.~\\ref{fig:SPara_ctrl}(a)\nand \\ref{fig:SPara_ctrl}(b) with the dynamics in Eq.~(\\ref{eq:ME_spon}) and control Hamiltonian in\nEq.~(\\ref{eq:ctrl_demo}). It can be seen that the constant-value controls ($N_{\\mathrm{c}}=1$, orange upward\ntriangles) cannot reduce the values of $\\delta_{\\mathrm{c}}\\omega$ and $\\delta_{\\mathrm{q}}\\omega$. In the case\nof fixed measurement it can only suppress the oscillation of $\\delta_{\\mathrm{c}}\\omega$. The performance improves\nwith the increase of $N_{\\mathrm{c}}$ and when $N_{\\mathrm{c}}=10$, the values of $\\delta_{\\mathrm{q}}\\omega$ and\n$\\delta_{\\mathrm{c}}\\omega$ are very close to those with a full $N_{\\mathrm{c}}$. This fact indicates that inputting\n10 control amplitudes is good enough in this case and a full $N_{\\mathrm{c}}$ control is unnecessary. A limited\n$N_{\\mathrm{c}}$ here could be easier to realize in practice and hence benefit the experimental realization.\n\n\\begin{figure*}[tp]\n\\centering\\includegraphics[width=17.5cm]{Fig_GRAPE.pdf}\n\\caption{The performance of control-enhanced (a) $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$\nand (b) $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{c}}\\omega$ with different $N_{\\mathrm{c}}$. The\noptimal controls are generated via GRAPE and auto-GRAPE with the dynamics in Eq.~(\\ref{eq:ME_spon}) and\ncontrol Hamiltonian in Eq.~(\\ref{eq:ctrl_demo}). The dotted black lines in (a) and (b) represent\n$\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$ and $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{c}}\\omega$\nwithout control. The red pentagrams are those obtained via GRAPE with a full $N_{\\mathrm{c}}$, i.e., $N_{\\mathrm{c}}$\nequals to the number of time steps. The blue circles, green crosses, purple diamonds, cyan downward triangles and\norange upward triangles represent those obtained via auto-GRAPE with $N_{\\mathrm{c}}$ being full, 10, 6, 3 and 1,\nrespectively. Other parameters are set to be the same with those in Fig.~\\ref{fig:QFI_code}. (c1-c4) The optimal\ncontrols in the case of $\\omega_{\\mathrm{tr}}T=20$ with $N_{\\mathrm{c}}$ being (c1) 1, (c2) 3, (c3) 6 and (c4) 10,\nrespectively. The true value $\\omega_{\\mathrm{tr}}$ is set to be $1$. Planck units are applied here.}\n\\label{fig:SPara_ctrl}\n\\end{figure*}\n\n\\subsection{Particle swarm optimization}\n\n\\begin{figure*}[tp]\n\\centering\\includegraphics[width=17.5cm]{Fig_PSO.pdf}\n\\caption{(a) Illustration of the basic operation of PSO in $m$th round of episode.\nThe personal best (with the blue subscript pb) for each particle in this\nround is obtained by comparing all the values of $f$ of this particle in\nall previous rounds including the current one. The global best (with the red\nsubscript gb) is obtained by comparing the values of $f$ of all personal bests in\nthis round. The light gray areas represent the process of comparison, which takes\nthe values of $\\{u_k\\}$ with respect to the maximum value of $f$. 
(b) The\ncontrol-enhanced values of $\\sqrt{\\omega_{\\mathrm{tr}}T}\\delta_{\\mathrm{q}}\\omega$\nwith a full $N_{\\mathrm{c}}$ (red pentagrams) and $N_{\\mathrm{c}}=6$ (green circles),\nwhere the controls are generated via PSO. (c) The optimal controls for $N_{\\mathrm{c}}=6$\nin the case of $\\omega_{\\mathrm{tr}}T=20$. The true value $\\omega_{\\mathrm{tr}}$ is set\nto be $1$. Planck units are applied here.}\n\\label{fig:PSO}\n\\end{figure*}\n\n\\begin{algorithm*}[tp]\n\\SetArgSty{}\n\\caption{PSO} \\label{algorithm:pso}\nInitialize the control $\\left\\{u_k\\right\\}^i_1$ for each $i \\in \\left[1,P\\right]$; \\\\\nInitialize $\\left\\{\\delta u_k\\right\\}^i_1=0$ for each $i \\in \\left[1,P\\right]$;\\\\\nAssign $f(\\left\\{u_k\\right\\}^i_{0,\\mathrm{pb}})=0$ for each $i \\in \\left[1,P\\right]$; \\\\\n\\For {$m=1, M$\\ \\do}{\n\\For {$i=1, P$\\ \\do}{\nReceive the control $\\left\\{u_k\\right\\}^i_m$;\\\\\nEvolve the state with $\\left\\{u_k\\right\\}^i_m$ and calculate the objective function\n$f(\\left\\{u_k\\right\\}^i_m)$ at the target time $T$; \\\\\nCompare $f(\\left\\{u_k\\right\\}^i_m)$ with value of the personal best in last episode\n$f(\\left\\{u_k\\right\\}^i_{m-1,\\mathrm{pb}})$ and assign the new personal best\n$\\left\\{u_k\\right\\}^i_{m,\\mathrm{pb}}=\\mathrm{arg}\n\\left(\\max\\left\\{f(\\left\\{u_k\\right\\}^i_{m-1,\\mathrm{pb}}),\nf(\\left\\{u_k\\right\\}^i_m)\\right\\}\\right)$;}\nCompare all $f(\\left\\{u_k\\right\\}^i_{m,\\mathrm{pb}})$ with $i\\in[1,P]$ and assign the global best\n$\\left\\{u_k\\right\\}_{m, \\mathrm{gb}}=\\mathrm{arg}\\left(\\max\\limits_{i\\in\\left[1,P\\right]}\nf(\\left\\{u_k\\right\\}^i_{m, \\mathrm{pb}})\\right)$;\\\\\n\\For {$i=1, P$\\ \\do}\n{Calculate $\\left\\{\\delta u_k\\right\\}^i_m= c_0 \\left\\{\\delta u_k\\right\\}^i_{m-1} +\n\\mathrm{rand}() \\cdot c_1\\big(\\left\\{u_k\\right\\}^i_{m, \\mathrm{pb}}-\\left\\{u_k\\right\\}^i_m\\big) +\n\\mathrm{rand}() \\cdot c_2\\big(\\left\\{u_k\\right\\}_{m,\\mathrm{gb}}-\\left\\{u_k\\right\\}^i_m\\big)$;\\\\\nUpdate the control $\\left\\{u_k\\right\\}^i_{m+1} = \\left\\{u_k\\right\\}^i_m+\\left\\{\\delta u_k\\right\\}^i_m$.\n}}\nSave the global best $\\{u_k\\}_{M,\\mathrm{gb}}$.\n\\end{algorithm*}\n\nParticle swarm optimization (PSO) is a well-used gradient-free method in\noptimizations~\\cite{Kennedy1995,Eberhart2001}, and has been applied in the detection\nof gravitational waves~\\cite{Michimura2018}, the characterization of open systems~\\cite{Stenberg2016},\nthe prediction of crystal structure~\\cite{Wang2010}, and in quantum metrology it has been used\nto generate adaptive measurement schemes in phase estimations~\\cite{Hentschel2010,Hentschel2011}.\n\nA typical version of PSO includes a certain number (denoted by $P$) of parallel particles. In quantum\ncontrol, these particles are just $P$ sets of controls $\\{u_k\\}$ labelled by $\\{u_k\\}^i$ for $i=1,\\dots,P$.\nThe value of $\\{u_k\\}$ of $i$th particle in $m$th round of episode is further denoted by $\\{u_k\\}^i_m$.\nThe basic optimization philosophy of PSO is given in Fig.~\\ref{fig:PSO}(a) and the pseudocode is given\nin Algorithm~\\ref{algorithm:pso}. In the pseudocode, $\\left\\{u_k\\right\\}^i_{0,\\mathrm{pb}}$ and\n$f(\\left\\{u_k\\right\\}^i_{0,\\mathrm{pb}})$ are just formal notations representing the initialization\nof the personal bests. There exist two basic concepts in PSO, the personal best and global best. 
In\nthe $m$th round of episode, the personal best of $i$th particle ($\\{u_k\\}^i_{m,\\mathrm{pb}}$) is\nassigned by the $\\{u_k\\}$ with respect to the maximum value of $f$ among all previous episodes of\nthis particle, namely,\n\\begin{equation}\n\\{u_k\\}^i_{m,\\mathrm{pb}}=\\mathrm{arg}\\left(\\max \\limits_{n\\in\\left[1,m\\right]}\nf(\\left\\{u_k\\right\\}^i_n)\\right)\n\\end{equation}\nwith $\\mathrm{arg}(\\cdot)$ the argument. For example, as illustrated in Fig.~\\ref{fig:PSO}, if $f^1_j$ is\nthe maximum in $\\{f^1_1,f^1_2,\\dots,f^1_m\\}$, then $\\{u_k\\}^1_{m,\\mathrm{pb}}$ is assigned by $\\{u_k\\}^1_j$.\nOnce the personal bests are obtained for all particles, the global best is assigned by the $\\{u_k\\}$ with\nrespect to the maximum value of $f$ among all personal bests, i.e.,\n\\begin{equation}\n\\{u_k\\}_{m, \\mathrm{gb}}=\\mathrm{arg}\\left(\\max\\limits_{i\\in\\left[1,P\\right]}\nf(\\{u_k\\}^i_{m,\\mathrm{pb}})\\right).\n\\end{equation}\nWith all personal bests and the global best, the velocity $\\{\\delta u_k\\}^i_m$ for the $i$th particle is\ncalculated by\n\\begin{align}\n\\{\\delta u_k\\}^i_m =& c_0 \\{\\delta u_k\\}^i_{m-1}\n\\!+\\!\\mathrm{rand}()\\cdot c_1\\left(\\{u_k\\}^i_{m,\\mathrm{pb}}-\\{u_k\\}^i_m\\right) \\nonumber \\\\\n& +\\mathrm{rand}()\\cdot c_2\\left(\\{u_k\\}_{m,\\mathrm{gb}}-\\{u_k\\}^i_m\\right),\n\\end{align}\nwhere rand() represents a random number within $[0,1]$ and $c_0$, $c_1$, $c_2$ are three positive\nconstant numbers. In the package, these parameters can be adjusted in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs}, shown in\nTable~\\ref{table:ctrl_paras}, via the variables {\\fontfamily{bch}\\selectfont\\small\\itshape c0}, {\\fontfamily{bch}\\selectfont\\small\\itshape c1} and {\\fontfamily{bch}\\selectfont\\small\\itshape c2}. A\ntypical choice for these constants is $c_0=1$, $c_1=c_2=2$, which are also the default values in the package.\n{\\fontfamily{bch}\\selectfont\\small\\itshape max\\_episode} in {\\fontfamily{bch}\\selectfont\\small\\itshape **kwargs} represents the episode number to run. If it is only set to be\na number, for example {\\fontfamily{bch}\\selectfont\\small\\itshape max\\_episode=1000}, the program will continuously run 1000 episodes. However,\nif it is a list, for example {\\fontfamily{bch}\\selectfont\\small\\itshape max\\_episode=[1000,100]}, the program will also run 1000 episodes\nin total but replace $\\{u_k\\}$ of all particles with the current global best every 100 episodes.\n{\\fontfamily{bch}\\selectfont\\small\\itshape p\\_num} represents the particle number and is set to be 10 in default. The initial guesses\nof control can be input via {\\fontfamily{bch}\\selectfont\\small\\itshape ctrl0} and the default choice {\\fontfamily{bch}\\selectfont\\small\\itshape ctrl0=[]} means all the guesses\nare randomly generated. In the case that the number of input guessed controls is less than the particle number,\nthe algorithm will generate the remaining ones randomly. On the other hand, if the number is larger than the\nparticle number, only the suitable number of controls will be used. 
The optimization result can be reproduced
by fixing the value of the variable {\fontfamily{bch}\selectfont\small\itshape seed}, whose default value is 1234 in the package.

\begin{algorithm*}[tp]
\SetArgSty{}
\caption{DE} \label{algorithm:de}
Initialize the control $\{u_k\}^i$ for $i\in[1,P]$; \\
Evolve the state with $\{u_k\}^i$ and calculate the objective function $f(\{u_k\}^i)$
at the target time $T$ for $i\in[1,P]$; \\
\For {episode=1, $M$}{
\For {$i=1,P$}{
Randomly generate three integers $p_1$, $p_2$, $p_3$ in the regime $[1,P]$; \\
Generate $\{G_k\}$ via the equation $\{G_k\}=\{u_k\}^{p_1}+c(\{u_k\}^{p_2}-\{u_k\}^{p_3})$; \\
\For {$k=1, K$}{
Generate a random integer $a\in[1, N_{\mathrm{c}}]$; \\
\For {$j=1, N_{\mathrm{c}}$}{
Generate a random number $r\in[0,1]$ and assign
$ [Q_k]_j=
\begin{cases}
[G_k]_j, & {\mathrm{if}~r\leq c_r~\mathrm{or}~j=a}, \\
[u_k]_j, & {\mathrm{if}~r>c_r~\mathrm{and}~j\neq a};
\end{cases}$
}}
Evolve the state with the control $\{Q_k\}$ and calculate $f(\{Q_k\})$ at time $T$; \\
\If {$f(\{u_k\}^i)<f(\{Q_k\})$}{Replace $\{u_k\}^i$ with $\{Q_k\}$;}
}}
Save the controls $\{u_k\}$ with the maximum value of $f(T)$.
\end{algorithm*}

\subsection{Differential evolution}

Differential evolution (DE) is another typical gradient-free algorithm implemented in QuanEstimation for the control
optimization, and its pseudocode is given in Algorithm~\ref{algorithm:de}. Similarly to PSO, DE works with $P$ parallel
sets of controls $\{u_k\}^i$ ($i=1,\dots,P$). In each episode, for the $i$th set of controls three integers $p_1$, $p_2$,
and $p_3$ are randomly generated in the regime $[1,P]$, and the mutation is performed via the equation
$\{G_k\}=\{u_k\}^{p_1}+c(\{u_k\}^{p_2}-\{u_k\}^{p_3})$ with $c$ a constant. Next, the crossover is performed. For each $k$,
a random integer $a\in[1,N_{\mathrm{c}}]$ is generated, and for each entry $j$ a random number $r\in[0,1]$ is generated
and the candidate control $Q_k$ is assigned as
\begin{equation}
[Q_k]_j=
\begin{cases}
[G_k]_j, & {\mathrm{if}~r\leq c_r~\mathrm{or}~j=a}, \\
[u_k]_j, & {\mathrm{if}~r>c_r~\mathrm{and}~j\neq a},
\end{cases}
\end{equation}
where $[G_k]_j$ is the $j$th entry of $G_k$ and $[u_k]_j$ is the $j$th entry of a $u_k$ in $\{u_k\}^i$. This
equation means that if $r$ is no larger than a given constant $c_r$ (usually called the crossover constant in DE),
then $[G_k]_j$ is assigned to $[Q_k]_j$, otherwise $[u_k]_j$ is assigned to $[Q_k]_j$. In the meantime, the $a$th
entry of $Q_k$ always takes the value of $[G_k]_a$ regardless of the value of $r$ to make sure at least one
point mutates. After the crossover, the values of the objective functions $f(\{u_k\}^i)$ and $f(\{Q_k\})$ are compared,
and $\{u_k\}^i$ is replaced by $\{Q_k\}$ if $f(\{Q_k\})$ is larger. In the package, $c$ and $c_r$ can be adjusted
via the variables {\fontfamily{bch}\selectfont\small\itshape c} and {\fontfamily{bch}\selectfont\small\itshape cr} in {\fontfamily{bch}\selectfont\small\itshape **kwargs}, and the default values are 1.0 and 0.5.

\emph{Example.} The performance of controls generated via DE is also illustrated with the dynamics in Eq.~(\ref{eq:ME_spon})
and the control Hamiltonian in Eq.~(\ref{eq:ctrl_demo}). $\delta_{\mathrm{q}}\omega$ is defined in Eq.~(\ref{eq:q_deviation}).
As shown in Fig.~\ref{fig:DE}(b), different from PSO, the performance of DE with a full $N_{\mathrm{c}}$ (red pentagrams)
is very close to that of auto-GRAPE (dash-dotted gray line), even for a large target time $T$, which indicates that DE works
better than PSO in this example. More surprisingly, in the case of $N_{\mathrm{c}}=6$, DE (green circles) not only outperforms
PSO, but also significantly outperforms auto-GRAPE (dashed light-blue line). This result indicates that no algorithm has an
absolute advantage in general. Comparing and combining different algorithms is thus a better approach to designing optimal
controls in quantum metrology, which can be conveniently done via QuanEstimation. The optimal controls obtained via DE
for $N_{\mathrm{c}}=6$ are given in Fig.~\ref{fig:DE}(c) in the case of $\omega_{\mathrm{tr}}T=20$.
The results above are
obtained with 1000 episodes, which can be adjusted via {\fontfamily{bch}\selectfont\small\itshape max\_episode=1000} in {\fontfamily{bch}\selectfont\small\itshape **kwargs}.

\subsection{Deep Deterministic Policy Gradients}

\begin{figure}[bp]
\centering\includegraphics[width=8.5cm]{Fig_DDPG.pdf}
\caption{(a) The control-enhanced values of $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$
with a full $N_{\mathrm{c}}$ (red pentagrams) and $N_{\mathrm{c}}=6$ (green circles),
where the controls are generated via DDPG. (b-c) The change of the total reward in the
episodes in the case of (b) a full $N_{\mathrm{c}}$ and (c) $N_{\mathrm{c}}=6$.
(d) The controls obtained via DDPG for $N_{\mathrm{c}}=6$ in the case of
$\omega_{\mathrm{tr}}T=20$. The true value $\omega_{\mathrm{tr}}$ is set to be $1$.
Planck units are applied here.}
\label{fig:DDPG}
\end{figure}

Deep Deterministic Policy Gradients (DDPG) is a powerful tool in machine learning~\cite{Lillicrap2015}
and has already been applied in quantum physics to perform quantum multiparameter estimation~\cite{Xu2021}
and to enhance the generation of spin squeezing~\cite{Tan2021}. The pseudocode of DDPG for quantum estimation
and the corresponding flow chart can be found in Ref.~\cite{Liu2022}, so the details will not be repeated herein.

\emph{Example.} The performance of controls generated via DDPG in the case of single-parameter estimation is also
illustrated with the dynamics in Eq.~(\ref{eq:ME_spon}) and the control Hamiltonian in Eq.~(\ref{eq:ctrl_demo}), as shown
in Fig.~\ref{fig:DDPG}(a). $\delta_{\mathrm{q}}\omega$ is defined in Eq.~(\ref{eq:q_deviation}). The reward is taken
as the logarithm of the ratio between the controlled and non-controlled values of the QFI at time $t$. It can be seen
that the performance of DDPG with a full $N_{\mathrm{c}}$ (red pentagrams) shows a significant disparity compared to that of
auto-GRAPE (dash-dotted gray line). A more surprising fact is that it is even worse than the performance of both
auto-GRAPE (dashed light-blue line) and DDPG (green circles) with $N_{\mathrm{c}}=6$. The performance of DDPG with
$N_{\mathrm{c}}=6$ also presents no advantage compared to PSO and DE. However, we cannot rashly say that PSO and DE
outperform DDPG here, as DDPG involves many more hyperparameters, and a suitable set of them might make its performance
comparable to, or even better than, that of PSO and DE. Nevertheless, we can still safely say that PSO and DE, especially DE,
find optimal controls more easily in this example and that DDPG does not present a general advantage here. The total rewards in
the case of $\omega_{\mathrm{tr}}T=20$ with a full $N_{\mathrm{c}}$ and with $N_{\mathrm{c}}=6$ are given in Figs.~\ref{fig:DDPG}(b)
and \ref{fig:DDPG}(c), respectively. The total reward indeed increases and converges for a full $N_{\mathrm{c}}$, but the
final performance is only slightly better than the non-controlled value [dotted black line in Fig.~\ref{fig:DDPG}(a)].
For $N_{\mathrm{c}}=6$, the total reward does not significantly increase, which means the corresponding performance
of $\delta_{\mathrm{q}}\omega$ basically comes from the average performance of random controls.
The controls obtained\nvia DDPG for $N_{\\mathrm{c}}=6$ are shown in Fig.~\\ref{fig:DDPG}(d).\n\n\\subsection{Performance of the convergence speed}\n\nApart from the improvement of the objective function, the convergence speed is also an important aspect of an\nalgorithm to evaluate its performance. Here we illustrate the convergence performance of different algorithms\nin Fig.~\\ref{fig:converg} in the single-parameter scenario discussed previously, namely, the dynamics in\nEq.~(\\ref{eq:ME_spon}) and control Hamiltonian in Eq.~(\\ref{eq:ctrl_demo}) with a full $N_{\\mathrm{c}}$. As\nshown in Fig.~\\ref{fig:converg}(a), GRAPE (dashed red line) and auto-GRAPE (dotted black line) show higher\nconvergence speed than PSO (solid green line) and DE (dash-dotted cyan line). This phenomenon coincides with\nthe common understanding that the gradient-based methods converge faster than gradient-free methods in general.\nDE converges slower than GRAPE and auto-GRAPE, but the final performance of QFI basically coincides with them.\nPSO presents the slowest speed in this example and the final result of QFI is also worse than others. DDPG is\nnot involved in this figure as its improvement on the QFI is not as significant as others.\n\nThe effect of Adam in auto-GRAPE is also illustrated in Fig.~\\ref{fig:converg}(b). Denote $\\epsilon$ as the\nlearning rate in Adam. In the case of constant-step update, auto-GRAPE with $\\epsilon=0.01$ (dotted black line)\nconverges faster than that with $\\epsilon=0.005$ (dash-dotted green line), which is common and reasonable as a\nlarge step usually implies a higher convergence speed. However, when Adam is invoked, this difference becomes\nvery insignificant and both lines (solid gray line for $\\epsilon=0.01$ and dashed blue line for $\\epsilon=0.005$)\nconverge faster than constant-step updates. However, it should be noticed that a large $\\epsilon$ in Adam may\nresult in a strong oscillation of $\\delta_{\\mathrm{q}}\\omega$ in the episodes, and it should be adjusted to smaller\nvalues if one wants to avoid this phenomenon.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_convergence.pdf}\n\\caption{(a) The convergence performance of different algorithms, including\nGRAPE (dashed red line), auto-GRAPE (dotted black line), PSO (solid green line)\nand DE (dash-dotted cyan line). (b) The convergence performance of auto-GRAPE\nwith constant step $\\epsilon=0.01$ (dotted black line), $\\epsilon=0.005$\n(dash-dotted green line), and with Adam (solid gray line for $\\epsilon=0.01$\nand dashed blue line for $\\epsilon=0.005$). The target time $\\omega_{\\mathrm{tr}}T=20$,\nand the true value $\\omega_{\\mathrm{tr}}$ is set to be 1. Planck units are\napplied here. }\n\\label{fig:converg}\n\\end{figure}\n\n\\subsection{Multiparameter estimation}\n\\label{sec:multi}\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_multipara.pdf}\n\\caption{(a) The performance of controls generated via different algorithms,\nincluding GRAPE (red pentagrams), auto-GRAPE (cyan triangles) with full\n$N_{\\mathrm{c}}$, PSO (blue crosses) with full $N_{\\mathrm{c}}$, DE (yellow\ncircles) with full $N_{\\mathrm{c}}$ and DDPG (orange pluses) with full\n$N_{\\mathrm{c}}$. (b) The performance of controls generated via PSO (dark\nblue diamonds), DE (small red hollow circles) and DDPG (large purple hollow\ncircles) with limited $N_{\\mathrm{c}}$ ($N_{\\mathrm{c}}=10$). 
$W$ is chosen
to be $\openone$.}
\label{fig:multipara}
\end{figure}

Compared to the single-parameter estimation, multiparameter estimation is a more challenging problem in
quantum metrology. In this case, $\mathrm{Tr}(W\mathcal{F}^{-1})$ cannot be used as the objective function
in the implementation of GRAPE as the analytical calculation of $\mathcal{F}^{-1}$ is very difficult,
if not fully impossible, when the number of parameters is large. Hence, in GRAPE when $W=\openone$,
$\sum_{a}1/\mathcal{F}_{aa}$, a lower bound of $\mathrm{Tr}(\mathcal{F}^{-1})$, is taken as
the substitute objective function~\cite{Liu2020,Liu2017b,Liu2022}. Unfortunately, $\sum_{a}W_{aa}/\mathcal{F}_{aa}$
fails to be a valid lower bound for a general $W$. In this case, to keep $\sum_{a}W_{aa}/\mathcal{F}_{aa}$
a valid lower bound, the parameters for estimation have to be reorganized by the linear combination of the
original ones to let $W$ be diagonal, which causes inconvenience when implementing GRAPE in such cases.
Different from GRAPE, this problem naturally vanishes in auto-GRAPE as the inverse matrix $\mathcal{F}^{-1}$
is calculated automatically, and so is the gradient. In the meantime, PSO and DE would also not face such
problems as they are gradient-free.

\emph{Example.} Here we take an electron-nuclear spin system, which can be readily realized in Nitrogen-vacancy
centers, as an example to demonstrate and compare the performance of different algorithms included in QuanEstimation.
The Hamiltonian of this system reads~\cite{Barry2020,Schwartz2018,Rembold2020}
\begin{equation}
H_0/\hbar=DS^2_3+g_{\mathrm{S}}\vec{B}\cdot\vec{S}+g_{\mathrm{I}}\vec{B}\cdot\vec{I}
+\vec{S}^{\,\mathrm{T}}\mathcal{A}\vec{I},
\label{eq:NV_H}
\end{equation}
where $S_i=s_i\otimes\openone$ and $I_i=\openone\otimes\sigma_i$ ($i=1,2,3$) represent the
electron and nuclear ($^{15}\mathrm{N}$) operators, with $s_1$, $s_2$ and $s_3$ the spin-1 operators.
Their specific expressions are
\begin{eqnarray}
s_1 = \frac{1}{\sqrt{2}}\left(\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{array}\right),
s_2 = \frac{1}{\sqrt{2}}\left(\begin{array}{ccc}
0 & -i & 0\\
i & 0 & -i\\
0 & i & 0
\end{array}\right)\!\!, \nonumber
\end{eqnarray}
and $s_3=\mathrm{diag}(1,0,-1)$. Here $\vec{S}=(S_1,S_2,S_3)^{\mathrm{T}}$ and
$\vec{I}=(I_1,I_2,I_3)^{\mathrm{T}}$ are the vectors of these operators, and $\mathcal{A}$ is the hyperfine tensor.
In this case, $\mathcal{A}=\mathrm{diag}(A_1,A_1,A_2)$ with $A_1$ and $A_2$ the axial and transverse magnetic hyperfine
coupling coefficients. The coupling between the magnetic field and the electron is approximated to be
isotropic. The coefficients $g_{\mathrm{S}}=g_\mathrm{e}\mu_\mathrm{B}/\hbar$ and
$g_{\mathrm{I}}=g_\mathrm{n}\mu_\mathrm{n}/\hbar$. Here $g_\mathrm{e}$ ($g_\mathrm{n}$) is the $g$ factor
of the electron (nucleus), $\mu_\mathrm{B}$ ($\mu_\mathrm{n}$) is the Bohr (nuclear) magneton and $\hbar$ is
the reduced Planck constant. The control Hamiltonian is
\begin{equation}
H_{\mathrm{c}}/\hbar=\sum^3_{i=1}\Omega_i(t)S_i,
\label{eq:NV_c}
\end{equation}
where $\Omega_i(t)$ is a time-varying Rabi frequency.
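For orientation, the operators entering Eq.~(\ref{eq:NV_H}) and Eq.~(\ref{eq:NV_c}), together with the derivatives
$\partial H_0/\partial B_i$ that would serve as the {\fontfamily{bch}\selectfont\small\itshape dH} input of the optimization classes, can be assembled with a few
lines of NumPy. The snippet below is only a sketch: the variable names are ours, and the numerical values are
placeholders written in angular MHz with the magnetic field in tesla (the values actually used in this example are
listed further below).
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
import numpy as np

# spin-1 operators of the electron and Pauli matrices of the nucleus
s1 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) / np.sqrt(2)
s2 = np.array([[0., -1.j, 0.], [1.j, 0., -1.j], [0., 1.j, 0.]]) / np.sqrt(2)
s3 = np.diag([1., 0., -1.])
sigma = [np.array([[0., 1.], [1., 0.]]),
         np.array([[0., -1.j], [1.j, 0.]]),
         np.diag([1., -1.])]
S = [np.kron(s, np.eye(2)) for s in (s1, s2, s3)]
Iop = [np.kron(np.eye(3), sig) for sig in sigma]

# placeholder coefficients (angular MHz, field in tesla)
D, gS, gI = 2*np.pi*2.87e3, 2*np.pi*28.03e3, 2*np.pi*4.32
A1, A2 = 2*np.pi*3.65, 2*np.pi*3.03
B = [5e-4, 5e-4, 5e-4]

H0 = D*S[2]@S[2] \
     + sum(gS*B[i]*S[i] + gI*B[i]*Iop[i] for i in range(3)) \
     + A1*(S[0]@Iop[0] + S[1]@Iop[1]) + A2*S[2]@Iop[2]
dH = [gS*S[i] + gI*Iop[i] for i in range(3)]  # derivatives w.r.t. B1, B2, B3
Hc = [S[0], S[1], S[2]]                       # control Hamiltonians S_1, S_2, S_3
\end{lstlisting}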
In practice, the electron suffers from dephasing noise, which means the dynamics of the full system is described
by the master equation
\begin{equation}
\partial_t\rho=-i[H_0+H_{\mathrm{c}},\rho]+\frac{\gamma}{2}(S_3\rho S_3-S^2_3\rho-\rho S^2_3),
\label{eq:NV_ME}
\end{equation}
with $\gamma$ the dephasing rate, which is usually inversely proportional to the dephasing time $T^{*}_2$.

Now we use this system as a controllable magnetometer to estimate the magnetic field $\vec{B}$, which is
a three-parameter estimation problem. The optimal controls can be obtained via different algorithms.
In this case, the initial state is taken as $(|1\rangle+|\!-\!1\rangle)\otimes|\!\!\uparrow\rangle/\sqrt{2}$,
where $(|1\rangle+|\!-\!1\rangle)/\sqrt{2}$ is an electron state with $|1\rangle$ ($|\!-\!1\rangle$) the
eigenstate of $s_3$ corresponding to the eigenvalue $1$ ($-1$). $|\!\!\uparrow\rangle$ is a nuclear state and
the eigenstate of $\sigma_3$ corresponding to the eigenvalue 1. $W$ is chosen to be $\openone$. The systematic
parameters are chosen as $D=2\pi\times 2.87$\,GHz, $g_{\mathrm{S}}=2\pi\times 28.03$\,GHz/T,
$g_{\mathrm{I}}=2\pi\times 4.32$\,MHz/T, $A_1=2\pi\times 3.65$\,MHz, $A_2=2\pi\times 3.03$\,MHz, and the
true values of $\vec{B}$ are $B_1=B_2=B_3=0.50$\,mT. The dephasing rate $\gamma=2\pi\times 1$\,MHz. All the
parameter values are selected according to Refs.~\cite{Barry2020,Felton2009}.

The performance of controls given by different algorithms is given in Fig.~\ref{fig:multipara}. The control
amplitude is limited to the regime $[-20\,\mathrm{MHz},20\,\mathrm{MHz}]$. In the case of a full $N_{\mathrm{c}}$
[$N_{\mathrm{c}}=2000T/(0.01\,\mathrm{\mu s})$], as shown in Fig.~\ref{fig:multipara}(a), the performances of GRAPE
(red pentagrams),
auto-GRAPE (cyan triangles), PSO (blue crosses), DE (yellow circles) and DDPG (orange pluses) basically coincide
for small target times ($T\leq 0.01\,\mathrm{\mu s}$), and the reduction of $\mathrm{Tr}(W\mathcal{F}^{-1})$
is limited compared to the non-controlled values (solid black line). In the regime of large target times
($T> 0.01\,\mathrm{\mu s}$), auto-GRAPE shows the best performance. GRAPE is not applied at these points as its
time consumption is too heavy for our computers. PSO and DE only find controls that provide a slight enhancement
on $\mathrm{Tr}(W\mathcal{F}^{-1})$ in this regime. The different behaviors of the performance are due to the
large search space in this case. For example, the total control number for $T=0.08\,\mathrm{\mu s}$ is 48000
including all three controls $\Omega_{1}$, $\Omega_{2}$ and $\Omega_{3}$. In such a large parameter space, unlike
the gradient-based methods, the gradient-free methods cannot guarantee to find optimal values. Hence, the
gradient-based methods would be a good choice in such cases. However, one should notice that gradient-based
methods like auto-GRAPE could be more memory consuming than gradient-free methods. In the case that the computer
memory is limited, one may have to choose gradient-free methods.

In the case of a small search space, for example $N_{\mathrm{c}}=10$, the performance of PSO and DE improves
significantly, as shown in Fig.~\ref{fig:multipara}(b).
Both PSO (dark blue diamonds) and DE (small red hollow
circles) with $N_{\mathrm{c}}=10$ outperform the full $N_{\mathrm{c}}$ cases, yet
DDPG with $N_{\mathrm{c}}=10$ (large purple hollow circles) does not show this behavior. Similar to the
single-parameter scenario, DE provides a better performance than PSO and DDPG when the control number
$N_{\mathrm{c}}$ is limited. A more interesting fact is that for some target times, like $T=0.03\,\mathrm{\mu s}$,
PSO and DE even provide comparable performance to auto-GRAPE with a full $N_{\mathrm{c}}$, indicating that
the control schemes given by PSO and DE in this case not only meet the best precision limit, but are also simpler
to implement in experiments than the full-$N_{\mathrm{c}}$ scheme given by auto-GRAPE.

\subsection{Minimum parameterization time optimization}

The control optimizations discussed in the previous subsections are performed with a fixed target time $T$. In some
scenarios, the goal is not to achieve the highest precision within a fixed time, but to reach a given precision as
soon as possible. This problem requires the search of the minimum time to reach a given value of the objective function,
which can be realized in QuanEstimation via the class {\fontfamily{bch}\selectfont\small\itshape ControlOpt()}. After calling
{\fontfamily{bch}\selectfont\small\itshape control=ControlOpt()} and {\fontfamily{bch}\selectfont\small\itshape control.dynamics()}, one can use the following codes to solve this problem:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
control.mintime(f,W=[],M=[],method="binary",
    target="QFIM",LDtype="SLD")
\end{lstlisting}
Here the input {\fontfamily{bch}\selectfont\small\itshape f} is a float number representing the given value of the objective function. The type of
objective function can be adjusted via {\fontfamily{bch}\selectfont\small\itshape target=" "}, which includes three options {\fontfamily{bch}\selectfont\small\itshape "QFIM"}
(default), {\fontfamily{bch}\selectfont\small\itshape "CFIM"}, and {\fontfamily{bch}\selectfont\small\itshape "HCRB"}. The measurement can be input via {\fontfamily{bch}\selectfont\small\itshape M=[]} if
necessary, and in this case the objective function will be chosen as the CFIM regardless of the setting in
{\fontfamily{bch}\selectfont\small\itshape target=" "}. In the case of {\fontfamily{bch}\selectfont\small\itshape target="QFIM"}, the type of QFIM can be changed via
{\fontfamily{bch}\selectfont\small\itshape LDtype=" "}. The choices include {\fontfamily{bch}\selectfont\small\itshape "SLD"}, {\fontfamily{bch}\selectfont\small\itshape "RLD"}, and {\fontfamily{bch}\selectfont\small\itshape "LLD"}.
{\fontfamily{bch}\selectfont\small\itshape method="binary"} represents the binary search (logarithmic search) and {\fontfamily{bch}\selectfont\small\itshape method="forward"}
represents the forward search from the beginning of time. Choosing a suitable method may help to improve the
calculation efficiency. For example, if the users already know that the minimum time is very small compared to $T$,
the forward search would be more efficient than the binary search.
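As a concrete illustration, a minimal sketch of such a minimum-time search is given below. It assumes the import
path {\fontfamily{bch}\selectfont\small\itshape from quanestimation import ControlOpt} and that {\fontfamily{bch}\selectfont\small\itshape ControlOpt()} follows the same calling pattern as the other
optimization classes; the single-qubit model, the control Hamiltonian and the target value
{\fontfamily{bch}\selectfont\small\itshape f=5.0} are placeholders of our own rather than the settings used in the figures.
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
import numpy as np
from quanestimation import ControlOpt

# placeholder single-qubit model: H0 = omega*sigma_3/2, estimate omega
sz = np.diag([1., -1.])
sx = np.array([[0., 1.], [1., 0.]])
H0, dH = 0.5*sz, [0.5*sz]
Hc = [sx]                                   # placeholder control Hamiltonian
rho0 = 0.5*np.ones((2, 2), dtype=complex)   # probe state |+><+|
tspan = np.linspace(0., 20., 2000)

control = ControlOpt(savefile=False, method="auto-GRAPE")
control.dynamics(tspan, rho0, H0, dH, Hc, ctrl_bound=[-0.5, 0.5])
# binary search for the earliest time at which the QFI reaches f=5.0
control.mintime(5.0, method="binary", target="QFIM", LDtype="SLD")
\end{lstlisting}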
Notice that the search is restricted to the regime
$[0,T]$ where $T$ is given by the array {\fontfamily{bch}\selectfont\small\itshape tspan} input in {\fontfamily{bch}\selectfont\small\itshape ControlOpt()}, and the current codes
can only deal with full-$N_{\mathrm{c}}$ controls. The outputs are two files, "mtspan.csv" and "controls.csv", containing
the time array from the start to the searched minimum time and the optimal controls of the corresponding length,
respectively.

\section{State optimization}
\label{sec:state_opt}

\begin{table}[tp]
\begin{tabular}{c|c|c|c}
\hline
\hline
~~Algorithms~~ & ~~method=~~ & \multicolumn{2}{c}{~~**kwargs and default values~~}\\
\hline
\multirow{6}{*}{AD} & \multirow{6}{*}{"AD"} & "Adam" & False \\
 & & "psi0" & [] \\
 & & "max\_episode" & 300 \\
 & & "epsilon" & 0.01 \\
 & & "beta1" & 0.90 \\
 & & "beta2" & 0.99 \\
\hline
\multirow{7}{*}{PSO} & \multirow{7}{*}{"PSO"} & "p\_num" & 10 \\
 & & "psi0" & [] \\
 & & "max\_episode" & [1000,100] \\
 & & "c0" & 1.0 \\
 & & "c1" & 2.0 \\
 & & "c2" & 2.0 \\
 & & "seed" & 1234 \\
\hline
\multirow{6}{*}{DE} & \multirow{6}{*}{"DE"} & "p\_num" & 10 \\
 & & "psi0" & [] \\
 & & "max\_episode" & 1000 \\
 & & "c" & 1.0 \\
 & & "cr" & 0.5 \\
 & & "seed" & 1234 \\
\hline
\multirow{9}{*}{NM} & \multirow{9}{*}{"NM"} & "p\_num" & 10 \\
 & & "psi0" & [] \\
 & & "max\_episode" & 1000 \\
 & & "ar" & 1.0 \\
 & & "ae" & 2.0 \\
 & & "ac" & 0.5 \\
 & & "as0" & 0.5 \\
 & & "seed" & 1234 \\
\hline
\multirow{3}{*}{RI} & \multirow{3}{*}{"RI"} & "psi0" & [] \\
 & & "max\_episode" & 300 \\
 & & "seed" & 1234 \\
\hline
\multirow{5}{*}{DDPG} & \multirow{5}{*}{"DDPG"} & "psi0" & [] \\
 & & "max\_episode" & 500 \\
 & & "layer\_num" & 3 \\
 & & "layer\_dim" & 200 \\
 & & "seed" & 1234 \\
\hline
\hline
\end{tabular}
\caption{Available methods for state optimization in QuanEstimation and
corresponding default parameter settings. Notice that AD is not available
when the HCRB is taken as the objective function.}
\label{table:StateOpt_paras}
\end{table}

Quantum resources like entanglement and squeezing are key to demonstrating a quantum-enhanced precision in quantum
parameter estimation. In contrast to the dynamical resources like time or control, entanglement and squeezing are
usually embedded in the probe state, indicating that different probe states would present dramatically different
performance in precision. The search of optimal probe states is thus an essential step in the design of optimal
schemes. Various methodologies, including direct analytical calculations~\cite{Caves1981,Liu2013,Jarzyna2012,Lang2013,
Lang2014,Modi2011,Monras2006,Fiderer2019,Safranek2016,Knysh2014,Fujiwara2001}, semi-analytical~\cite{Dorner2009,
Rafal2009,Forsgren2002,Maccone2009,Knysh2011,Yuan2017} and full numerical approaches~\cite{Frowis2014,Knott2016,
Rafal2020a,Basilewitsch2020,Larrouy2020}, have been proposed and discussed. More advances in state optimization
in quantum metrology can be found in a recent review~\cite{Liu2022}. QuanEstimation includes the process of state
optimization with various methods, including both gradient-based and gradient-free methods.
The specific codes in
QuanEstimation for the execution of state optimization are as follows:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
state = StateOpt(savefile=False,method="AD",
    **kwargs)
state.dynamics(tspan,H0,dH,Hc=[],ctrl=[],
    decay=[])
state.QFIM(W=[],LDtype="SLD")
state.CFIM(M=[],W=[])
\end{lstlisting}
In the case that the parameterization is described by the Kraus operators, replace {\fontfamily{bch}\selectfont\small\itshape state.dynamics()}
with the code {\fontfamily{bch}\selectfont\small\itshape state.Kraus(K,dK)}. The parameters above have already been introduced previously. The
default settings {\fontfamily{bch}\selectfont\small\itshape W=[]} and {\fontfamily{bch}\selectfont\small\itshape M=[]} mean $W=\openone$ and the measurement is a SIC-POVM.
The optimization method can be adjusted via {\fontfamily{bch}\selectfont\small\itshape method=" "} and corresponding parameters can be set via
{\fontfamily{bch}\selectfont\small\itshape **kwargs}. The available optimization methods and corresponding default parameter settings are given
in Table~\ref{table:StateOpt_paras}. Two files "f.csv" and "states.csv" will be generated at the end of the program,
which include the values of the objective function in all episodes and the optimal probe state obtained in the end.
When {\fontfamily{bch}\selectfont\small\itshape savefile=True}, the states obtained in all episodes will be saved in "states.csv". In the
multiparameter estimation, the HCRB can also be chosen as the objective function by calling the codes:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
state.HCRB(W=[])
\end{lstlisting}
Notice that if {\fontfamily{bch}\selectfont\small\itshape method="AD"}, {\fontfamily{bch}\selectfont\small\itshape state.HCRB()} is not available. Similar to the control
optimization, if the users invoke {\fontfamily{bch}\selectfont\small\itshape state.HCRB()} in the single-parameter scenario, a warning will
arise to remind them to call {\fontfamily{bch}\selectfont\small\itshape state.QFIM()} instead.

\begin{algorithm}[tp]
\SetArgSty{}
\caption{AD for pure states} \label{algorithm:AD}
Receive the guessed probe state $\rho_{0}=|\psi_{\mathrm{in}}\rangle\langle\psi_{\mathrm{in}}|$; \\
\For {episode=1, $M$}{
Calculate the density matrix $\rho_{T}$ and its derivative $\partial_{\bold{x}}\rho_{T}$ at the target time $T$; \\
Calculate the objective function $f(T)$ with $\rho_{T}$ and $\partial_{\bold{x}}\rho_{T}$; \\
Calculate the gradients $\big\{\frac{\delta f(T)}{\delta c_{i}}\big\}$ with the automatic differentiation; \\
Update the coefficients $\{c_{i}\} \leftarrow \{c_{i}\}+\epsilon\big\{\frac{\delta f(T)}{\delta c_{i}}\big\}$; \\
Normalize the coefficients $\{c_i\}\leftarrow \frac{1}{\sqrt{\sum_j |c_j|^2}}\{c_i\}$.
}
Save the coefficients and reconstruct the state.
\end{algorithm}

In the previous section, we already showed the power of automatic differentiation (AD) in the construction of
auto-GRAPE. Similarly, it can also be used in the state optimization. Due to the convexity of the QFI and
QFIM~\cite{Toth2014,Liu2020}, the optimal probe states are pure states in most scenarios. Hence, we first
consider the state optimization within the set of pure states. The pseudocode of AD in state optimization
for pure states is given in Algorithm~\ref{algorithm:AD}.
In a specific basis $\{|i\rangle\langle i|\}$, a probe
state can be expanded as $|\psi\rangle=\sum_i c_i|i\rangle$, and the search of optimal probe states is equivalent
to the search of a set of normalized complex coefficients $\{c_i\}$. In AD, a guessed probe state is first given or
generated and evolved to the target time $T$ according to the given dynamics, during which the density matrices
and corresponding derivatives with respect to $\bold{x}$ are calculated and saved. Then after calculating the objective
function $f(T)$ at time $T$, all gradients $\{\delta f(T)/\delta c_i\}$ are evaluated via the automatic
differentiation, and the coefficients $\{c_i\}$ are updated accordingly with the step $\epsilon$. This step can
be adjusted via {\fontfamily{bch}\selectfont\small\itshape epsilon} in {\fontfamily{bch}\selectfont\small\itshape **kwargs}. Finally, the updated coefficients are normalized
as required by quantum mechanics. In the package, Adam is not applied by default in AD and it can be turned
on by setting {\fontfamily{bch}\selectfont\small\itshape Adam=True} in {\fontfamily{bch}\selectfont\small\itshape **kwargs}.

Regarding the gradient-free methods, apart from PSO, DE and DDPG, QuanEstimation also contains the Nelder-Mead
algorithm (NM)~\cite{Nelder1965}, which has already been used by Fr\"{o}wis et al.~\cite{Frowis2014} to perform
the state optimization in the case of collective spins. The detailed flow chart of NM to locate the minimum value
of an objective function can be found in Ref.~\cite{Liu2022}. To keep the paper self-contained, here
we present its pseudocode in Algorithm~\ref{algorithm:NM} for the search of the maximum value of $f$ at the target
time $T$.

\begin{algorithm}[tp]
\SetArgSty{}
\caption{NM for pure states} \label{algorithm:NM}
Receive a set of guessed states $|\psi_1\rangle,\cdots,|\psi_{n+1}\rangle$;\\
\For {episode=1, $M$}{
Evolve all states according to the given dynamics and calculate the objective function $f$
at time $T$; \\
Sort the states and reassign the indices to let
$f(|\psi_1\rangle)\geq f(|\psi_2\rangle)\geq\cdots\geq f(|\psi_{n+1}\rangle)$;\\
Calculate the average state $|\psi_{\mathrm{a}}\rangle=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{a}}}}
\sum^n_{k=1}|\psi_{k}\rangle$; \\
Calculate the reflected state
$|\psi_{\mathrm{r}}\rangle=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{r}}}}
[|\psi_{\mathrm{a}}\rangle+a_{\mathrm{r}}(|\psi_{\mathrm{a}}\rangle-|\psi_{n+1}\rangle)]$; \\
\uIf {$f(|\psi_{\mathrm{r}}\rangle)>f(|\psi_1\rangle)$}
{Calculate the expanded state
$|\psi_{\mathrm{e}}\rangle=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{e}}}}
[|\psi_{\mathrm{a}}\rangle+a_{\mathrm{e}}(|\psi_{\mathrm{r}}\rangle-|\psi_{\mathrm{a}}\rangle)]$; \\
\eIf {$f(|\psi_{\mathrm{r}}\rangle)\geq f(|\psi_{\mathrm{e}}\rangle)$}
{Replace $|\psi_{n+1}\rangle$ with $|\psi_{\mathrm{r}}\rangle$;}
{Replace $|\psi_{n+1}\rangle$ with $|\psi_{\mathrm{e}}\rangle$;}}
\uElseIf {$f(|\psi_1\rangle) \geq f(|\psi_{\mathrm{r}}\rangle) > f(|\psi_n\rangle)$}
{Replace $|\psi_{n+1}\rangle$ with $|\psi_{\mathrm{r}}\rangle$;}
\uElseIf {$f(|\psi_n\rangle) \geq f(|\psi_{\mathrm{r}}\rangle) > f(|\psi_{n+1}\rangle)$}
{Calculate the outside contracted state
$|\psi_{\mathrm{oc}}\rangle=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{oc}}}}
[|\psi_{\mathrm{a}}\rangle+a_{\mathrm{c}}(|\psi_{\mathrm{r}}\rangle-|\psi_{\mathrm{a}}\rangle)]$;\\
\eIf{$f(|\psi_{\mathrm{oc}}\rangle) \geq f(|\psi_{\mathrm{r}}\rangle)$}
{Replace $|\psi_{n+1}\rangle$ with $|\psi_{\mathrm{oc}}\rangle$;}
{Replace all $|\psi_k\rangle$ for $k\in[2,n+1]$ with
$\frac{1}{\sqrt{\mathcal{N}_k}}[|\psi_1\rangle+a_{\mathrm{s}}(|\psi_k\rangle-|\psi_1\rangle)]$;}
}
\Else {Calculate the inside contracted state
$|\psi_{\mathrm{ic}}\rangle=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{ic}}}}
[|\psi_{\mathrm{a}}\rangle-a_{\mathrm{c}}(|\psi_{\mathrm{a}}\rangle-|\psi_{n+1}\rangle)]$;\\
\eIf {$f(|\psi_{\mathrm{ic}}\rangle) > f(|\psi_{n+1}\rangle)$}
{Replace $|\psi_{n+1}\rangle$ with $|\psi_{\mathrm{ic}}\rangle$;}
{Replace all $|\psi_k\rangle$ for $k\in[2,n+1]$ with
$\frac{1}{\sqrt{\mathcal{N}_k}}[|\psi_1\rangle+a_{\mathrm{s}}(|\psi_k\rangle-|\psi_1\rangle)]$;}
}}
\end{algorithm}

In NM, $n+1$ guessed states are input and sorted descendingly according to the corresponding values of $f$, namely,
$f(|\psi_1\rangle)\geq\cdots\geq f(|\psi_{n+1}\rangle)$. In one episode of optimization, the average state
$|\psi_{\mathrm{a}}\rangle:=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{a}}}}\sum^n_{k=1}|\psi_k\rangle$ and the reflected state
$|\psi_{\mathrm{r}}\rangle:=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{r}}}}[|\psi_{\mathrm{a}}\rangle+a_{\mathrm{r}}
(|\psi_{\mathrm{a}}\rangle-|\psi_{n+1}\rangle)]$ are first calculated. In the case that the reflected state is
better than $|\psi_1\rangle$, i.e., $f(|\psi_{\mathrm{r}}\rangle)$ is larger than $f(|\psi_1\rangle)$, the expanded
state $|\psi_{\mathrm{e}}\rangle:=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{e}}}}[|\psi_{\mathrm{a}}\rangle+a_{\mathrm{e}}
(|\psi_{\mathrm{r}}\rangle-|\psi_{\mathrm{a}}\rangle)]$ is then calculated and compared to the reflected state. If
the reflected state is still better, then replace $|\psi_{n+1}\rangle$ with $|\psi_{\mathrm{r}}\rangle$, otherwise
replace $|\psi_{n+1}\rangle$ with $|\psi_{\mathrm{e}}\rangle$. In the case that the performance of the reflected
state lies between those of $|\psi_1\rangle$ and $|\psi_n\rangle$, just replace $|\psi_{n+1}\rangle$ with it. If its
performance lies between those of $|\psi_n\rangle$ and $|\psi_{n+1}\rangle$, then the outside contracted state
$|\psi_{\mathrm{oc}}\rangle:=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{oc}}}}[|\psi_{\mathrm{a}}\rangle+a_{\mathrm{c}}
(|\psi_{\mathrm{r}}\rangle-|\psi_{\mathrm{a}}\rangle)]$ is calculated and compared to the reflected state.
$|\psi_{n+1}\rangle$ is replaced with $|\psi_{\mathrm{oc}}\rangle$ if $|\psi_{\mathrm{oc}}\rangle$ outperforms the
reflected state, otherwise all states $\{|\psi_k\rangle\}$, except the best one $|\psi_1\rangle$, are replaced with
the states $\frac{1}{\sqrt{\mathcal{N}_k}}[|\psi_1\rangle+a_{\mathrm{s}}(|\psi_k\rangle-|\psi_1\rangle)]$ and the
program goes to the next episode. In the case that $|\psi_{\mathrm{r}}\rangle$ is no better than any state in
$\{|\psi_k\rangle\}$, the inside contracted state
$|\psi_{\mathrm{ic}}\rangle:=\frac{1}{\sqrt{\mathcal{N}_{\mathrm{ic}}}}
[|\psi_{\mathrm{a}}\rangle-a_{\mathrm{c}}(|\psi_{\mathrm{a}}\rangle-|\psi_{n+1}\rangle)]$ is then calculated and
compared to $|\psi_{n+1}\rangle$.
If it is better than $|\psi_{n+1}\rangle$, replace $|\psi_{n+1}\rangle$ with it,
otherwise perform the same replacement operation on all states as done previously. At the beginning of the next round,
all states are sorted in descending order again.
$\mathcal{N}_{\mathrm{a}}:=\langle\psi_{\mathrm{a}}|\psi_{\mathrm{a}}\rangle$
is the normalization coefficient, and similarly for $\mathcal{N}_{\mathrm{r}}$, $\mathcal{N}_{\mathrm{e}}$,
$\mathcal{N}_{\mathrm{oc}}$ and $\mathcal{N}_{\mathrm{ic}}$. A general setting of the coefficients is
$a_{\mathrm{r}}=1.0$, $a_{\mathrm{e}}=2.0$, $a_{\mathrm{c}}=a_{\mathrm{s}}=0.5$, which are also the default
values in the package. These coefficients can be adjusted in {\fontfamily{bch}\selectfont\small\itshape **kwargs} (shown in
Table~\ref{table:StateOpt_paras}) via {\fontfamily{bch}\selectfont\small\itshape ar}, {\fontfamily{bch}\selectfont\small\itshape ae}, {\fontfamily{bch}\selectfont\small\itshape ac} and {\fontfamily{bch}\selectfont\small\itshape as0}.
In the meantime, {\fontfamily{bch}\selectfont\small\itshape p\_num} in {\fontfamily{bch}\selectfont\small\itshape **kwargs} represents the state number $n+1$.

\begin{figure*}[tp]
\centering\includegraphics[width=17.5cm]{Fig_StataOpt_unitary.pdf}
\caption{(a) The performance of the optimal probe states searched via AD (cyan triangles),
RI (red pluses), PSO (blue crosses), DE (yellow circles) and NM (purple squares) in the
Lipkin–Meshkov–Glick model in the absence of noise. The blue dots represent the
value of $\sqrt{\lambda T}\delta g$ for the coherent spin state $|\pi/2,\pi/2\rangle$,
and the dash-dotted black and dashed black lines represent $1/\sqrt{N}$ and $1/N$,
respectively. (b) The convergence performance of AD (dash-dotted cyan line), RI (solid
red line), PSO (dotted blue line), DE (dashed yellow line), and NM (dotted star purple
line) in the case of $N=500$. (c1-c5) The searched optimal states with different algorithms
in the case of $N=100$. The target time is chosen as $\lambda T=10$. The true value of $g$
is 0.5, and the value of $h/\lambda$ is set to be $0.1$. Planck units are applied here.}
\label{fig:StateOpt_unitary}
\end{figure*}

\begin{algorithm}[tp]
\SetArgSty{}
\caption{RI}
Receive the guessed probe state $\rho_0$; \\
\For {episode=1, $M$}{
Evolve the state with $\rho=\sum_i K_i \rho_0 K^{\dagger}_i$;\\
Calculate the derivative $\partial_{a}\rho = \sum_i(\partial_{a}K_i)\rho_0 K^{\dagger}_i
+ K_i\rho_0(\partial_{a}K^{\dagger}_i)$; \\
Calculate the QFI and the SLD $L$ with $\rho$ and $\partial_{a}\rho$; \\
Calculate the matrix $\mathcal{M}$; \\
Find the eigenvector ${|\psi_{\mathrm{m}}\rangle}$ of $\mathcal{M}$ corresponding to
its largest eigenvalue; \\
Replace $\rho_0$ with $|\psi_{\mathrm{m}}\rangle\langle\psi_{\mathrm{m}}|$.}
Return the optimal state ${|\psi_{\mathrm{m}}\rangle}$ and the QFI.
\label{algorithm:Iter}
\end{algorithm}

\begin{figure*}[tp]
\centering\includegraphics[width=17.5cm]{Fig_StateOpt_noise.pdf}
\caption{The performance of probe states obtained via different algorithms for
(a) $N=8$ and (c) $N=30$ when the collective dephasing exists. The solid red line,
dashed star blue line, dash-dotted circle cyan line, and dashed purple line represent
the values of $\sqrt{\lambda T}\delta g$ for the searched states obtained via AD,
PSO, DE, and NM, respectively.
The dash-dotted green line represents that of NM
with 20 parallel sets. The dotted black line represents the result of
$|\pi/2,\pi/2\rangle$. (b1-b5) The searched optimal states for $N=8$. (d1-d5) The
searched optimal states for $N=30$. The target time $\lambda T=10$, and the true
value of $g$ is 0.5. The value of $h/\lambda$ is set to be $0.1$ and the decay
rate $\gamma/\lambda=0.1$. Planck units are applied here.}
\label{fig:StateOpt_noise}
\end{figure*}

Apart from the aforementioned algorithms, there also exist dedicated algorithms for the state optimization
in quantum parameter estimation. Here we introduce a reverse iterative algorithm (RI), which was first proposed
in Refs.~\cite{Demkowicz2011,Macieszczak2014} in the Bayesian estimation context, and then applied to the QFI
in Ref.~\cite{Macieszczak2013a}. In the case of single-parameter estimation, the QFI can be rewritten as
\begin{equation}
\label{eq:qfisup}
\mathcal{F}_{aa} = \sup_{A} \left[2\mathrm{Tr}(A \partial_a\rho)-\mathrm{Tr}(\rho A^2)\right].
\end{equation}
This form is equivalent to the standard definition of the QFI, as can be seen by solving the maximization
problem $2\mathrm{Tr}(A\partial_a\rho)-\mathrm{Tr}(\rho A^2)$ with respect to $A$, which is formally a
quadratic function of the matrix $A$. The resulting extremum condition yields the standard linear equation
$\partial_a\rho=\frac{1}{2}(A\rho+\rho A)$, i.e., the optimal $A=L_a$ is just the SLD operator. When this
solution is plugged into the formula, it yields $\mathrm{Tr}(\rho L^2_a)$, which is in agreement with
the standard definition of the QFI. Consider the parameterization process described by the Kraus operators
given in Eq.~(\ref{eq:kraus_opt}), $\rho=\sum_i K_i(x)\rho_0 K_i^\dagger(x)$. Taking into account
Eq.~(\ref{eq:qfisup}), we see that the problem of identifying the optimal input state $\rho_0$ that maximizes
the QFI can be written as a double maximization problem:
\begin{equation}
 \sup_{\rho_0}\mathcal{F}_{aa} = \sup_{A,\rho_0}
 \left[2\mathrm{Tr}(A \partial_a\rho)-\mathrm{Tr}(\rho A^2)\right].
\end{equation}
This observation leads to an effective iterative protocol, where for a fixed $\rho_0$ we find the optimal $A$
that maximizes the above expression, and then fixing the optimal $A$ found in the previous step we look for the
optimal $\rho_0$. In order to implement the procedure, note that the QFI can be rewritten in the `Heisenberg
picture' form, where the Kraus operators effectively act on the $L_a$ operators, as
\begin{equation}
\mathcal{F}_{aa}=\mathrm{Tr}\left(\rho_0 \mathcal{M}\right)
\end{equation}
with
\begin{equation}
\mathcal{M}\!=\!\sum_i 2\!\left[(\partial_a K^{\dagger}_i)L_a K_i
\!+\!K^{\dagger}_i L_a(\partial_a K_i)\right]\!-\!K_i^\dagger L^2_a K_i.
\end{equation}
This equation indicates that for a fixed $\mathcal{M}$ (i.e.~fixed $A=L_a$), the optimal probe state is nothing
but the eigenvector corresponding to the maximum eigenvalue of $\mathcal{M}$. The pseudocode of this algorithm
is given in Algorithm~\ref{algorithm:Iter}. In one round of the optimization, $\mathcal{M}$ is calculated, and
its eigenvector corresponding to the maximum eigenvalue is used as the probe
state in the next round. In the package, this method can be invoked via {\fontfamily{bch}\selectfont\small\itshape method="RI"}.
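To make the update rule explicit, the snippet below is a minimal NumPy sketch of a single RI iteration for a given
list of Kraus operators and their derivatives with respect to the unknown parameter. The SLD is obtained by solving
the Lyapunov equation $\partial_a\rho=\frac{1}{2}(L_a\rho+\rho L_a)$; the function name and the use of SciPy here
are our own choices and not part of the package.
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def RI_step(rho0, K, dK):
    """One reverse-iteration update; returns the new probe state and the QFI."""
    rho = sum(Ki @ rho0 @ Ki.conj().T for Ki in K)
    drho = sum(dKi @ rho0 @ Ki.conj().T + Ki @ rho0 @ dKi.conj().T
               for Ki, dKi in zip(K, dK))
    # SLD from rho*L + L*rho = 2*drho
    L = solve_continuous_lyapunov(rho, 2*drho)
    # M = sum_i 2[(dK_i^+) L K_i + K_i^+ L (dK_i)] - K_i^+ L^2 K_i
    M = sum(2*(dKi.conj().T @ L @ Ki + Ki.conj().T @ L @ dKi)
            - Ki.conj().T @ L @ L @ Ki for Ki, dKi in zip(K, dK))
    QFI = np.real(np.trace(rho0 @ M))
    vals, vecs = np.linalg.eigh(M)
    psi = vecs[:, -1]              # eigenvector of the largest eigenvalue
    return np.outer(psi, psi.conj()), QFI
\end{lstlisting}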
The number
of episodes and the seed can be adjusted in {\fontfamily{bch}\selectfont\small\itshape **kwargs} (shown in Table~\ref{table:StateOpt_paras})
via {\fontfamily{bch}\selectfont\small\itshape max\_episode} and {\fontfamily{bch}\selectfont\small\itshape seed}. Notice that this method is only available when
{\fontfamily{bch}\selectfont\small\itshape state.Kraus()} is invoked, and in the current version of the package, it only works for the
single-parameter quantum estimation, i.e., the objective function is the QFI. The extension to the CFI and
the case of multiparameter estimation will be thoroughly discussed in an independent paper.

\emph{Example.} Here we use the Lipkin–Meshkov–Glick model as an example to show the state optimization with
QuanEstimation. The Hamiltonian of this model is~\cite{Lipkin1965}
\begin{equation}
H_{\mathrm{LMG}}=-\frac{\lambda}{N}(J_1^2+gJ_2^2)-hJ_3,
\end{equation}
where $J_i=\frac{1}{2}\sum_{j=1}^N \sigma_i^{(j)}$ ($i=1,2,3$) is the collective spin operator with $\sigma_i^{(j)}$
the $i$th Pauli matrix for the $j$th spin. $N$ is the total number of spins, $\lambda$ is the spin–spin interaction
strength, $h$ is the strength of the external field and $g$ is the anisotropic parameter. All searches with
different algorithms start from the coherent spin state $|\theta=\pi/2,\phi=\pi/2\rangle$, which is defined
by~\cite{Ma2011}
\begin{equation}
|\theta,\phi\rangle=\exp\left(-\frac{\theta}{2}e^{-i\phi} J_{+}+\frac{\theta}{2}e^{i\phi}J_-\right)|J,J\rangle,
\end{equation}
where $|J,J\rangle$ is a Dicke state with $J=N/2$ and $J_{\pm}=J_1\pm iJ_2$. Here we consider the case
that the search is constrained to pure states with fixed $J=N/2$, which can be expressed as
$|\psi\rangle=\sum^J_{m=-J}c_m|J,m\rangle$ with $|J,m\rangle$ a general Dicke state and $c_m$ a complex
coefficient. Let us first study the single-parameter scenario with $g$ the parameter to be estimated.

\begin{figure*}[tp]
\centering\includegraphics[width=15.5cm]{Fig_StateOpt_multipara.pdf}
\caption{The performance of different algorithms for the weight matrix (a)
$W=\mathrm{diag}(1/2,1/2)$ and (b) $W=\mathrm{diag}(1/3,2/3)$. The solid red
line, dashed star blue line, dash-dotted circle cyan line, dashed purple line
and dash-dotted green line represent the results obtained via AD, PSO, DE, NM,
and NM with 20 parallel sets, respectively. The dotted black line represents the
result of $|\pi/2,\pi/2\rangle$. (c1-c2) The optimal states obtained from AD and
DE for $W=\mathrm{diag}(1/2,1/2)$. (d1-d2) The optimal states obtained from AD
and DE for $W=\mathrm{diag}(1/3,2/3)$. The target time $\lambda T=10$. The true
values of $g$ and $h/\lambda$ are set to be 0.5 and $0.1$. Planck units are
applied here.}
\label{fig:StateOpt_multipara}
\end{figure*}

The performance of the optimal probe states searched via AD (cyan triangles), RI (red pluses), PSO (blue crosses),
DE (yellow circles) and NM (purple squares) in the absence of noise is given in Fig.~\ref{fig:StateOpt_unitary}(a).
Here $\delta g=1/\sqrt{\mathcal{F}_{gg}}$ is the theoretical optimal deviation for $g$. The target time is taken
as $\lambda T=10$ (Planck units are applied). The performance of DDPG is not good enough and thus not shown in
the figure. For a very small $N$, the searched optimal states do not show an obvious advantage over the state
$|\pi/2,\pi/2\rangle$ (blue dots).
However, when $N$ is large the advantage becomes significant, and all searched states
outperform $|\pi/2,\pi/2\rangle$ and $1/\sqrt{N}$ (dash-dotted black line) in the case that
$N$ is larger than around 6. For a large $N$, the performance of the states obtained via AD and RI is the best and
very close to $1/N$ (dashed black line). The performance of DE and PSO basically coincides (more
accurately, the performance of DE is slightly better than that of PSO), but is worse than that of AD and RI. The
performance of NM is the worst in this example. Please note that we cannot rashly say that the general performance
of NM is worse than that of DE or PSO in the state optimization just based on this plot, as different parameter settings in
the algorithms could sometimes dramatically affect the behaviors; here we basically use the generally recommended settings
in all algorithms. Nevertheless, the different sensitivities of the final result to the parameter settings still indicate
that DE and PSO locate optimal states more easily than NM, at least in this example.

Regarding the convergence performance in this example, as shown in Fig.~\ref{fig:StateOpt_unitary}(b), RI shows
the fastest convergence speed and the best optimized value. AD is slightly slower than RI but still way faster than the
gradient-free methods. However, the disadvantage of AD is that its occupation of memory grows very fast with the increase
of $N$. Hence, RI would be the best choice to try first for the state optimization in the case of unitary parameterization.
Finally, as a demonstration, the optimal states searched via different algorithms in the case of $N=100$ are shown
in Figs.~\ref{fig:StateOpt_unitary}(c1-c5).

\emph{Example.} When the collective dephasing is involved, the dynamics of this system is governed by the following
master equation
\begin{equation}
\partial_t\rho = -i[H_{\mathrm{LMG}},\rho]+\gamma \left(J_3\rho J_3-\frac{1}{2}\left\{\rho, J^2_3\right\}\right)
\label{eq:dephasing_LMG}
\end{equation}
with $\gamma$ the decay rate. The performance of the optimal probe states searched via AD (solid red line), PSO (dashed
star blue line), DE (dash-dotted circle cyan line) and NM (dashed purple line) is illustrated with $N=8$ and $N=30$
in Figs.~\ref{fig:StateOpt_noise}(a) and \ref{fig:StateOpt_noise}(c), respectively. The corresponding optimal probe
states are given in Figs.~\ref{fig:StateOpt_noise}(b1-b4) for $N=8$ and Figs.~\ref{fig:StateOpt_noise}(d1-d4) for
$N=30$. In both cases, the states obtained via AD, PSO and DE basically present coinciding performance at time $T$,
and outperform $|\pi/2,\pi/2\rangle$ (dotted black lines). Similar to the unitary scenario, the state obtained via
NM shows a worse performance at time $T$, and NM even fails to find a better state than $|\pi/2,\pi/2\rangle$ in the
case of $N=30$. In this figure, the number of parallel sets (also called particles in PSO and populations in DE) is
10 for NM, DE and PSO. After increasing the number of parallel sets from 10 to 20 [labelled by NM (20) in the
plot], the performance of NM (dash-dotted green line) improves in the case of $N=8$, and basically coincides with
the others. However, it still fails to find a better state when $N=30$; more parallel sets may be required for
NM in this case.
The states obtained via NM (20) are shown in Figs.~\ref{fig:StateOpt_noise}(b5) and
\ref{fig:StateOpt_noise}(d5) for $N=8$ and $N=30$, respectively.

\begin{table}[bp]
\begin{tabular}{c|c|c|c}
\hline
\hline
~~Algorithms~~ & ~~method=~~ & \multicolumn{2}{c}{~~**kwargs and default values~~}\\
\hline
\multirow{7}{*}{PSO} & \multirow{7}{*}{"PSO"} & "p\_num" & 10 \\
 & & "measurement0" & [] \\
 & & "max\_episode" & [1000,100] \\
 & & "c0" & 1.0 \\
 & & "c1" & 2.0 \\
 & & "c2" & 2.0 \\
 & & "seed" & 1234 \\
\hline
\multirow{6}{*}{DE} & \multirow{6}{*}{"DE"} & "p\_num" & 10 \\
 & & "measurement0" & [] \\
 & & "max\_episode" & 1000 \\
 & & "c" & 1.0 \\
 & & "cr" & 0.5 \\
 & & "seed" & 1234 \\
\hline
\multirow{5}{*}{AD} & \multirow{7}{*}{"AD"} & "Adam" & False \\
\multirow{5}{*}{(available when} & & "measurement0" & [] \\
\multirow{5}{*}{{\fontfamily{bch}\selectfont\small\itshape mtype="input"})} & & "max\_episode" & 300 \\
 & & "epsilon" & 0.01 \\
 & & "beta1" & 0.90 \\
 & & "beta2" & 0.99 \\
\hline
\hline
\end{tabular}
\caption{Available methods for measurement optimization in QuanEstimation and
corresponding default parameter settings. Notice that AD is only available
when {\fontfamily{bch}\selectfont\small\itshape mtype="input"}. Here {\fontfamily{bch}\selectfont\small\itshape measurement0} is the initial
guess of the measurement.}
\label{table:MeasOpt_paras}
\end{table}

Next we discuss the state optimization in multiparameter estimation. Consider the simultaneous estimation
of $g$ and $h/\lambda$ in the Lipkin–Meshkov–Glick model with the dynamics in Eq.~(\ref{eq:dephasing_LMG}).
Figures~\ref{fig:StateOpt_multipara}(a) and \ref{fig:StateOpt_multipara}(b) show the performance of optimal
states obtained via different algorithms for $W=\mathrm{diag}(1/2,1/2)$ and $W=\mathrm{diag}(1/3,2/3)$, respectively.
In both cases AD (solid red line) and DE (dash-dotted circle cyan line) present the best performance at the target
time $\lambda T=10$, and DE even slightly outperforms AD in the case of $W=\mathrm{diag}(1/2,1/2)$. The performance
of PSO (dashed star blue line) is worse than that of AD and DE, yet still better than NM (dashed purple line) and NM with
20 parallel sets (dash-dotted green line). NM does not even outperform the coherent spin state
$|\pi/2,\pi/2\rangle$ (dotted black line) in the case of $W=\mathrm{diag}(1/2,1/2)$. Hence, apart from gradient-based
algorithms like AD, PSO and DE would also be good choices for state optimization. The optimal states obtained from AD
and DE for $W=\mathrm{diag}(1/2,1/2)$ and $W=\mathrm{diag}(1/3,2/3)$ are demonstrated in
Figs.~\ref{fig:StateOpt_multipara}(c1-c2) and Figs.~\ref{fig:StateOpt_multipara}(d1-d2), respectively. Although the
performance on $\mathrm{Tr}(W\mathcal{F}^{-1})$ is basically the same for these states, they may still differ in
other properties like the difficulty of preparation, the robustness to imperfect preparation and so on.
Hence,
in practice one needs to compare these optimal states comprehensively case by case to make wise choices.


\section{Measurement optimization}
\label{sec:measurement_opt}

\begin{figure*}[tp]
\centering\includegraphics[width=17.5cm]{Fig_Mopt.pdf}
\caption{(a) The performance of optimal projective measurements obtained via PSO
(blue crosses) and DE (yellow circles) in the case of single-parameter estimation.
The dashed cyan line represents the values of $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$
and the dotted black line represents the values of $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{c}}\omega$
with respect to the projective measurement $\{\Pi_{+}\!=\!|+\rangle\langle+|,
\Pi_{-}\!=\!|-\rangle\langle-|\}$. The true value $\omega_{\mathrm{tr}}=1$. Planck units
are applied in this plot. (b) The performance of optimal projective measurements obtained
via PSO (blue crosses) and DE (yellow circles) in the case of multiparameter estimation in
the absence of control. The black underlines and cyan triangles represent the values of
$\mathrm{Tr}(W\mathcal{F}^{-1})$ without and with optimal control. The red pentagrams
represent the controlled values of $\mathrm{Tr}(W\mathcal{I}^{-1})$ with the optimal
measurements obtained in the non-controlled scenario. (c) Demonstration of the optimal
projective measurement obtained by DE in the multiparameter estimation at the target time
$T=0.04\,\mu$s. The red and blue bars represent the real and imaginary parts of the coefficients
of the optimal measurement in the basis $\{|1\!\uparrow\rangle,|1\!\downarrow\rangle,
|0\!\uparrow\rangle,|0\!\downarrow\rangle,|\!-\!1\!\uparrow\rangle,|\!-\!1\!\downarrow\rangle\}$.}
\label{fig:Mopt}
\end{figure*}

Measurement is critical in quantum parameter estimation~\cite{Yu2021,Rath2021,Zhang2020,XuL2021}. On one hand,
all asymptotic bounds, provided they are attainable, require certain optimal measurements to be saturated, and
hence the search for optimal measurements is a natural requirement in theory to approach the ultimate precision
limit. On the other hand, the choice of measurements is usually limited in practice, and how to find the
conditionally optimal measurements within the set of practical measurements at hand is an important step towards
the design of a realizable scheme. QuanEstimation includes the optimization of measurements for several scenarios.
The first one is the optimization of rank-one projective measurements. A set of projective measurements $\{\Pi_i\}$
satisfies $\Pi_i\Pi_j=\Pi_i\delta_{ij}$ and $\sum_i\Pi_i=\openone$, and it can be rewritten as
$\{|\phi_i\rangle\langle\phi_i|\}$ with $\{|\phi_i\rangle\}$ an orthonormal basis in the Hilbert space. In
this way, the optimization of a rank-one projective measurement is equivalent to identifying the optimal basis,
which can be realized using PSO and DE in QuanEstimation. In this case the automatic differentiation does not
work very well due to the Gram-Schmidt orthogonalization procedure performed after the update of $\{|\phi_i\rangle\}$
according to the gradients. In some cases, the realizable measurement has to be limited to
linear combinations of a given set of POVM operators; hence, the second scenario is to find the optimal linear
combination of an input measurement.
Moreover, in some cases the measurement $\{\Pi_i\}$ has to be fixed, but
an arbitrary unitary operation can be invoked before performing the measurement, which is equivalent to a new
measurement $\{U\Pi_i U^{\dagger}\}$. Based on this, the third scenario is to find the optimal rotated
measurement of an input measurement.

The codes in QuanEstimation for the execution of measurement optimization are as follows:
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
m = MeasurementOpt(mtype="projection",
    minput=[],savefile=False,
    method="DE",**kwargs)
m.dynamics(tspan,rho0,H0,dH,Hc=[],ctrl=[],
    decay=[])
m.CFIM(W=[])
\end{lstlisting}
In the case that the parameterization is described by the Kraus operators, replace {\fontfamily{bch}\selectfont\small\itshape m.dynamics()} with
the code {\fontfamily{bch}\selectfont\small\itshape m.Kraus(rho0,K,dK)}. The optimization method can be adjusted via {\fontfamily{bch}\selectfont\small\itshape method=" "}
and corresponding parameters can be set via {\fontfamily{bch}\selectfont\small\itshape **kwargs}. The available optimization methods and
corresponding default parameter settings are given in Table~\ref{table:MeasOpt_paras}. Two files "f.csv" and
"measurements.csv" will be generated at the end of the program. When {\fontfamily{bch}\selectfont\small\itshape savefile=True}, the
measurements obtained in all episodes will be saved in "measurements.csv".

The variable {\fontfamily{bch}\selectfont\small\itshape mtype=" "} defines the type of scenarios for the optimization, and currently it includes
two options: {\fontfamily{bch}\selectfont\small\itshape mtype="projection"} and {\fontfamily{bch}\selectfont\small\itshape mtype="input"}. The first one means the optimization
is performed in the first scenario, i.e., within the set of projective measurements. In this case,
{\fontfamily{bch}\selectfont\small\itshape minput=[]} should be kept empty. Since $|\phi_i\rangle$ in a rank-one projective measurement
$\{|\phi_i\rangle\langle\phi_i|\}$ can be expanded as $|\phi_i\rangle=\sum_j C_{ij}|j\rangle$ in a given
orthonormal basis $\{|j\rangle\}$, the optimization of the rank-one projective measurement is equivalent to
the optimization of a complex matrix $C$. When the gradient-free methods are applied, all entries in $C$ are
updated via the given algorithm in each episode, then adjusted via the Gram-Schmidt orthogonalization
procedure to make sure $\{|\phi_i\rangle\langle\phi_i|\}$ is a legitimate projective measurement, i.e.,
$\langle\phi_i|\phi_j\rangle=\delta_{ij},~\forall i,j$ and $\sum_i|\phi_i\rangle\langle\phi_i|=\openone$. The
second option {\fontfamily{bch}\selectfont\small\itshape mtype="input"} means the optimization is performed in the second and third scenarios.
The input rule of {\fontfamily{bch}\selectfont\small\itshape minput} for the second scenario is {\fontfamily{bch}\selectfont\small\itshape minput=["LC", [Pi1,Pi2,...], m]} and
for the third one is {\fontfamily{bch}\selectfont\small\itshape minput=["rotation", [Pi1,Pi2,...]]}. Here {\fontfamily{bch}\selectfont\small\itshape [Pi1,Pi2,...]} is a list of
matrices representing the input measurement $[\Pi_1,\Pi_2,\dots]$.
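Before detailing the remaining inputs, a minimal sketch of the second (LC) scenario may be helpful. It assumes the
import path {\fontfamily{bch}\selectfont\small\itshape from quanestimation import MeasurementOpt}; the single-qubit model, the four-operator input POVM and the
choice of two output operators are placeholders of our own, meant only to illustrate the calling pattern rather than
to reproduce the examples below.
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
import numpy as np
from quanestimation import MeasurementOpt

# placeholder single-qubit model: H0 = omega*sigma_3/2, estimate omega
sz = np.diag([1., -1.])
H0, dH = 0.5*sz, [0.5*sz]
rho0 = 0.5*np.ones((2, 2), dtype=complex)   # probe state |+><+|
tspan = np.linspace(0., 10., 1000)

# input POVM: four halved projectors (sigma_1 and sigma_3 bases), summing to identity
kets = [np.array([1., 1.])/np.sqrt(2), np.array([1., -1.])/np.sqrt(2),
        np.array([1., 0.]), np.array([0., 1.])]
povm = [0.5*np.outer(k, k.conj()) for k in kets]

# LC scenario: search the best 2-operator combination of the 4 input operators
m = MeasurementOpt(mtype="input", minput=["LC", povm, 2],
                   savefile=False, method="DE")
m.dynamics(tspan, rho0, H0, dH)
m.CFIM(W=[])
\end{lstlisting}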
The variable {\\fontfamily{bch}\\selectfont\\small\\itshape m} in the second\nscenario is an integer representing the number of operators of the output measurement, and thus should be no\nlarger than that of the input measurement. For example, assume the input measurement is $\\{\\Pi_i\\}^6_{i=1}$ and\ninput 4 in the position of {\\fontfamily{bch}\\selectfont\\small\\itshape m} means the the output measurement is $\\{\\Pi^{\\prime}_{i}\\}^4_{i=1}$ where\n$\\Pi^{\\prime}_i=\\sum^{6}_{j=1}B_{ij}\\Pi_j$. The optimization is to find an optimal real matrix $B$ for the optimal\nCFI or $\\mathrm{Tr}(W\\mathcal{I}^{-1})$. To make sure the updated measurement in each episode is still a legitimate\nPOVM, all entries of $B$ are limited in the regime $[0,1]$ and $\\sum_{i}B_{ij}$ is required to be 1, which is\nrealized by the normalization process. In this scenario, apart from PSO and DE, AD can also be implemented. In\nthe third scenario, the unitary operation is expressed by $U=\\prod_k \\exp(i s_k\\lambda_k)$ where $\\lambda_k$ is\na SU($N$) generator and $s_k$ is a real number in the regime $[0,2\\pi]$. The optimization is to find an optimal\nset of $\\{s_k\\}$ for the optimal CFI or $\\mathrm{Tr}(W\\mathcal{I}^{-1})$, and similar to the second scenario, AD\nis also available here besides PSO and DE. In the case that {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"projection\"}, each entry of\n{\\fontfamily{bch}\\selectfont\\small\\itshape measurement0} in {**kwargs} is a list of arrays, and in the case that {\\fontfamily{bch}\\selectfont\\small\\itshape mtype=\"input\"}, each\nentry is an array.\n\n\\emph{Example.} Now we consider two models to demonstrate the measurement optimizations in the first scenario. The\nfirst one is a single-parameter case with the single-qubit Hamiltonian $H=\\omega\\sigma_3\/2$ and dynamics in\nEq.~(\\ref{eq:ME_spon}). $\\delta_{\\mathrm{c}}\\omega$ and $\\delta_{\\mathrm{q}}\\omega$ are defined in\nEqs.~(\\ref{eq:c_deviation}) and (\\ref{eq:q_deviation}). As shown in Fig.~\\ref{fig:Mopt}(a), $\\delta_{\\mathrm{c}}\\omega$\nfor the projective measurement $\\{\\Pi_{+}\\!=\\!|+\\rangle\\langle+|,\\Pi_{-}\\!=\\!|-\\rangle\\langle-|\\}$ (dotted black line) can\nonly reach $\\delta_{\\mathrm{q}}\\omega$ (dashed cyan line) at some specific time points, which has already been shown in\nSec.~\\ref{sec:QCRB}. However, utilizing the optimal projective measurements obtained via PSO (blue crosses) and DE (yellow\ncircles), $\\delta_{\\mathrm{c}}\\omega$ saturates $\\delta_{\\mathrm{q}}\\omega$ for all target time. This performance coincides\nwith the common understanding that the QFI can be theoretically attained by certain optimal measurements.\n\nIn the case of multiparameter estimation, we use the Hamiltonian in Eq.~(\\ref{eq:NV_H}) and dynamics in\nEq.~(\\ref{eq:NV_ME}) to demonstrate the performance of the optimal projective measurements. The magnetic field\n$\\vec{B}$ is still the quantity to be estimated. Different with the single-parameter case, the values of\n$\\mathrm{Tr}(W\\mathcal{I}^{-1})$ for the optimal measurements found by PSO (blue crosses) and DE (yellow circles)\ncannot attain $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ (black underlines) in the absence of control, as shown in\nFig.~\\ref{fig:Mopt}(b). The gap between $\\mathrm{Tr}(W\\mathcal{F}^{-1})$ and $\\mathrm{Tr}(W\\mathcal{I}^{-1})$ is\ndue to the fact that the quantum Cram\\'{e}r-Rao bound is not attainable here. 
Next, together with the optimal
measurement which gives the lowest $\mathrm{Tr}(W\mathcal{I}^{-1})$, the control is also invoked to further evaluate
the reduction of $\mathrm{Tr}(W\mathcal{I}^{-1})$. Utilizing the optimal controls obtained via auto-GRAPE, the values
of $\mathrm{Tr}(W\mathcal{I}^{-1})$ (red pentagrams) continue to decrease compared to the non-controlled case, yet they
are still unable to attain the controlled values of $\mathrm{Tr}(W\mathcal{F}^{-1})$ (cyan triangles) in general due
to the attainability problem. Nevertheless, their differences are very insignificant for some target times, indicating
that the combined performance of the optimal measurement and optimal control approaches the ultimate precision limit.
The optimal measurement $\{|\phi_1\rangle\langle\phi_1|,\cdots,|\phi_6\rangle\langle\phi_6|\}$ obtained by DE in the
absence of control is demonstrated in Fig.~\ref{fig:Mopt}(c). The red and blue bars represent the real and imaginary
parts of the coefficients of $|\phi_1\rangle$ to $|\phi_6\rangle$ in the basis
$\{|1\!\uparrow\rangle,|1\!\downarrow\rangle,|0\!\uparrow\rangle,|0\!\downarrow\rangle,|\!-\!1\!\uparrow\rangle,
|\!-\!1\!\downarrow\rangle\}$.

\begin{figure}[tp]
\centering\includegraphics[width=8.5cm]{Fig_Mopt_input.pdf}
\caption{Demonstration of the measurement optimization in the second (LC) and
third scenarios (rotation). The cyan upward triangles, blue crosses and
yellow circles represent the performance of optimal measurements found by AD, PSO,
and DE, respectively, in the second scenario. The red downward triangles, green
diamonds and orange pentagrams represent the performance of optimal measurements
found by AD, PSO, and DE in the third scenario.}
\label{fig:Mopt_input}
\end{figure}

The optimizations in the second and third scenarios are also demonstrated with the Hamiltonian in
Eq.~(\ref{eq:NV_H}) and dynamics in Eq.~(\ref{eq:NV_ME}). The input measurement is taken as
$\{|ij\rangle\langle ij|\}_{i=0,\pm 1;j=\uparrow,\downarrow}$, which includes 6 operators. In the second
scenario, the number of output POVM operators is set to be 4. As shown in Fig.~\ref{fig:Mopt_input}, the
performance of the measurements found by AD (cyan upward triangles), PSO (blue crosses) and DE (yellow circles)
approaches and even reaches that of the input measurement (magenta pluses). This fact indicates that in this
case, an optimal 4-operator measurement can reach the performance of the original 6-operator measurement, and
the reduction of the number of operators may benefit the practical precision of the measurements in experiments.
In the third scenario, the performance of the optimal measurements found by AD (red downward triangles), PSO
(green diamonds) and DE (orange pentagrams) is not only significantly better than that of the input measurement,
but also approaches the ultimate precision limit given by $\mathrm{Tr}(W\mathcal{F}^{-1})$ (black underlines),
indicating that the performance of these optimal measurements is very close to that of the globally optimal
measurements, if any exist.
The probe states, the true values of the parameters to be estimated and other
parameters are set to be the same as those in Sec.~\ref{sec:multi}.


\section{Comprehensive optimization}
\label{sec:comprehensive_opt}

\begin{table}[tp]
\begin{tabular}{c|c|c|c}
\hline
\hline
~~Algorithms~~ & ~~method=~~ & \multicolumn{2}{c}{~~**kwargs and default values~~}\\
\hline
\multirow{9}{*}{PSO} & \multirow{9}{*}{"PSO"} & "p\_num" & 10 \\
 & & "psi0" & [] \\
 & & "ctrl0" & [] \\
 & & "measurement0" & [] \\
 & & "max\_episode" & [1000,100] \\
 & & "c0" & 1.0 \\
 & & "c1" & 2.0 \\
 & & "c2" & 2.0 \\
 & & "seed" & 1234 \\
\hline
\multirow{8}{*}{DE} & \multirow{8}{*}{"DE"} & "p\_num" & 10 \\
 & & "psi0" & [] \\
 & & "ctrl0" & [] \\
 & & "measurement0" & [] \\
 & & "max\_episode" & 1000 \\
 & & "c" & 1.0 \\
 & & "cr" & 0.5 \\
 & & "seed" & 1234 \\
\hline
\multirow{7}{*}{AD} & \multirow{7}{*}{"AD"} & "Adam" & False \\
\multirow{7}{*}{(available} & & "psi0" & [] \\
\multirow{7}{*}{{for SC})} & & "ctrl0" & [] \\
 & & "measurement0" & [] \\
 & & "max\_episode" & 300 \\
 & & "epsilon" & 0.01 \\
 & & "beta1" & 0.90 \\
 & & "beta2" & 0.99 \\
\hline
\hline
\end{tabular}
\caption{Available methods for comprehensive optimization in QuanEstimation and
corresponding default parameter settings. Notice that AD is only available
when {\fontfamily{bch}\selectfont\small\itshape com.SC()} is called.}
\label{table:CompOpt_paras}
\end{table}

\begin{figure*}[tp]
\centering\includegraphics[width=16cm]{Fig_compre.pdf}
\caption{Illustration of the comprehensive optimization (first lines with
gray background) and combination of univariate optimizations (second lines)
in four types of multivariate optimizations, including the optimizations of
(a) the probe state and measurement (SM), (b) the probe state and control (SC),
(c) control and measurement (CM), and (d) the probe state, control, and measurement
(SCM).}
\label{fig:compre}
\end{figure*}

The previous sections focused on the univariate (single-variable) optimizations. However, in a
practical scenario the probe state, control (if available) and measurement may all need to be
optimized. More importantly, the optimal results obtained for a univariate optimization may cease
to be optimal when other variables are involved. For example, the optimal probe state and
measurement for the non-controlled case may not be optimal anymore in the controlled case. Hence,
sometimes a comprehensive optimization, i.e., simultaneous multivariate optimization, is needed.

QuanEstimation can deal with four types of multivariate optimizations, including the optimizations of
the probe state and measurement (SM), the probe state and control (SC), control and measurement (CM),
and all three together (SCM). In these scenarios, the key feature of comprehensive optimization is
that all variables are optimized simultaneously. Regarding the objective function, in the cases of SM,
CM, and SCM, namely, when the measurement is involved, it has to be dependent on the measurement.
In the current version of the package it is chosen as the CFI or $\mathrm{Tr}(W\mathcal{I}^{-1})$.
In the case of SC, the objective function could be either the QFI/$\mathrm{Tr}(W\mathcal{F}^{-1})$ or the
CFI/$\mathrm{Tr}(W\mathcal{I}^{-1})$ for a flexible or fixed choice of measurement.
The process of\ncomprehensive optimizations and corresponding objective functions have been illustrated in the first\nlines (with gray background) in Figs.~\\ref{fig:compre}(a-d). In QuanEstimation, the codes for the\nexecution of comprehensive optimization are:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\ncom = ComprehensiveOpt(savefile=False,\n method=\"DE\",**kwargs)\ncom.dynamics(tspan,H0,dH,Hc=[],ctrl=[],\n decay=[],ctrl_bound=[])\ncom.SM(W=[])\ncom.SC(W=[],M=[],target=\"QFIM\",LDtype=\"SLD\")\ncom.CM(rho0,W=[])\ncom.SCM(W=[])\n\\end{lstlisting}\nIn the case that the parameterization is described by the Kraus operators, replace {\\fontfamily{bch}\\selectfont\\small\\itshape com.dynamics()}\nwith the code {\\fontfamily{bch}\\selectfont\\small\\itshape com.Kraus(K,dK)}. All four types of comprehensive optimizations can be called through\n{\\fontfamily{bch}\\selectfont\\small\\itshape com.SM()}, {\\fontfamily{bch}\\selectfont\\small\\itshape com.SC()}, {\\fontfamily{bch}\\selectfont\\small\\itshape com.CM()}, and {\\fontfamily{bch}\\selectfont\\small\\itshape com.SCM()}. Notice that if\n{\\fontfamily{bch}\\selectfont\\small\\itshape com.Kraus()} is invoked, only {\\fontfamily{bch}\\selectfont\\small\\itshape com.SM()} is available as control is not suitable for the\nparameterization process described by the Kraus operators. In {\\fontfamily{bch}\\selectfont\\small\\itshape com.CM()}, the input {\\fontfamily{bch}\\selectfont\\small\\itshape rho0}\nis a matrix representing the fixed probe state. In {\\fontfamily{bch}\\selectfont\\small\\itshape com.SC()}, the objective function can be set via\n{\\fontfamily{bch}\\selectfont\\small\\itshape target=\" \"}, including three choices {\\fontfamily{bch}\\selectfont\\small\\itshape target=\"QFIM\"} (default), {\\fontfamily{bch}\\selectfont\\small\\itshape target=\"CFIM\"},\nand {\\fontfamily{bch}\\selectfont\\small\\itshape target=\"HCRB\"}. If a set of measurement is input via {\\fontfamily{bch}\\selectfont\\small\\itshape M=[]}, the objective function\nwill be automatically chosen as the CFIM regardless of the input in {\\fontfamily{bch}\\selectfont\\small\\itshape target=\" \"}. The type of QFIM\ncan be adjusted via {\\fontfamily{bch}\\selectfont\\small\\itshape LDtype=\" \"} ({\\fontfamily{bch}\\selectfont\\small\\itshape \"SLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"RLD\"}, {\\fontfamily{bch}\\selectfont\\small\\itshape \"LLD\"}). The\navailable methods for the comprehensive optimization and corresponding default parameter settings are given in\nTable~\\ref{table:CompOpt_paras}. Notice that AD is only available when {\\fontfamily{bch}\\selectfont\\small\\itshape com.SC()} is called and\nthe objective function is not the HCRB. At the end of the program, \"f.csv\" will be generated including the values\nof the objective function in all episodes. In the meantime, some or all of the files \"controls.csv\", \"states.csv\",\nand \"measurements.csv\" will also be generated according to the type of comprehensive optimization.\n\nAlternatively, the multivariate optimization can also be finished by the combination of univariate\noptimizations, as shown in the second lines in Figs.~\\ref{fig:compre}(a-d). In the case of SM (or\nCM) shown in Fig.~\\ref{fig:compre}(a) [Fig.~\\ref{fig:compre}(c)], one could first perform the state\n(control) optimization with QFI\/$\\mathrm{Tr}(W\\mathcal{F}^{-1})$ the objective function. 
Next, take the optimal state (control) found there as the fixed input, and further optimize the measurement with the CFI/$\mathrm{Tr}(W\mathcal{I}^{-1})$ as the objective function. If the optimized values of the CFI/$\mathrm{Tr}(W\mathcal{I}^{-1})$ in the second process reach the optimized values of the QFI/$\mathrm{Tr}(W\mathcal{F}^{-1})$ in the first process, the entire scheme is optimal. Things can be more complex in multiparameter estimation due to the attainability problem. The existence of a gap between the optimized $\mathrm{Tr}(W\mathcal{I}^{-1})$ and $\mathrm{Tr}(W\mathcal{F}^{-1})$ does not necessarily mean the scheme is not optimal. Nevertheless, there is no doubt that a smaller gap always implies a better scheme, at least in theory. In the case of SC, the state optimization and control optimization can be performed in turn, with the optimal quantity found in the previous turn taken as the fixed input [Fig.~\ref{fig:compre}(b)]. As in the comprehensive optimization, both the QFI/$\mathrm{Tr}(W\mathcal{F}^{-1})$ and the CFI/$\mathrm{Tr}(W\mathcal{I}^{-1})$ can be taken as the objective function in this case. Finally, in the case of SCM, the combination strategy for SC can be performed first with the QFI/$\mathrm{Tr}(W\mathcal{F}^{-1})$ as the objective function, and the measurement is then further optimized with the optimal state and control found there as the fixed input [Fig.~\ref{fig:compre}(d)]. As in the SM scenario, if the optimized CFI/$\mathrm{Tr}(W\mathcal{I}^{-1})$ obtained in the second process reaches the optimized QFI/$\mathrm{Tr}(W\mathcal{F}^{-1})$ in the first process, the entire scheme is optimal.

\emph{Example.} Now we provide some demonstrations of the comprehensive optimization with QuanEstimation and compare its performance with that of the combination strategy. First, consider a non-controlled example with the single-qubit Hamiltonian $\omega\sigma_3/2$, which is an SM scenario. The dynamics is governed by Eq.~(\ref{eq:ME_spon}) with decay rates $\gamma_{-}/\omega_{\mathrm{tr}}=0$ and $\gamma_{+}/\omega_{\mathrm{tr}}=0.1$. The target time is $\omega_{\mathrm{tr}}T=20$. In this case, the optimized values of $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{c}}\omega$ in the comprehensive optimization and the combination strategy are both 0.608 (in units of $\omega_{\mathrm{tr}}$, same below), equivalent to the optimal $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$ obtained in the state optimization alone, indicating that the schemes found by both strategies are indeed optimal in theory. Next we invoke the controls described in Eq.~(\ref{eq:ctrl_demo}). In the case of SC, the optimized $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$ obtained in the combination strategy is 0.441, and that in the comprehensive optimization is 0.440. Furthermore, in the case of SCM, the optimized $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{c}}\omega$ provided by the combination strategy is 0.441, equivalent to the optimal $\sqrt{\omega_{\mathrm{tr}}T}\delta_{\mathrm{q}}\omega$ obtained in the SC case, and that provided by the comprehensive optimization is 0.443. 
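For concreteness, a minimal script assembling the calls of this section for an SCM optimization of this single-qubit example might look as follows. It is a sketch rather than the exact script behind the numbers quoted above: the import line assumes the package-level interface used in the other examples, the control set and the pairing of the decay operators with the rates $\gamma_{\pm}$ follow our reading of Eqs.~(\ref{eq:ctrl_demo}) and (\ref{eq:ME_spon}), and the hyperparameters are simply the DE defaults of Table~\ref{table:CompOpt_paras}.
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
import numpy as np
from quanestimation import ComprehensiveOpt

# Free Hamiltonian H0 = omega*sigma_3/2 and its derivative w.r.t. omega
omega = 1.0
sigma_x = np.array([[0., 1.], [1., 0.]])
sigma_y = np.array([[0., -1.j], [1.j, 0.]])
sigma_z = np.array([[1., 0.], [0., -1.]])
H0 = 0.5*omega*sigma_z
dH = [0.5*sigma_z]

# Illustrative control Hamiltonians and decay operators (gamma_+=0.1, gamma_-=0)
Hc = [sigma_x, sigma_y, sigma_z]
sigma_plus = np.array([[0., 1.], [0., 0.]])
sigma_minus = np.array([[0., 0.], [1., 0.]])
decay = [[sigma_plus, 0.1], [sigma_minus, 0.0]]

# Target time omega_tr*T = 20
tspan = np.linspace(0., 20., 2500)

# Simultaneous optimization of state, control and measurement (SCM) via DE
kwargs = {"p_num": 10, "max_episode": 1000, "c": 1.0, "cr": 0.5, "seed": 1234}
com = ComprehensiveOpt(savefile=False, method="DE", **kwargs)
com.dynamics(tspan, H0, dH, Hc=Hc, ctrl=[], decay=decay, ctrl_bound=[])
com.SCM(W=[])
\end{lstlisting}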
As these numbers show, the performance of the two strategies essentially coincides in this example.

\begin{figure}[tp]
\centering\includegraphics[width=8.5cm]{Fig_compre_NV.pdf}
\caption{Performance comparison between the comprehensive optimization and the combination strategy for multiparameter estimation in the case of SCM. The dashed blue line represents the optimization of $\mathrm{Tr}(W\mathcal{I}^{-1})$ in the comprehensive optimization. The solid red lines represent the optimization of $\mathrm{Tr}(W\mathcal{F}^{-1})$ in the SC part (first 500 episodes) and that of $\mathrm{Tr}(W\mathcal{I}^{-1})$ in the measurement optimization (last 500 episodes) in the combination strategy. The inset shows the performance of different combination strategies in the SC part, corresponding to different allocations of episodes between the state and control optimizations. All the optimizations in the figure are performed with DE.}
\label{fig:compre_multi}
\end{figure}

This equivalent performance may be due to two facts: the example is simple, and the QFI is attainable in theory. In multiparameter estimation, these two strategies may show divergent performance, as the QFIM is not always guaranteed to be attainable. For example, in the case of SCM, $\mathrm{Tr}(W\mathcal{F}^{-1})$ is first optimized in the SC part. However, it is hard to say whether the optimal probe state and control for an unattainable $\mathrm{Tr}(W\mathcal{F}^{-1})$ can still provide a good $\mathrm{Tr}(W\mathcal{I}^{-1})$ and benefit the subsequent measurement optimization. To investigate this, we again take the nitrogen-vacancy center as an example. The free Hamiltonian, control Hamiltonian, and dynamics are described in Eqs.~(\ref{eq:NV_H}), (\ref{eq:NV_c}) and (\ref{eq:NV_ME}). The performance of the comprehensive optimization and the combination strategy in the SCM case is shown in Fig.~\ref{fig:compre_multi}. The comprehensive optimization (dashed blue line), which takes $\mathrm{Tr}(W\mathcal{I}^{-1})$ as the objective function, basically converges after around $110$ episodes. The combination strategy (solid red line) splits into two parts: the first 500 episodes correspond to the combined optimization of SC, and the last 500 episodes to the optimization of the measurement. The gap between these two lines is actually the gap between the optimal $\mathrm{Tr}(W\mathcal{F}^{-1})$ and the value of $\mathrm{Tr}(W\mathcal{I}^{-1})$ for a random measurement. In the SC part, the alternating optimizations of the probe state and control can be arranged in different ways, depending on the number of episodes assigned to each optimization. As shown in the inset of Fig.~\ref{fig:compre_multi}, here we test several allocations, including 20 episodes for each optimization (solid blue line with circles), 50 episodes for each optimization (dashed green line), 100 episodes for each optimization (dash-dotted cyan line), 200 episodes for the state optimization and 300 episodes for the control optimization (solid red line), and 300 episodes for the state optimization and 200 episodes for the control optimization (dotted black line). 
Among these allocations, the fourth one, 200 episodes for the state optimization and 300 episodes for the control optimization, shows the best performance at the end of the 500 episodes, and the corresponding optimal state and control are chosen for the subsequent measurement optimization. In this example, the final performance of the combination strategy is better than that of the simultaneous strategy, indicating that the unattainability of $\mathrm{Tr}(W\mathcal{F}^{-1})$ in the SC part does not negatively affect the final performance. However, this result does not mean that the combination strategy is always better in general. In practice, a case-by-case comparison of these two strategies might still be needed in the scheme design.

\section{Adaptive measurement schemes}
\label{sec:adapt}

Adaptive measurement is another common scenario in quantum parameter estimation. In this scenario, apart from the unknown parameters $\bold{x}$, the Hamiltonian also includes a set of tunable parameters $\bold{u}$. A typical case is that the tunable parameters enter the Hamiltonian in the same way as $\bold{x}$, resulting in the total Hamiltonian $H(\bold{x}+\bold{u})$. In the point estimation approach, the QFIM and CFIM computed at the true values of $\bold{x}$ may not always reflect the practically achievable precision, due to the fact that the actual working point may be slightly away from the true values. Hence, the tunable parameters $\bold{u}$ are introduced to let the Hamiltonian $H(\bold{x}+\bold{u})$ work at the optimal point $\bold{x}_{\mathrm{opt}}$. An obvious difficulty in the implementation of this scheme is that one does not actually know the true values in practice, which means that $\bold{u}$ has to be chosen according to the estimated values $\hat{\bold{x}}$, and the entire scheme is therefore only useful when it is implemented adaptively. Moreover, a pre-estimation of $\bold{x}$ is usually needed: an inaccurate $\hat{\bold{x}}$ would result in an inaccurate $\bold{u}$, and $\bold{x}+\bold{u}$ would then inevitably be far from $\bold{x}_{\mathrm{opt}}$, resulting in a poor performance of the scheme. This scheme has been applied by Berni et al.~\cite{Berni2015} in optical phase estimation with additional real-time feedback controls.

Now let us introduce in detail all the steps required to implement this scheme. Consider the Hamiltonian $H(\bold{x})$, where $\bold{x}$ is restricted to a finite regime with a prior distribution $p(\bold{x})$. The first step is to find the optimal value $\bold{x}_{\mathrm{opt}}$ in this regime, namely the one minimizing $\mathrm{Tr}(W\mathcal{I}^{-1})$ when the measurement is fixed. If the measurement can be altered flexibly in practice, $\bold{x}_{\mathrm{opt}}$, together with the corresponding optimal measurement, can be obtained with $\mathrm{Tr}(W\mathcal{F}^{-1})$ as the objective function. Next, perform the pre-estimation via Bayesian estimation with the fixed or optimal measurement, and update the prior distribution with the posterior distribution in Eq.~(\ref{eq:Bayes_posterior}). When $p(\bold{x})$ has been updated to a reasonably narrow distribution, the tunable parameters $\bold{u}$ are then introduced into the system. 
In the $n$th round of this step, with the observed result $y^{(n)}$, the posterior distribution\nis obtained via the Bayes' rule as\n\\begin{equation}\np(\\bold{x},\\bold{u}^{(n)}|y^{(n)})=\\frac{p(y^{(n)}|\\bold{x},\\bold{u}^{(n)})\np(\\bold{x})}{\\int p(y^{(n)}|\\bold{x},\\bold{u}^{(n)})p(\\bold{x})\\mathrm{d}\\bold{x}},\n\\end{equation}\nwhere $\\bold{u}^{(n)}$ is obtained in the $(n-1)$th round. The estimated value $\\hat{\\bold{x}}^{(n)}$ can be\nobtained through the MAP, $\\hat{\\bold{x}}^{(n)}=\\mathrm{argmax}\\,p(\\bold{x},\\bold{u}^{(n)}|y^{(n)})$. The\nvalue of $\\bold{u}$ used in the next round is obtained via the formula\n$\\bold{u}^{(n+1)}=\\bold{x}_{\\mathrm{opt}}-\\hat{\\bold{x}}^{(n)}$, and the prior distribution is also replaced by\nthe current posterior distribution. In QuanEstimation, the pre-estimation can be finished with the function\n{\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} discussed in Sec.~\\ref{sec:Bayesian}, and the adaptive process can be executed with the codes:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\napt = adaptive(x,p,rho0,savefile=False,\n max_episode=1000,eps=1e-8)\napt.dynamics(tspan,H,dH,Hc=[],ctrl=[],\n decay=[])\napt.CFIM(M=[],W=[])\n\\end{lstlisting}\nIn the case that the parameterization process is described by the Kraus operators, replace\n{\\fontfamily{bch}\\selectfont\\small\\itshape apt.dynamics()} with {\\fontfamily{bch}\\selectfont\\small\\itshape apt.Kraus(K,dK)}. The inputs {\\fontfamily{bch}\\selectfont\\small\\itshape x} and {\\fontfamily{bch}\\selectfont\\small\\itshape p} are the\nsame with those in {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()}. The input {\\fontfamily{bch}\\selectfont\\small\\itshape H} is a list of matrices representing the Hamiltonian\nwith respect to the values in {\\fontfamily{bch}\\selectfont\\small\\itshape x}, and it is multidimensional in the multiparameter case. {\\fontfamily{bch}\\selectfont\\small\\itshape dH}\nis a (multidimensional) list with each entry also a list representing $\\partial_{\\bold{x}}H$ with respect to the\nvalues in {\\fontfamily{bch}\\selectfont\\small\\itshape x}. In the case that specific functions of $H$ and $\\partial_{\\bold{x}}H$ can be provided,\n{\\fontfamily{bch}\\selectfont\\small\\itshape H} and {\\fontfamily{bch}\\selectfont\\small\\itshape dH} can be alternatively generated via the function {\\fontfamily{bch}\\selectfont\\small\\itshape BayesInput()} discussed\nin Sec.~\\ref{sec:para}. In {\\fontfamily{bch}\\selectfont\\small\\itshape apt.CFIM()}, {\\fontfamily{bch}\\selectfont\\small\\itshape M} is the input measurement and the default one is a\nset of SIC-POVM.\n\nDuring the running of the codes, three files \"xout.csv\", \"y.csv\", and \"pout.csv\" will be generated including the\ndata of $\\hat{\\bold{x}}$, result $y$ in all rounds of iteration and final obtained $p(\\bold{x})$. In the case\nthat {\\fontfamily{bch}\\selectfont\\small\\itshape savefile=True}, \"pout.csv\" contains the data of $p(\\bold{x})$ in all rounds. 
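To make the calls above concrete, the following minimal sketch sets up a single-parameter adaptive run. It is illustrative only: the Hamiltonian family below is a placeholder rather than the one in Eq.~(\ref{eq:Bayes_demo}), the import line assumes the package-level interface used in the other examples, and the nesting of {\fontfamily{bch}\selectfont\small\itshape dH} follows our reading of the description above.
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
import numpy as np
from quanestimation import adaptive

# Grid of candidate parameter values and a flat prior over it
x = [np.linspace(-0.25*np.pi, 0.75*np.pi, 100)]
p = np.ones(len(x[0]))/len(x[0])

# Placeholder Hamiltonian family H(x) = x*sigma_3/2 evaluated on the grid
sigma_z = np.array([[1., 0.], [0., -1.]])
H = [0.5*xi*sigma_z for xi in x[0]]
dH = [[0.5*sigma_z] for _ in x[0]]

# Probe state |+><+| and the projective measurement {|+><+|, |-><-|}
rho0 = np.array([[0.5, 0.5], [0.5, 0.5]])
M = [np.array([[0.5, 0.5], [0.5, 0.5]]), np.array([[0.5, -0.5], [-0.5, 0.5]])]

tspan = np.linspace(0., 1., 100)

# A pre-estimation with Bayes() would normally be run first to narrow p(x);
# apt.CFIM() then starts the interactive loop: enter the measured result y
# and read off the updated u for the next round.
apt = adaptive(x, p, rho0, savefile=False, max_episode=1000, eps=1e-8)
apt.dynamics(tspan, H, dH)
apt.CFIM(M=M, W=[])
\end{lstlisting}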
If the choice of\nmeasurement is flexible in the experiment, before the invocation of {\\fontfamily{bch}\\selectfont\\small\\itshape apt.CFIM()}, the optimal measurement\nwith respect to $\\bold{x}_{\\mathrm{opt}}$ can be first obtained via calling {\\fontfamily{bch}\\selectfont\\small\\itshape M = apt.Mopt(W=[])}.\nIn the case that the users would like to run the pre-estimation with the optimal measurement, they can just call\n{\\fontfamily{bch}\\selectfont\\small\\itshape apt.Mopt()} first and input the optimal measurement to {\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} for the pre-estimation.\n\nDuring the running of {\\fontfamily{bch}\\selectfont\\small\\itshape apt.CFIM()}, the users should type the result $y$ obtained in practice on the\nscreen and receive the values of $\\bold{u}$ used for the next round of experiment. In the case that the users\nhave already done the pre-estimation by themselves, they can directly use {\\fontfamily{bch}\\selectfont\\small\\itshape adaptive()} without calling\n{\\fontfamily{bch}\\selectfont\\small\\itshape Bayes()} first.\n\n\\begin{figure}[tp]\n\\centering\\includegraphics[width=8.5cm]{Fig_adpt.pdf}\n\\caption{Performance comparison between the adaptive (dashed blue line)\nand non-adaptive (solid red line) schemes. The adaptive measurement starts\nafter 500 rounds of pre-estimation. The non-adaptive scheme is a full\nBayesian estimation.}\n\\label{fig:adpt}\n\\end{figure}\n\nLet us still take the Hamiltonian in Eq.~(\\ref{eq:Bayes_demo}) as an example. The initial state is $|+\\rangle$ and\nthe target time $\\omega_0 T=1$ (Planck units are applied). The prior distribution is uniform in the regime\n$(-\\pi\/4,3\\pi\/4)$. The measurement is $\\{|+\\rangle\\langle +|,|-\\rangle\\langle-|\\}$. $x_{\\mathrm{opt}}$ is taken\nas zero. The results are simulated by generating random values in the regime $[0,1]$. When it is smaller (larger)\nthan $p(+|x)$, the posterior distribution is calculated with $p(+|x)$ [$p(-|x)$]. As shown in Fig.~\\ref{fig:adpt},\nafter 500 rounds of pre-estimation, the adaptive scheme (dashed blue line) indeed shows better performance (smaller\nvariance) compared to the non-adaptive scheme (solid red line) which is fully finished by the Bayesian estimation.\n\nAnother famous adaptive scheme in quantum parameter estimation is the online adaptive phase estimation proposed by\nBerry et al.~\\cite{Berry2000,Berry2001}. In this scheme, after reading the result $y^{(n)}$ in the $n$th round,\nthe value of the tunable phase $\\Phi_{n+1}$ or phase difference $\\Delta\\Phi_{n+1}$ is generated. The relation\nbetween $\\Phi_{n+1}$ and $\\Delta\\Phi_{n+1}$ can be taken as $\\Phi_{n+1}=\\Phi_{n}-(-1)^{y^{(n)}}\\Delta\\Phi_{n+1}$.\nHentschel and Sanders~\\cite{Hentschel2010,Hentschel2011} further provided an offline strategy with PSO, and the\noptimization methods are further extended to DE~\\cite{Lovett2013} and genetic algorithm~\\cite{Rambhatla2020}\nin recent years. Apart from the original references, details of this scheme can also be found in a recent\nreview~\\cite{Liu2022}. In QuanEstimation, this scheme can be executed by the codes:\n\\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]\napt = adaptMZI(x,p,rho0)\napt.general()\napt.online(output=\"phi\")\n\\end{lstlisting}\nThe input {\\fontfamily{bch}\\selectfont\\small\\itshape rho0} is a matrix representing the probe state. 
The output can be tuned between $\Phi$ and $\Delta\Phi$ by setting {\fontfamily{bch}\selectfont\small\itshape output="phi"} or {\fontfamily{bch}\selectfont\small\itshape output="dphi"} in {\fontfamily{bch}\selectfont\small\itshape apt.online()}. The offline strategy can also be executed by replacing {\fontfamily{bch}\selectfont\small\itshape apt.online()} with {\fontfamily{bch}\selectfont\small\itshape apt.offline(method="DE",**kwargs)}. PSO is also available here ({\fontfamily{bch}\selectfont\small\itshape method="PSO"}). When the entire program is finished, a file named "xout.csv" containing the output data of all rounds will be generated. In the case of the online scheme, an additional file "y.csv" containing the results $y$ of all rounds will also be generated. The role of {\fontfamily{bch}\selectfont\small\itshape apt.general()} here is to leave room for the future inclusion of adaptive phase estimation in other optical scenarios, such as SU(1,1) interferometers.

\section{Summary}

In this paper, we present a new open-source toolkit, QuanEstimation, for the design of optimal schemes in quantum parameter estimation. The source code of the package, as well as the demonstration codes for all examples discussed in this paper, can be downloaded from GitHub~\cite{github1}. The package is based on both Python and Julia. This hybrid structure guarantees that the computational efficiency of Julia is fully utilized, while users with no knowledge of Julia can still use the package without any obstacle. Meanwhile, a full Julia version of the package, suitable for users experienced in Julia, is also available on GitHub~\cite{github2}. QuanEstimation includes several well-studied metrological tools in quantum parameter estimation, such as the various types of Cram\'{e}r-Rao bounds and their quantum correspondences, the quantum Ziv-Zakai bound, and Bayesian estimation. For scheme design, QuanEstimation can execute the optimizations of the probe state, control, and measurement, as well as comprehensive optimizations, namely, simultaneous optimizations among them. General adaptive measurement schemes as well as adaptive phase estimation can also be performed with this toolkit.

QuanEstimation is suitable for many practical quantum systems, especially those with finite-dimensional Hilbert spaces, such as trapped ions, nitrogen-vacancy centers, and quantum circuits. Therefore, it is not only useful for theorists working in the field of quantum parameter estimation, but could also be particularly useful for experimentalists who are not yet familiar with the theory in this field but intend to utilize it to design experimental schemes. More functions and features will be continuously added to the package, and the calculation efficiency for certain specific scenarios will be further improved in the future. We believe that this package has a good chance of becoming a common toolkit in the field of quantum metrology for numerical calculations and scheme design.

\begin{acknowledgments}
The authors would like to thank Prof.~Re-Bing Wu, Prof.~Christiane P. Koch, Prof.~Lijian Zhang, Jinfeng Qin, and Yuqian Xu for helpful discussions. 
This work was supported by the National Natural Science Foundation of China (Grants No.\,12175075, No.\,11805073, No.\,11935012 and No.\,11875231), and the National Key Research and Development Program of China (Grants No.\,2017YFA0304202 and No.\,2017YFA0205700). H.Y. also acknowledges the support from the Research Grants Council of Hong Kong (Grant No.\,14307420). R.D.D. was supported by National Science Center (Poland) with Grant No.~2020/37/B/ST2/02134.
\end{acknowledgments}

\section{Introduction} \label{sec:intro}

Weinberg (1992, Paper I) identified color-selected variables in the IRAS Point Source Catalog (PSC) with AGB stars based on color consistency and the circumstantial sensitivity of the IRAS survey to long-period variables (cf. Harmon \& Gilmore 1988). These were then used as rough standard candles to infer a large-scale asymmetry in the stellar distribution. The identification of IRAS variables with AGB stars was strengthened by an in-depth study of a bright subset (Allen, Kleinmann \& Weinberg 1993). Carbon-selected AGB stars (carbon stars) have also proven to be effective tracers (see e.g. Metzger \& Schechter 1994). Advantages of AGB tracers are reviewed in Weinberg (1994). In general, standard candle analyses have the advantage over flux or star count analyses in providing direct information about the three-dimensional structure of the Galaxy. However, uncertainties in their selection and intrinsic properties may bias any inference and, especially for the IRAS-selected sample, the census is incomplete.

Paper I described an approach to large-scale Galactic structure using a star count analysis which allows the information to be reconstructed and possibly corrected in the observer's coordinate system before translating to a Galactocentric system. Unfortunately, this translation approach is only natural if the coverage is complete and suffered in application to the IRAS sample because of spatial gaps due to an incomplete second full-sky epoch. Here, we present the results of a different approach to the problem: the direct density estimation by maximum likelihood. A Bayesian density estimation has the advantage of directly incorporating selection effects and missing data.

The number of ongoing surveys that bear on Galactic structure---SDSS, 2MASS, DENIS---which at various stages will have surveyed parts of the sky is a second motivation for this study; there is a need for a systematic method suited to inferential studies using possibly incomplete data from many wave bands. Recent analyses (e.g. Bahcall \& Soneira 1980 in the optical; Wainscoat et al. 1992 in the infrared) have modeled the Galactic components with standard profiles and structural parameters chosen to provide a match to star count data. To explore the structural parameters themselves, we propose a Bayesian density estimation technique to treat data from scattered fields during the survey and to easily incorporate data from wave bands. Conceptually, this approach is midway between a classical inversion and modeling.

The first part of the paper describes and characterizes the method. More specifically, \S\ref{sec:iras} reviews the IRAS selection procedure described in Paper I and motivates the approach. 
The new\nanalysis based on statistical density estimation is presented\nin \\S\\ref{sec:bayes} and precisely defined in \\S\\ref{sec:likelihood}.\nThe second part of the paper\ndescribes Monte-Carlo tests and the results of \napplying the method to the IRAS data\n(\\S\\ref{sec:results}). We conclude in \\S\\ref{sec:summary} with a\nsummary and discussion.\n\n\\section{IRAS source selection} \\label{sec:iras}\n\nThe analysis in Paper I was based on the variables selected in the\nIRAS Point Source Catalog (1988) by both color and\n$P_{var}$. Following the source selection procedure described in\nPaper I, we selected stars from IRAS Point Source Catalog with\n$F_{12}>2$ Jy and variability flag $P_{var}\\ge 98\\%$. Although the flux\nlimit reduces the confusion in source identification toward\nthe center of the Galaxy, it also restricts the sensitivity to distant\nsources. The limiting distance to a star ($d$) is estimated using a simple\nexponential layer with vertical scale height $h$ and mid-plane extinction\ncoefficient $K_{12}$:\n\\begin{equation}\nm = M + 5 \\lg d - 5 + K_{12}\\,h\\,(1-e^{-d \\sin |b| \/ h})\\,\/\\sin |b|.\n\\label{eq:b1}\n\\end{equation}\nFor a typical AGB star ($L = 3500 L_{\\odot}$, see Appendix A) and\n$K_{12}=0.18$ kpc$^{-1}$, the limiting distance in the plane is\n$R_{lim}=7$ kpc. We assume that the extinction is dominated by the\nmolecular gas, $h=100$ pc and the extincting layer is horizontally\nisotropic. The true extinction toward the inner Galaxy is most likely\ndominated by the molecular ring and nuclear region given the molecular\ngas distribution. However, precise estimate of the true distribution\nis not available and an horizontally isotropic model will adequately\nrepresent its systematic effect on the photometric distances.\n\nOf the more than 158,000 good flux-quality sources listed in IRAS PSC,\n5,736 satisfy both flux limit and variability criteria. Their spatial\ndistribution is shown in Figure \\ref{fig:sp_distr.1}. To obtain\nvariability data, at least two epochs are needed. Unfortunately,\nIRAS' multiple epochs did not have complete sky coverage. Most of the\ncoverage (77\\% in the galactic plane) was achieved in HCON 2 and HCON\n3 separated by roughly 7.5 months on average. The rest of the galactic\nplane is poorly sampled (shaded regions in Figure\n\\ref{fig:sp_distr.1}). For this analysis, all the data in the poorly\nsampled sectors have been excised, reducing the size of the sample to\n5,500 stars.\n\n\n\\section{Method overview} \\label{sec:bayes}\n\nAll of the selection effects but especially data incompleteness\ngreatly complicate the analysis. Bayesian techniques are ideally\nsuited to parameter estimation over data with general but well-defined\nselection criteria and underlies both the maximum entropy and maximum\nlikelihood procedures. Below, we will parameterize the source density\nby an exponentiated orthogonal series with unknown coefficients\n$A_{ij}$ and $B_{ij}$ (cf. eq. \\ref{eq:d15}). In this context, the\nbasic theorem of the theory reads:\n\\begin{equation}\nP \\, ( \\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, D, \\, I \\,) = \n {{ P \\,(\\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, I \\,) \\cdot\n P \\,( D \\, | \\, \\{A_{ij}\\} ,\\, \\{B_{ij}\\} ,\\, I \\,) } \\over\n P \\,( D \\, | \\, I \\,)}. 
\\label{eq:d1}\n\\end{equation}\nThe probability $P \\, ( \\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, D, \\, I \\,)$\nis the conditional (or {\\it posterior}) probability of the\ncoefficients of the source density provided the data ($D$) and\ninformation ($I$) describing its incompleteness. The probability\n$P\\,(\\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, I \\,)$ is the prior probability\n(or simply, {\\it prior}) of the coefficients provided only the\ninformation. Following Bretthorst (1990), we assign the prior using\nthe maximum entropy principle. In our case it is constant implying\nthat all coefficient values are equally likely initially. The\nfunction $P \\,( D \\, | \\, \\{A_{ij}\\} ,\\, \\{B_{ij}\\} ,\\, I \\,)$ is the\ndirect probability which describes the likelihood of data given the\ncoefficients. Finally, $P \\,( D \\, | \\, I \\,)$ is a normalization\nconstant which may be omitted provided that the posterior probability\nis normalized.\n\nWith these definitions, it follows that\n\\begin{equation}\nP \\, ( \\{A_{ij}\\} ,\\, \\{B_{ij}\\} \\, | \\, D, \\, I \\,) = \\mbox{Const} \\cdot\n P \\,( D \\, | \\, \\{A_{ij}\\} ,\\, \\{B_{ij}\\} ,\\, I \\,), \\label{eq:d2}\n\\end{equation}\nor in words, the posterior probability is proportional to the\nlikelihood function. Therefore, the best estimate of posterior\nprobability is obtained for the set coefficients which maximize the\nlikelihood function.\n\n\\section{Likelihood function} \\label{sec:likelihood}\n\nThe likelihood is the joint probability of the observed stars given a\nsource density. We may then consider the probability of observing a star with\nintrinsic luminosity in the range $( L, L+dL )$ to be detected in the\ndistance interval $( s, s+ds )$, in the azimuth interval $( l, l+dl )$, in\nthe galactic latitude interval $( b, b+db )$\nand with magnitude in the range $( m, m+dm )$. Assuming a normal\ndistribution of intrinsic luminosities $L$ and a normal error\ndistribution for the apparent magnitudes $m$ this becomes:\n\\begin{eqnarray}\nP_n\\,(s,\\,l,\\,b,\\,m,\\,L\\,|\\,\\sigma_m,\\,\\sigma_L,\\,K_{12},\\,h,\\,R_0)\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dL\\,dm = \\nonumber \\\\ \nC \\cdot \\Sigma (r,\\,\\phi,\\,z)\\,e^{-{(L-\\overline L)}^2\/2 \\sigma_L^2}\\,\ne^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dL\\,dm. \\label{eq:d3}\n\\end{eqnarray}\nHere $s$, $l$, $b$ are coordinates about the observer's position, $r$,\n$\\phi$, $z$ are coordinates about the center of the Galaxy, $C$ is the\nnormalization constant, $\\Sigma (r, \\phi, z)$ is the source density at\ngalactocentric radius $R_0$, $\\overline L$ and $\\sigma_L$ are the mean\nintrinsic luminosity and the dispersion of the sample, $\\sigma_m$ is\nthe measurement error in magnitudes and $\\overline m = \\overline\nm\\,(s, b)$ is given by equation (\\ref{eq:b1}). Alternatively, we may\nreplace luminosity by absolute magnitude:\n\\begin{eqnarray}\nP_n\\,(s,\\,l,\\,b,\\,m,\\,M\\,|\\,\\sigma_m,\\,\\sigma_M,\\,K_{12},\\,h,\\,R_0)\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dM\\,dm = \\nonumber \\\\\nC \\cdot \\Sigma (r,\\,\\phi,\\,z)\\,e^{-{(M-\\overline M)}^2\/2 \\sigma_M^2}\\,\ne^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dM\\,dm,\\label{eq:d4}\n\\end{eqnarray}\nwhere $\\overline M$ and $\\sigma_M$ correspond to $\\overline L$ and\n$\\sigma_L$. The Gaussian distributions in $L$ or $M$ in the above two\nequations can be generalized to an arbitrary luminosity function for\ntraditional star count applications. 
Although we will not give the\ngeneral expressions below, the development is parallel.\n\nSince the convolution of two Gaussians is a new Gaussian whose\nvariance is the sum of the two individual variances\n\\begin{equation}\n\\sigma_{m, eff}^2 = \\sigma_m^2 + \\sigma_M^2, \\label{eq:d5}\n\\end{equation}\nequation (\\ref{eq:d4}) can be rewritten as\n\\begin{eqnarray}\nP_n\\,(s,\\,l,\\,b,\\,m\\,|\\,\\sigma_{m, eff},\\,k,\\,H,\\,R_0)\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dm = \\nonumber \\\\\nC \\cdot \\Sigma (r,\\,\\phi,\\,z)\\,e^{-{(m-\\overline m)}^2\/2 \\sigma_{m, eff}^2}\\,\ns^2\\,ds\\,\\cos b\\,db\\,dl\\,dm \\label{eq:d6}\n\\end{eqnarray} \nafter integrating over the unmeasured absolute magnitude $M$. \nFor notational clarity, we will\nomit the subscript ``eff'' and write simply $\\sigma_m$. The constant\n$C$ is determined from the normalization condition:\n\\begin{equation}\nC \\int _{-\\infty} ^{+\\infty} e^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,dm\n\\int dl \\int _0 ^{s_{max}(b)}\\,s^2\\,ds \\int _{-{\\pi \\over 2}} ^{\\pi \\over 2}\n{ \\Sigma (r,\\,\\phi,\\,z)\\,\\cos b\\,db} = 1. \\label{eq:d7}\n\\end{equation}\nThe integration over $l$ runs over entire circle except missing\nazimuthal sectors, explicitly accounting for missing data at\nparticular ranges in azimuth. The limiting distance $s_{max}$ in the\n$l$, $b$ direction incorporates the 2 Jy flux limit. \n\nIn a standard star count analysis no explicit distance information is\nprovided and $s$ is eliminated from analysis by integration, yielding\n\\begin{eqnarray}\nP_n\\,(l,\\,b,\\,m\\,|\\,\\ldots)\\,\\cos b\\,db\\,dl\\,dm = \\nonumber \\\\\nC \\,\\int _0 ^{s_{max}(b)} {\\Sigma (r,\\,\\phi,\\,z)\\,\ne^{-{(m-\\overline m)}^2\/2 \\sigma_m^2}\\,s^2\\,ds} \\cos b\\,db\\,dl\\,dm. \n\\label{eq:d9}\n\\end{eqnarray}\nFor our relatively small sample of IRAS stars, sensitivity to vertical\nstructure will be poor. This motivates replacing the general unknown\nthree-dimensional disk density with a density which depends on radial\nposition and azimuth alone: $\\Sigma (r,\\,\\phi,\\,z) = {\\overline\n\\Sigma} (r,\\,\\phi)$.\n\nFinally, the joint probability of observing $N$ stars selected from\nthe IRAS PSC is\n\\begin{equation} \nL \\equiv P_{total} = \\prod _{n=1} ^N P_n ( l,b,m | \\ldots).\\label{eq:d11}\n\\end{equation}\nExpressing the likelihood function in logarithmic form, our desired\nsolution is the set of parameters which maximize\n\\begin{equation} \n\\log L = \\sum _{n=1} ^N \\log P_n ( l,b,m | \\ldots).\\label{eq:d12}\n\\end{equation}\n\nThis and nearly all star count analyses reduce to standard problem of\ndensity estimation: find the density function $f ( x )$, which\nsatisfies non-negativity constraint\n\\begin{equation}\nf(x) \\ge 0 \\label{eq:d13}\n\\end{equation}\nand integral constraint\n\\begin{equation}\n\\int f(x) dx = 1 \\label{eq:d14}\n\\end{equation}\nwhich best describes the observed data distribution. Both parametric\nand non-parametric estimation techniques have been used to solve this\nproblem (e.g. Silverman 1986; Izenman 1991). For inhomogeneous\nmultidimensional data, the positivity constraint is\ncumbersome. However, searching for the unknown function $f ( x )$\nin the form of an exponentiated orthogonal series (Clutton-Brock\n1990), guarantees positivity. 
A candidate stellar surface density is:\n\\begin{equation}\n\\overline \\Sigma (r,\\phi) = \\exp \\biggl\\{ \\sum_{i=1} ^{i_{max}} \\sum_{j=0}\n^{j_{max}} \\left[ A_{ij}\\cos j\\phi + B_{ij}\\sin j\\phi \\right]\nJ_j(k_i^j r) \\biggr\\} ,\\label{eq:d15}\n\\end{equation}\nwhere $J_j(x)$ is Bessel function of $j^{\\hbox{th}}$ order and $k_i^j$\nis $i^{\\hbox{th}}$ root of Bessel function of $j^{\\hbox{th}}$ order\nand are chosen to produce a complete orthogonal set over the disk of\nradius $R_{max}$. The coefficients $A_{ij}, B_{ij}$ are the parameters\nto be determined. There is no loss of generality in taking the\nFourier-Bessel series although the choice is arbitrary.\n\n\n\\section{Results} \\label{sec:results}\n\n\\subsection{Sensitivity to incompleteness}\n\nA major advantage of the approach presented here over that in Paper I\nis that the significance of inferred structure is robustly\nquantified. In particular, we can test the sensitivity of selection\neffects to the detection of a bar. To test the presence of the\ncoverage gaps, we generated four sample disks of 1,000 stars each\nusing the source density (\\ref{eq:d15}) with\n$\\sqrt{A_{ij}^2+B_{ij}^2}=1$ for $j=0,2$ and zero otherwise and the\nfollowing bar position angles: $0^\\circ$, $\\pm45^\\circ$, and\n$90^\\circ$. The root sum square of the coefficients $A_{ij}$ and\n$B_{ij}$ represents the strength of $i^{th}$ radial component for the\n$j^{th}$ polar harmonic. Figure \\ref{fig:test} shows the restored\nstrength of a harmonic $\\sqrt{A_{ij}^2 + B_{ij}^2}$ as a function of\nthe position angle of the bar.\nInsensitivity of these strengths to bar position angle suggests that\nmissing azimuths will not obscure the inference of true bar. The\ncomputed values are consistent with the expected value of unity.\n\nConversely, regions of missing data can produce non-axisymmetric\ndistortions, and in principle, suggest the existence of a bar in\ninitially axisymmetric sample. However, analysis of a simulated\naxisymmetric disk ($A_{10}=A_{20}=1$; all others = 0) and the same\nazimuthal incompleteness as in the real sample shows that the power in\nthe non-axisymmetric harmonics is about 3\\% of the axisymmetric\ncontribution. Together these tests suggest that the misidentification\nof a bar relative due to missing azimuthal sectors alone is unlikely.\n\n\n\\subsection{Application to IRAS data}\n\nThe formalism developed in \\S\\ref{sec:likelihood} requires\nthe distance to galactic center $R_0$, extinction in the\nplane $K_{12}$ and average luminosity of the AGB stars $\\overline L$.\nWe adopted $R_0=8.0$ kpc, $K_{12}=0.18$\nmag\/kpc and $\\overline L = 3500 L_{\\odot}$. The method can\nbe straightforwardly modified for complex models (e.g. patchy or non-uniform\nextinction), the only limitation\nhere is the CPU available and sufficient data to attain a satisfactory\nmeasure of confidence.\n\nChoosing the truncation of the series in equation (\\ref{eq:d15}) poses\na problem common to many non-parametric density estimations: because\ntoo few terms result in large bias and too many terms increase\nvariance, $i_{max}$, $j_{max}$ would be best determined by jointly\nminimizing the bias and the variance. However, this approach is\ncomputationally prohibitive due to the integral in\nequation (\\ref{eq:d9}) and the normalization (\\ref{eq:d7}). Therefore,\na heuristic approach was adapted in selecting $i_{max}$, $j_{max}$\nbased on the increase in the likelihood function when a particular\nterm or set of terms is added. 
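As a purely illustrative aside (not part of the original analysis), the truncated series of equation (\ref{eq:d15}) can be evaluated numerically as in the following sketch; the coefficient values, the truncation orders and the disk radius $R_{max}$ used here are placeholders.
\begin{lstlisting}[breaklines=true,numbers=none,frame=trBL,mathescape=true]
import numpy as np
from scipy.special import jv, jn_zeros

def surface_density(r, phi, A, B, R_max):
    # Exponentiated Fourier-Bessel series: A, B have shape (i_max, j_max+1);
    # the wave numbers k_i^j are the zeros of J_j rescaled to the disk radius.
    i_max, j_dim = A.shape
    s = np.zeros(np.broadcast(r, phi).shape)
    for j in range(j_dim):
        k = jn_zeros(j, i_max)/R_max
        for i in range(i_max):
            s += (A[i, j]*np.cos(j*phi) + B[i, j]*np.sin(j*phi))*jv(j, k[i]*r)
    return np.exp(s)

# Toy example: an axisymmetric background plus an m=2 (bar-like) term
A = np.zeros((4, 5)); B = np.zeros((4, 5))
A[:, 0] = 1.0
A[0, 2] = 0.7
r, phi = np.meshgrid(np.linspace(0., 16., 64), np.linspace(0., 2*np.pi, 64))
Sigma = surface_density(r, phi, A, B, R_max=16.)
\end{lstlisting}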
Significance could be quantified in\nterms of the likelihood ratio (Wilks 1962) but we have not done this\nhere. In addition, the hardware available to us makes it impossible to\nsample the parameter space beyond $i_{max}=4$,\n$j_{max}=4$. Nevertheless, up to that limit, the space was sampled\nthoroughly, with some of the solutions shown in Figure\n\\ref{fig:dens_all} along with the corresponding offsets of the\nlikelihood function (the lowest value of likelihood is set to $0$ for\nease in comparison).\nSome of the figures feature the ghost peaks due to the absence of data\nbeyond the galactic center or in missing azimuthal sectors (see\nFigs. \\ref{fig:sp_distr.1} and \\ref{fig:sp_distr.2}). The likelihood\nanalysis may attempt to place a non-existing source density peak in\nthat region, provided it will increase the overall score. We will\npursue penalizing the likelihood function and other procedures for\nchoosing an alternative prior (dropping the assumption that all\ncoefficients in (\\ref{eq:d15}) are equally likely initially) in future\nwork.\n\nMore importantly, all reconstructions in Figure \\ref{fig:dens_all}\nimply a jet-like feature in the first quadrant. As in Paper I, the\ndepth of our sample (estimated to correspond to a mean distance of 7\nkpc in the plane) prevents ascertaining whether this feature\ncorresponds to a bisymmetric bar or is a lopsided distortion.\nHowever, decreasing the flux limit to 1 Jy leads to detection of\nsimilar feature on the far side of the Galaxy, suggesting a real\nbar. This motivates a reconstruction with enforced bisymmetry, shown\nin Figure \\ref{fig:dens_even}. Here the corresponding prior assigns\nzero values to coefficients of odd azimuthal order.\nThe likelihood value (the origin is the same as in Figure\n\\ref{fig:dens_all}) has dropped substantially, because the resulting density\nlacks data support beyond the Galactic center. \nIn both figures, the bar is well defined and has a similar length and \nposition angle. \n\nTo quantify the strength and position angle of the bar, we fitted the\nisodensity contours ($i_{max}=j_{max}=4$) by ellipses. The logarithm\nof a suitable likelihood function for estimating the semi-major axes,\neccentricity and position angle is\n\\begin{equation}\n\\log L = \\sum _{i=1} ^M {\\biggl[ \\Sigma_{rec}(r_i, \\phi_i)-C \\biggr]}^2,\n\\label{eq:d17}\n\\end{equation}\nwhere $\\Sigma_{rec}(r, \\phi)$ is the reconstructed density function and\n$C$ is isodensity level. The summation runs over equally spaced points\non ellipse. For a given ellipse, a grid of semimajor axis values are\nspecified and the surface density $C$, position angle $\\phi_0$ and\neccentricity $e$ which maximizes $\\log L$ are found. The results\nare presented in Figures \\ref{fig:levels} and \\ref{fig:angles}.\n\nFigure \\ref{fig:levels} indicates that the density profile drops to\nhalf of its central value at about 4 kpc. The half-length would then\nbe about 4 kpc, in good agreement with the value obtained in Paper I.\nIf we take this value as the size of the major axis of the bar, then\nthe axis ratio varies from 2.2 in the central regions to 2.7 in the\nouter regions of the bar. The value of the position angle for the\nentire extent of the bar (out to 4 kpc) is $\\approx 19^{\\circ}$. 
The\naccuracy of the position angle determination can be quantified in\nterms of confidence interval, making use of the fact that in the limit\nof large number of sources $N$, the likelihood in $n$ dimensions is\ndistributed as $\\chi^2\/2$ with $n$ degrees of freedom (e.g. Lehmann\n1959). We analyzed the likelihood as the function of a single variable\n-- orientation angle of the bar in the plane. The analysis gives the\nuncertainty of $1^{\\circ}$ at $3\\sigma$ level.\n\nAnother way to determine the parameters of the bar is to look at the\nmap of the ratio of non-axisymmetric to axisymmetric components of the\ndensity. The ratio displays two peaks at $3.3\\pm0.1$ kpc located on\nthe opposite sides from the center, the line connecting them has the\nposition angle of $\\sim24^{\\circ}\\pm2^{\\circ}$. The peak ratio, the\nrelative strength of the bar, is $0.73$. This implies the existence\nof a strong bar in the intermediate age population responsible for the\nAGB stars.\n\n\\subsection{Disk scale length}\n\nHaving calculated the source density, we are in a position to\ncharacterize the parent population of the IRAS variables. In Paper I,\nwe assumed that these variables represented a disk population based on\ntheir flux distribution but several colleagues have suggested in\ndiscussion that the IRAS variables are more likely to be bulge stars.\nHere, we determine the scale length of the population in the Galactic\nplane. For comparison, we fit our reconstruction by an oblate\nspheroid model (G0 bulge model from the DIRBE study by Dwek et\nal. 1995):\n\\begin{equation} \n\\Sigma_{G0} ( x, y ) = \\Sigma_0 e^{-0.5 r^2},\n\\label{eq:d18}\n\\end{equation}\nwith $r^2 = ( x^2 + y^2 ) \/ r_0^2$. The scale length $r_0$ is found\nby minimizing the following cost function while simultaneously\nsatisfying the overall normalization constraint for $\\Sigma_{G0}$\n(eq. \\ref{eq:d14}):\n\\begin{equation} \n\\hbox{cost} = \\int d^2 r {\\biggl[ \\Sigma_{rec}-\\Sigma_{G0} \\biggr]}^2.\n\\label{eq:d19}\n\\end{equation}\nTo estimate the value of $r_0$, we used the covariance matrix from\nthe likelihood \nanalysis used to determine $\\Sigma_{rec}$ to make 5000 Monte Carlo\nrealizations of the source density. The ensemble of realizations,\nthen, have $\\Sigma_{rec}$ as their mean. For each realization, we\nfound $r_0$ by minimizing the cost function (\\ref{eq:d19}) and the\nresulting distribution of scale lengths is shown in Figure\n\\ref{fig:dwek}. Our result $r_0 = 4.00\\pm0.55$ kpc indicates that the\nIRAS variables have the scale length of the old disk population. This\nvalue is in good agreement with the scale length $4.5$ kpc reported by\nHabing (1988), derived from analysis of a color-selected IRAS sample.\nDwek's value obtained by analyzing bulge emission was\n$r_0=0.91\\pm0.01$ kpc. The factor of $4$ difference between the scale\nlengths suggests that the IRAS bar and the bulge-bar belong to\ndistinct populations.\n\n\\subsection{Optical depth due to microlensing}\n\nOriginally proposed as a test for dark matter in the Milky Way halo\n(Paczy\\'nski 1986), gravitational microlensing was later shown (Griest\net al. 1991; Paczy\\'nski 1991) to be potentially useful for extracting\ninformation about the inner regions of our Galaxy. Three groups (OGLE,\nMACHO and EROS) are monitoring stars in the Galactic bulge for\ngravitational microlensing and have found higher event rates\nthan most theoretical estimates. Udalski et al. 
(1994) derived\nlensing optical depth $\\tau = (3.3 \\pm 1.2) \\times 10^{-6}$ toward the\nBaade's window ($l = 1^{\\circ}, b = -3.9^{\\circ}$) based on the\nanalysis of the OGLE data, and MACHO group reported $\\tau =\n3.9^{+1.8}_{-1.2} \\times 10^{-6}$ (Alcock et al. 1995a) estimated from\nthe sample of clump giants, while theoretical estimates give optical\ndepths in the range $0.5 - 2.0 \\times 10^{-6}$ (e.g. Alcock et\nal. 1995a; Evans 1994). Following Paczy\\'nski's et al. (1994) suggestion that\na bar with a small inclination angle could enhance the optical depth,\nZhao et al. (1995) have developed a detailed bar model and found $\\tau\n= (2.2 \\pm 0.5) \\times 10^{-6}$.\nHere, we estimate the optical depth using our density\nreconstruction, $\\Sigma_{rec}$, assuming that our AGB sample represents\nthe entire stellar disk.\n\nThe lensing optical depth is defined as the probability of any of the\nsources being lensed with magnification factor $A > 1.34$, with\n\\begin {equation}\nA = {u^2+2 \\over u \\sqrt{u^2+4}}, \\qquad u \\equiv {r \\over R_E}\n\\label {eq:d20}\n\\end {equation}\n(Refsdal 1964), where $r$ is the distance between the projected position \nof the source and the lensing mass, $R_E$ is the radius of Einstein ring.\nKiraga \\& Paczy\\'nski (1994) derived\n\\begin {equation}\n\\tau = {4 \\pi G \\over c^2}\\,\\, \n{\\int _0 ^{\\infty} \\left[ \\int _0 ^{D_s} \\rho \\, {D_d(D_s-D_d) \\over D_s} \\,\\, dD_d \\right] \n\\rho \\, D_s^{2+2\\beta}\\, dD_s \\over \\int _0 ^{\\infty} \\rho \\, D_s^{2+2\\beta}\\, dD_s},\n\\label {eq:d21}\n\\end {equation}\nwhere $D_s$ is the distance to the sources, $D_d$ is the distance to\nthe deflectors and the free parameter $\\beta$ accounts for\ndetectablity of sources in a flux-limited survey. The reasonable\nrange is $-3 \\le \\beta \\le -1$ and we take $\\beta = -1$ following\nEvans (1994) and Kiraga \\& Paczy\\'nski (1994). \nThe density $\\rho = \\rho_{bulge}\n+ \\rho_{disk}$, with $\\rho_{bulge}$ given by equation (1) of Kent\n(1992), and\n\\begin {equation}\n\\rho_{disk} = C \\, \\Sigma_{44} (r, \\phi) \\, e^{-|z|\/h}, \n\\label {eq:d22}\n\\end {equation}\nwhere $\\Sigma_{44}$ is the surface density of our $i=4, j=4$ model (\\ref{eq:d15}) and \n$h=0.325$ kpc is the scale height. \nWe explored two possible normalization prescriptions: (1) Assign a\nlocal column density of $\\sim 50\\, M_{\\odot}\\, pc^{-2}$ (``canonical\ndisk'' following Kuijken \\& Gilmore 1989; Gould 1990). The mass of the\ndisk in this case is $M_{disk} = 1.95 \\times 10^{10} M_{\\odot}$. \n(2) Assign the total disk mass of $M = 6 \\times 10^{10} M_{\\odot}$\n(Bahcall \\& Soneira 1980). The second normalization gives local\ncolumn density of approximately $100\\, M_{\\odot}\\, pc^{-2}$ (``maximal\ndisk'' of Alcock et al. 1995b). We prefer the latter here because the\noptical depth estimate depends on the global mass distribution rather\nthan the local density. In addition, there are some indications that\nthe variation of the column density with galactic longitude may be\nquite significant -- a factor of $2-3$ (Rix \\& Zaritsky 1995; Gnedin,\nGoodman \\& Frei 1995). The mass of the bulge is $M_{bulge} = 1.65\n\\times 10^{10} M_{\\odot}$.\n\nFor the canonical disk case, the total lensing optical depth at\nBaade's window is $1.1 \\times 10^{-6}$, and both bulge and disk lenses\ncontribute $50\\%$ to that number. Most of the optical depth (76\\%) is\ndue to lensing of bulge sources. If the disk is maximal, optical depth\nis $1.6 \\times 10^{-6}$. 
Disk lenses now account for $1.1 \\times 10^{-6}$\n(68\\% of the total optical depth) and the contribution by bulge sources\nstill dominates (59\\%).\nFor both scenarios, optical depth is a function of the\norientation of the bar. We investigate the enhancement produced by\nthe bar over axisymmetric models of the disk $\\rho\\propto e^{-r\/R} \\,\ne^{-|z|\/h}$, where $R = 3.5$ kpc for fixed disk mass. Figure\n\\ref{fig:baadetau} displays the ratio of optical depths of\nnon-axisymmetric to axisymmetric disk models as a function of the\nposition angle of the bar for both normalization scenarios.\nThe difference between the two curves illustrates the role of the disk\nin lensing. The largest enhancement of approximately 30\\% obtains\nwhen the bar is aligned along the line of sight as expected. The ratio\nof optical depths decreases gradually when the bar is in the first\nGalactic quadrant, with $\\ge 20\\%$ enhancement out to $\\phi_0 =\n50^{\\circ}$.\n\nCurrent generation optical-band lensing surveys have concentrated on\nlow-extinction bulge-centered windows to maximize the lensing event\nrate. An infrared-band lensing microlensing survey would be less\nconstrained by extinction and therefore more efficient probe of the\noverall structure of the Galaxy. In particular, any bar which is not\nperfectly aligned along the Sun--Galactic Center axis will produce an\nasymmetry in the optical depth. We describe this asymmetry by the ratio\nof the difference in optical depths at positive and negative longitude\nto their arithmetic mean. This ratio is shown in Figure \\ref{fig:reldiff} for\nour model (cf. eqns. \\ref{eq:d21} and \\ref{eq:d22}). Comparison with\nthe Bahcall \\& Soneira model (1980) suggests that $\\beta\\approx-1$ is\na fair approximation of the high-luminosity end of the disk luminosity\nfunction. Therefore, equation (\\ref{eq:d21}) also applies at large\n$|l|$ where both lenses and sources are disk members. The large 40\\%\nasymmetry about $|l|\\approx30^{\\circ}$ is due to a local increase in\nthe surface density at negative longitudes close to the observer\n(Figure \\ref{fig:dens_even}). More important than the details of\nasymmetry is the suggestion that a pencil-beam microlensing survey in\nthe infrared would be sensitive to global asymmetries in the stellar\ndisk component. Confusion is not a limitation at $b=0^\\circ$ for larger\nvalues of $| l |$ and the optical depth has a magnitude similar to Baade's\nwindow.\n\n\\section{Summary and discussion} \\label{sec:summary}\n\nThis paper explores a model-independent Bayesian estimation of the\nstellar density from star counts, rigorously accounting for incomplete\ndata. The general approach can incorporate multiple colors and even\ndifferent databases. The usual high dimensionality and topological\ncomplexity of the posterior distribution, however, complicates both\noptimization algorithms and subsequent moment analyses. We propose\nhere a hybrid downhill plus directed-search Monte Carlo algorithm; the\nformer speeds convergence and the latter facilitates the location of\nthe global extremum. Other similar and potentially more efficient\ntechniques which can bypass the extremization step altogether (such as\ngeneral Markov Chain Monte Carlo) are worth careful consideration.\n\nApplication of the technique to the variability-selected sample\ndescribed in Weinberg (1992), assumed to be AGB stars, confirms the\npresence of a strong non-axisymmetric feature in the first Galactic\nquadrant. 
By imposing bisymmetry on the source density, a clear signature of a bar is obtained. The size and shape of the density isophotes suggest a bar semi-major axis of approximately 4 kpc and a position angle of $\phi_0 = 18^\circ \pm 2^\circ$ at the outer edge of the bar. The analysis of the scale length for the AGB candidate distribution gives $r_0=4.00\pm0.55$ kpc, indicating that these objects are part of the old disk population.

Finally, we use our estimate of the non-axisymmetric Galactic disk to explore the optical depth to gravitational microlensing by bulge and disk stars. The disk bar does enhance the optical depth $\tau$ towards Baade's window by roughly 30\% but the overall value is still roughly a factor of two below the MACHO result $\tau = 3.9^{+1.8}_{-1.2} \times 10^{-6}$. Of interest for future microlensing surveys is the finding that our inferred large-scale bar will produce a significant asymmetry in $\tau$ at positive and negative longitudes beyond the bulge. The peak asymmetry for our model occurs at $|l|=30^\circ$, and at $b=0$ we predict values of $\tau$ similar to those of the Baade's window field. Such a survey might best be carried out in the infrared to take advantage of the low interstellar extinction and the colors of the late-type giants. At $|l|\gtrsim30^\circ$, confusion should not be a limitation at $b=0^\circ$.

\acknowledgements

We thank Steve Price and Mike Skrutskie for comments. This work was supported in part by NASA grant NAG 5-1999 and the Alfred P. Sloan Foundation.

\section{Introduction}

In this paper, we study the problem of producing near-optimal solutions of random optimization problems by polynomials of low degree in the input data. Namely, we prove that no low-degree polynomial can succeed at achieving a certain objective value in two optimization problems: (a) optimizing the Hamiltonian of the (spherical or Ising) $p$-spin glass model, and (b) finding a large independent set in a sparse \ER graph, with high probability in the realization of the problem. We rule out polynomials of degree as large as $cn$ for the $p$-spin glass models and as large as $cn/\log n$ for the independent set problem for a constant $c$, provided the algorithm is assumed to succeed modulo exponentially small in $n$ probability, where $n$ is the problem dimension. More generally, we provide a tradeoff between the degree of polynomials that we rule out and the success probability assumed. For the spherical $p$-spin model, we also give a lower bound against Langevin dynamics.

Our motivation for focusing on ``low-degree'' approximations is two-fold. Firstly, from an approximation theory perspective, producing near-optimal solutions by a polynomial in the input is very natural. Indeed, in many problems of interest the best known polynomial-time algorithms can be placed within the family of low-degree methods. For example, in the settings we consider here, the best known polynomial-time optimization results can be captured by the approximate message passing (AMP) framework~\cite{montanari-sk,EMS-opt} (for the $p$-spin) and by the class of local algorithms on sparse graphs~\cite{LauerWormald} (for the independent set problem), respectively. 
Both of these families of algorithms are captured by constant-degree polynomials; see Appendix~\\ref{app:low-deg-alg} for more details.\nFor spherical $p$-spin glass models, earlier work of \\cite{subag-full-rsb} introduced an algorithm which performs as well as AMP; we expect this algorithm to also fall into the family of low-degree methods, but verifying this is less clear. Secondly, a recent line of work~\\cite{p-cal,HS-bayesian,sos-hidden,sam-thesis} on the \\emph{sum-of-squares hierarchy} has produced compelling evidence that the power of low-degree polynomials is a good proxy for the intrinsic computational complexity of a broad class of \\emph{hypothesis testing} problems. Below, we briefly review this theory of low-degree polynomials in hypothesis testing.\n\nThe low-degree framework was initiated in \\cite{HS-bayesian,sos-hidden,sam-thesis} to study computational hardness in hypothesis testing problems. Specifically, this line of work has focused on high-dimensional testing problems where the goal is to determine whether a given sample (e.g., an $n$-vertex graph) was drawn from the ``null'' distribution $\\QQ_n$ (e.g., the \\ER model) or the ``planted'' distribution $\\PP_n$ (e.g., a random graph with planted structure such as a large clique or a small cut). Through an explicit and relatively straightforward calculation, one can determine whether there exists a (multivariate) polynomial $f$ (in the entries of the observed sample) of a given degree $D = D(n)$ that can distinguish $\\PP_n$ from $\\QQ_n$ (in a particular sense) \\cite{HS-bayesian,sos-hidden,sam-thesis}. A conjecture of Hopkins~\\cite{sam-thesis} (inspired by~\\cite{p-cal,HS-bayesian,sos-hidden}) postulates that for ``natural'' high-dimensional testing problems, if there is a polynomial-time algorithm to distinguish $\\PP_n, \\QQ_n$ (with error probability $o(1)$) then there is also an $O(\\log n)$-degree polynomial that can distinguish $\\PP_n, \\QQ_n$. One justification for this conjecture is its deep connection with the \\emph{sum-of-squares (SoS) hierarchy}---a powerful class of meta-algorithms---and in particular the \\emph{pseudo-calibration} approach~\\cite{p-cal}, which suggests that low-degree polynomials are as powerful as any SoS algorithm (see~\\cite{sos-hidden,sam-thesis,sos-survey} for details). Another justification for the conjecture is that $O(\\log n)$-degree polynomials can capture a very broad class of spectral methods (see~\\cite[Theorem~4.4]{lowdeg-notes} for specifics), which in turn capture the best known algorithms for many high-dimensional testing problems (e.g., \\cite{tensor-pca-sos,fast-sos,sos-hidden}). For many classical statistical tasks---planted clique, sparse PCA, community detection, tensor PCA, etc.---it has indeed been verified that $O(\\log n)$-degree polynomials succeed (at testing) in the same parameter regime as the best known polynomial-time algorithms (e.g.,~\\cite{HS-bayesian,sos-hidden,sam-thesis,sk-cert,lowdeg-notes,subexp-sparse}). (Oftentimes, the hypothesis testing variants of these types of problems seem to be equally hard as the more standard task of recovering the planted signal.) Lower bounds against low-degree polynomials are one concrete form of evidence that the existing algorithms for these problems cannot be improved (at least without drastically new algorithmic techniques). 
For more details on the low-degree framework for hypothesis testing, we refer the reader to~\\cite{sam-thesis,lowdeg-notes}.\n\nOne goal of the current work is to extend the low-degree framework to the setting of random optimization problems. This includes defining what it means for a low-degree polynomial to succeed at an optimization task, and giving techniques by which one can prove lower bounds against all low-degree polynomials. One difference between the optimization and testing settings is that many existing optimization algorithms can be represented as constant-degree polynomials (see Appendix~\\ref{app:low-deg-alg}), instead of the $O(\\log n)$-degree required in the testing case. A substantial difficulty that we face in the optimization setting is that, in contrast to the testing setting, it does not seem possible to prove lower bounds against low-degree polynomials via a straightforward explicit calculation. To overcome this, our proofs take a more indirect route and leverage a certain structural property---the \\emph{overlap gap property (OGP)}---of the optimization landscape, combined with stability properties of low-degree polynomials. We also use similar techniques to give lower bounds against Langevin dynamics, a canonical Monte Carlo analogue of gradient descent; while this is not a low-degree polynomial (due to its continuous-time nature), it is similar in spirit and has similar stability properties.\n\nWhile the OGP has been used to rule out various classes of other algorithms previously (see below), its usage in our current setting presents some substantial technical difficulties which we need to overcome. Roughly speaking, the property states that for every pair of nearly-optimal solutions $x_1$ and $x_2$, their normalized overlap (normalized inner product) measured with respect to the ambient Hilbert space must lie in a disjoint union of intervals $[0,\\nu_1]\\cup [\\nu_2,1]$. This property extends to the case of families of instances as well in the sense that even if one considers a natural interpolation between two independent instances of the problem, for every two members of the interpolated family and every pair of solutions $x_1,x_2$ which are near optimizers for these two members, respectively, it is still the case that the overlap of $x_1$ and $x_2$ belongs to $[0,\\nu_1]\\cup [\\nu_2,1]$. The main idea of the proof from OGP is based on the contradiction argument. If the result of the algorithm is known to be stable then, denoting by $x(t)$ the result of the algorithm corresponding to the interpolation step $t$, it should be the case that the overlap between $x(0)$ and $x(t)$ changes ``continuously''. At the same time we show separately that the starting solution $x(0)$ and terminal solution $x(1)$ have an overlap at most $\\nu_1$, and thus at some point the overlap between $x(0)$ and $x(t)$ belongs to $(\\nu_1,\\nu_2)$, which is a contradiction. \n\nEstablishing stability for low-degree polynomials and Langevin dynamics is quite non-trivial and constitutes the key technical contribution of the paper. For the case of polynomials, these stability results harness results from Gaussian and Boolean Fourier analysis. We prove two separate variants of this stability result, depending on whether the random input is Gaussian- or Bernoulli-distributed. 
A key technical result in the Gaussian case is Theorem~\\ref{thm:hyp-stable} which informally states that if we have two $\\rho$-correlated random instances $X$ and $Y$ of a random tensor, and $f$ is a {vector-valued} low-degree polynomial defined on such tensors, then the distance $\\|f(X)-f(Y)\\|_2$ is unlikely to exceed a certain value which depends continuously on $\\rho$. In particular this distance is small when $\\rho\\approx 1$. Proving this result relies on a well-known consequence of hypercontractivity for low-degree polynomials,\nand basic properties of Hermite polynomials (the orthogonal polynomials of the Gaussian measure).\nIn the case of Bernoulli-distributed inputs, we prove a related stability result (Theorem~\\ref{thm:binary-stable}) which shows that when the input variables are resampled one at a time, the output of a vector-valued low-degree polynomial will never change significantly in one step, with nontrivial probability. The proof involves the notion of total influence from Boolean analysis, as well as a direct proof by induction on the dimension.\nThe proof of stability for Langevin dynamics is based on the continuous dependence of stochastic differential equations {on their coefficients}. \n\nThe OGP emerged for the first time in the context of spin glass theory and random constraint satisfaction problems. It was first proven implicitly in~\\cite{achlioptas2008algorithmic}, \\cite{AchlioptasCojaOghlanRicciTersenghi}, and \\cite{mezard2005clustering}. These papers established that the set of satisfying assignments of a random K-SAT formula partitions into clusters above a certain clause-to-variables density. This was postulated as evidence of algorithmic hardness of finding satisfying assignments for such densities. Implicitly, the proof reveals that the overlaps of satisfying assignments exhibit the OGP, and clustering is inferred from this. It is worth noting that while OGP implies the existence of clusters, the converse is not necessarily the case, as one can easily construct a clustered space of solutions with overlaps spanning the entire interval $[0,1]$. \nA direct algorithmic implication of the OGP was shown for the first time in~\\cite{gamarnik2014limits}, where OGP was proven to be a barrier for local algorithms---defined as the so-called \\emph{factors of i.i.d.\\ (FIID)}---designed to find large independent sets in sparse \\ER graphs. The OGP was used to show that, asymptotically, these algorithms cannot find independent sets larger than a multiplicative factor $1\/2+1\/(2\\sqrt{2}) \\approx 0.85$ of optimal. The present paper recovers this result as a special case, since (as we discuss in Appendix~\\ref{app:low-deg-alg}) local algorithms can be captured by constant-degree polynomials. The lower bound against local algorithms was improved by~\\cite{rahman2017local} to a multiplicative factor of $1\/2$. This is the best possible since $1\/2$-optimal independent sets can be found by local algorithms; more precisely, this was shown in \\cite{LauerWormald} for the case of random regular graphs, but a similar result is expected to hold for sparse \\ER graphs as well (although we are not aware of any literature formally verifying this). It is not clear how to improve the multiplicative factor in the lower bound to $1\/2$ for low-degree polynomials, as~\\cite{rahman2017local} uses a more sophisticated variant of OGP than we use here. 
Several subsequent papers used OGP to rule out various classes of algorithms, including local algorithms for finding large cuts in random hypergraphs~\\cite{chen2019suboptimality}, random walk--based algorithms (WALKSAT)~\\cite{coja2017walksat}, and AMP-type algorithms for optimizing the Hamiltonian of the Ising $p$-spin model~\\cite{gamarnik2019overlap}. \nThe current work draws inspiration from a key idea in \\cite{chen2019suboptimality,gamarnik2019overlap}, namely that a particular variant of OGP---the same variant that we use in the current work---implies failure of any sufficiently ``stable'' algorithm.\n\n\nWe emphasize that the class of algorithms ruled out by the lower bounds in this paper (namely, low-degree polynomials) not only captures existing methods such as AMP and local algorithms, but contains a strictly larger (in a substantial way) class of algorithms than prior work on random optimization problems. We now illustrate this claim in the setting of the $p$-spin optimization problem. The best known polynomial-time algorithms for optimizing the $p$-spin Hamiltonian are captured by the AMP framework \\cite{montanari-sk,EMS-opt}. Roughly speaking, AMP algorithms combine a linear update step (tensor power iteration) with entry-wise non-linear operations. For a fairly general class of $p$-spin optimization problems (including spherical and Ising mixed $p$-spin models), it is now known precisely what objective value can be reached by the best possible AMP algorithm~\\cite{EMS-opt}. While this may seem like the end of the story, we point out that for the related \\emph{tensor PCA} problem---which is a variant of the $p$-spin model with a planted rank-1 signal---AMP is known to be substantially sub-optimal compared to other polynomial-time algorithms~\\cite{RM-tensor}. None of the best known polynomial-time algorithms~\\cite{RM-tensor,tensor-pca-sos,fast-sos,kikuchi,hastings-quantum,replicated-gradient} use the tensor power iteration step as in AMP, and there is evidence that this is fundamental~\\cite{algorithmic-tensor}; instead, the optimal algorithms include spectral methods derived from different tensor operations such as \\emph{tensor unfolding}~\\cite{RM-tensor,tensor-pca-sos} (which can be interpreted as a higher-order ``lifting'' of AMP~\\cite{kikuchi}). These spectral methods are captured by $O(\\log n)$-degree polynomials. With this in mind, we should \\emph{a priori} be concerned that AMP might also be sub-optimal for the (non-planted) $p$-spin optimization problem. This highlights the need for lower bounds that rule out not just AMP, but all low-degree polynomial algorithms. While the lower bounds in this paper do not achieve the precise optimal thresholds for objective value, they rule out quite a large class of algorithms compared to existing lower bounds for random optimization problems.\n\nWe refer the reader to Appendix~\\ref{app:low-deg-alg} for a more detailed discussion of how various optimization algorithms can be approximated by low-degree polynomials.\n\n\n\n\\subsubsection*{Notation}\n\nWe use $\\| \\cdot \\|_2$ and $\\langle \\cdot,\\cdot \\rangle$ to denote the standard $\\ell^2$ norm and inner product of vectors. We also use the same notation to denote the Frobenius norm and inner product of tensors. We use the term \\emph{polynomial} both to refer to (multivariate) polynomials $\\RR^m \\to \\RR$ in the usual sense, and to refer to vector-valued polynomials $\\RR^m \\to \\RR^n$ defined as in~\\eqref{eq:vec-val-poly}. 
We abuse notation and use the term \\emph{degree-$D$ polynomial} to mean a polynomial of degree \\emph{at most} $D$. A \\emph{random polynomial} has possibly-random coefficients, as defined in Section~\\ref{sec:poly-alg}. We use $A^c$ to denote the complement of an event $A$. Unless stated otherwise, asymptotic notation such as $o(1)$ or $\\Omega(n)$ refers to the limit $n \\to \\infty$ with all other parameters held fixed. In other words, this notation may hide constant factors depending on other parameters such as the degree $d$ in the independent set problem.\n\n\n\n\\section{Main Results}\n\\label{sec:main-results}\n\n\n\\subsection{Optimizing the $p$-Spin Glass Hamiltonian}\n\n\nThe first class of problems we consider here is optimization of the (pure) $p$-spin glass Hamiltonian, defined as follows. Fix an integer $p \\geq 2$ and let $Y \\in (\\RR^n)^{\\otimes p}$ be a $p$-tensor with real coefficients. For $x \\in \\RR^n$, consider the objective function\n\\begin{equation}\\label{eq:p-spin-def}\nH_n(x;Y) = \\frac{1}{n^{(p+1)\/2}} \\g{Y,x^\\tp}.\n\\end{equation}\nNote that all homogeneous polynomials of degree $p$ (in the variables $x$) can be written in this form for some $Y$. We focus on the case of a random coefficient tensor $Y$. In this setting, the function $H_n$ is \nsometimes called the Hamiltonian for a $p$-spin glass model in the statistical physics literature. More precisely, for various choices of a (compact) domain $\\cX_n \\subset \\R^n$, we are interested in approximately solving the optimization problem\n\\begin{equation}\\label{eq:max-H}\n\\max_{x \\in \\cX_{n}} H_n(x;Y)\n\\end{equation}\ngiven a random realization of the coefficient tensor $Y$ with i.i.d\\ $\\mathcal{N}(0,1)$ entries. Here and in the following we let $\\PP_Y$ denote the law of $Y$. (When it is clear from context we omit the subscript $Y$.)\n\n\nWe begin first with a simple norm constraint, namely, we will take as domain \n$\\cS_n =\\{x\\in\\R^n: \\norm{x}_2 =\\sqrt n\\}$, the sphere in $\\R^n$ of radius $\\sqrt{n}$.\nWe then turn to understanding a binary constraint, namely where the domain is the discrete hypercube $\\Sigma_n =\\{+1,-1\\}^n$. Following the statistical physics literature, in the former setting, we call the objective the \\emph{spherical} $p$-spin glass Hamiltonian and the latter setting the \\emph{Ising} $p$-spin glass Hamiltonian.\n\nIn both settings, quite a lot is known about the maximum. It can be shown \\cite{Ton02,JagTob17} that the maximum value of $H_n$ has an almost sure limit (as $n \\to \\infty$ with $p$ fixed), \ncalled the \\emph{ground state energy}, which we will denote by $E_p(\\cS)$ \nfor the spherical setting and $E_p(\\Sigma)$ for the Ising setting. Explicit \nvariational formulas are known for $E_p(\\cS)$ \\cite{ABC13,JagTob17,ChenSen17} and $E_p(\\Sigma)$ \\cite{AuffChen18,JS17}.\n\nAlgorithmically, it is known how to find, in polynomial time, a solution of value $E_p^\\infty(\\cS) - \\varepsilon$ or $E_p^\\infty(\\Sigma_n) - \\varepsilon$ (respectively for the spherical and Ising settings) for any constant $\\varepsilon > 0$~\\cite{subag-full-rsb,montanari-sk,EMS-opt}. In both the spherical and Ising settings, these constants satisfy $E_2^\\infty = E_2$ and $E_p^\\infty < E_p$ for $p \\ge 3$. 
In other words, it is known how to efficiently optimize arbitrarily close to the optimal value in the $p=2$ case, but not when $p \\ge 3$.\n\n\n\n\\subsubsection{Low-Degree Polynomial Algorithms}\\label{sec:poly-alg}\n\nOur goal here is to understand how well one can optimize~\\eqref{eq:max-H} via the output of a vector-valued low-degree polynomial in the coefficients $Y$. To simplify notation we will often abuse notation and refer to the space of \n$p$-tensors on $\\R^n$ by $\\R^m \\cong (\\R^n)^\\tp$ where $m=n^p$.\n\nWe say that a function $f:\\RR^m\\to\\RR^n$ is a polynomial of degree (at most) $D$ if it may be written in the form \n\\begin{equation}\\label{eq:vec-val-poly}\nf(Y) = (f_1(Y),\\ldots,f_n(Y)),\n\\end{equation}\nwhere each $f_i:\\RR^m\\to\\R$ is a polynomial of degree at most $D$. \n\nWe will also consider the case where $f$ is allowed to have random coefficients, \nprovided that these coefficients are independent of $Y$. That is, we will assume that \nthere is some probability space $(\\Omega,\\PP_\\omega)$ and that \n$f:\\RR^m\\times\\Omega\\to\\RR^n$ is such that $f(\\cdot,\\omega)$ is a polynomial of degree \nat most $D$ for each $\\omega\\in\\Omega$. We will abuse notation and refer to this as a \\emph{random polynomial} $f: \\RR^m \\to \\RR^n$.\n\nOur precise notion of what it means for a polynomial to optimize~$H_n$ will depend somewhat on the domain~$\\cX_n$. This is because it is too much to ask for the polynomial's output to lie in $\\cX_n$ exactly, and so we fix a canonical rounding scheme that maps the polynomial's output to $\\cX_n$. We begin by defining this notion for the sphere: $\\cX_n = \\cS_n$.\n\n\\paragraph{The spherical case.}\n\nWe will round a polynomial's output to the sphere $\\cS_n$ by normalizing it in the standard way. To this end, for a random polynomial $f: \\RR^m \\to \\RR^n$ we define the random function $g_f: \\RR^m \\to \\cS_n \\cup \\{\\infty\\}$ by\n\\[\ng_f(Y,\\omega) = \\sqrt n \\frac{f(Y,\\omega)}{\\norm{f(Y,\\omega)}_2},\n\\]\nwith the convention $g_f(Y,\\omega) = \\infty$ if $f(Y,\\omega)=0$.\n\n\\begin{definition}\nFor parameters $\\mu \\in \\RR$, $\\delta \\in [0,1]$, $\\gamma \\in [0,1]$, and a random polynomial $f: \\RR^m \\to \\RR^n$, we say that $f$ $(\\mu,\\delta,\\gamma)$-optimizes the objective \\eqref{eq:p-spin-def} on $\\cS_n$ if the following are satisfied when $(Y,\\omega)\\sim\\PP_Y\\otimes \\PP_\\omega$:\n\\begin{itemize}\n \\item $\\displaystyle \\Ex_{Y,\\omega} \\|f(Y,\\omega)\\|^2_2 = n$ \\; (normalization).\n \\item With probability at least $1-\\delta$ over $Y$ and $\\omega$, we have both $H_n(g_f(Y,\\omega);Y) \\ge \\mu$ and $\\|f(Y,\\omega)\\|_2 \\ge \\gamma \\sqrt{n}$.\n\\end{itemize}\n\\end{definition}\n\n\n\\noindent Implicitly in this definition, the case $f(Y,\\omega)=0$ must occur with probability at most $\\delta$. The meaning of the parameters $(\\mu,\\delta,\\gamma)$ is as follows: $\\mu$ is the objective value attained after normalizing the polynomial's output to the sphere, and $\\delta$ is the algorithm's failure probability. Finally, $\\gamma$ is involved in the norm bound $\\|f(Y,\\omega)\\|_2 \\ge \\gamma \\sqrt{n}$ that we need for technical reasons. Since the domain is $\\cS_n$, $f$ is ``supposed to'' output a vector of norm $\\sqrt{n}$. While we do not require this to hold exactly (and have corrected for this by normalizing $f$'s output), we do need to require that $f$ usually does not output a vector of norm too much smaller than $\\sqrt{n}$. 
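\nFor concreteness, the following minimal NumPy sketch shows how the objective \\eqref{eq:p-spin-def} is evaluated and how an output vector is rounded to $\\cS_n$ via $g_f$, including the norm condition $\\|f(Y,\\omega)\\|_2 \\ge \\gamma \\sqrt{n}$. It is only an illustration of the definitions: the degree-$1$ map playing the role of $f$ below is an arbitrary stand-in (not one of the AMP-type algorithms of~\\cite{subag-full-rsb,montanari-sk,EMS-opt}), and the sizes and the value of $\\gamma$ are placeholders.\n\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 4                          # placeholder size and (even) tensor order
Y = rng.standard_normal((n,) * p)     # i.i.d. N(0,1) coefficient tensor

def H(x, Y):
    # H_n(x; Y) = n^{-(p+1)/2} * <Y, x tensored p times>
    val = Y
    for _ in range(p):                # contract one mode of Y with x at a time
        val = np.tensordot(val, x, axes=([0], [0]))
    return val / n ** ((p + 1) / 2)

# Arbitrary degree-1 stand-in for a low-degree algorithm's output f(Y).
f_out = Y.sum(axis=tuple(range(1, p))) / n ** ((p - 1) / 2)

gamma = 0.5
if np.linalg.norm(f_out) >= gamma * np.sqrt(n):     # norm condition
    g = np.sqrt(n) * f_out / np.linalg.norm(f_out)  # g_f: project to the sphere
    print(H(g, Y))                    # the quantity H_n(g_f(Y, omega); Y)
\\end{verbatim}
\n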
This norm bound is important for our proofs because it ensures that a small change in $f(Y,\\omega)$ can only induce a small change in $g_f(Y,\\omega)$.\n\nWe now state our main result on low-degree hardness of the spherical $p$-spin model, with the proof deferred to Section~\\ref{sec:pf-lowdeg-pspin}.\n\n\\begin{theorem}\\label{thm:spherical-lowdeg}\nFor any even integer $p \\ge 4$ there exist constants $\\mu < E_p(\\cS)$, $n^* \\in \\NN$, and $\\delta^* > 0$ such that the following holds. For any $n \\ge n^*$, any $D \\in \\NN$, any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$, and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma)$-optimizes \\eqref{eq:p-spin-def} on $\\cS_n$.\n\\end{theorem}\n\n\n\\noindent A number of remarks are in order. First, this result exhibits a tradeoff between the degree $D$ of polynomials that we can rule out and the failure probability $\\delta$ that we need to assume. In order to rule out polynomials of \\emph{any} constant degree, we need only the mild assumption $\\delta = o(1)$. On the other hand, if we are willing to restrict to algorithms of failure probability $\\delta = \\exp(-cn)$ (which we believe is reasonable to expect in this setting), we can rule out all polynomials of degree $D \\le c'n$ for a constant $c' = c'(c)$. It has been observed in various hypothesis testing problems that the class of degree-$n^\\delta$ polynomials is at least as powerful as all known $\\exp(n^{\\delta-o(1)})$-time algorithms~\\cite{sam-thesis,lowdeg-notes,subexp-sparse}. This suggests that optimizing arbitrarily close to the optimal value in the spherical $p$-spin (for $p \\ge 4$ even) requires fully exponential time $\\exp(n^{1-o(1)})$.\n\nThe best known results for polynomial-time optimization of the spherical $p$-spin were first proved by~\\cite{subag-full-rsb} but can also be recovered via the AMP framework of~\\cite{EMS-opt}. As discussed in Appendix~\\ref{app:low-deg-alg}, these AMP algorithms can be captured by constant-degree polynomials. Furthermore, the output of such an algorithm concentrates tightly around $\\sqrt{n}$ and thus easily satisfies the norm bound with $\\gamma = (2\/3)^D$ required by our result. We also expect that these AMP algorithms have failure probability $\\delta = \\exp(-\\Omega(n))$; while this has not been established formally, a similar result on concentration of AMP-type algorithms has been shown by~\\cite{gamarnik2019overlap}.\n\nOur results are limited to the case where $p \\ge 4$ is even and $\\mu$ is a constant slightly smaller than the optimal value $E_p(\\cS)$. These restrictions are in place because the OGP property used in our proof is only known to hold for these values of $p$ and $\\mu$. If the OGP were proven for other values of $p$ or for a lower threshold $\\mu$, our results would immediately extend to give low-degree hardness for these parameters (see Theorem~\\ref{thm:spherical-ogp-lowdeg}). Note that we cannot hope for the result to hold when $p=2$ because this is a simple eigenvector problem with no computational hardness: there is a constant-degree algorithm to optimize arbitrarily close to the maximum (see Appendix~\\ref{app:low-deg-alg}).\n\n\n\n\\paragraph{The Ising case.}\n\nWe now turn to low-degree hardness in the Ising setting, where the domain is the hypercube: $\\cX_n = \\Sigma_n$. In this case, we round a polynomial's output to the hypercube by applying the sign function. 
For $x \\in \\RR$, let\n\\[ \\sgn(x) = \\left\\{\\begin{array}{ll} +1 & \\text{if } x \\ge 0 \\\\ -1 & \\text{if } x < 0, \\end{array}\\right. \\]\nand for a vector $x \\in \\RR^n$ let $\\sgn(x)$ denote entry-wise application of $\\sgn(\\cdot)$. We now define our notion of near optimality for a low-degree polynomial.\n\n\\begin{definition}\nFor parameters $\\mu \\in \\RR$, $\\delta \\in [0,1]$, $\\gamma \\in [0,1]$, $\\eta \\in [0,1]$, and a random polynomial $f: \\RR^m \\to \\RR^n$, we say that $f$ $(\\mu,\\delta,\\gamma,\\eta)$-optimizes the objective \\eqref{eq:p-spin-def} on $\\Sigma_n$ if the following are satisfied.\n\\begin{itemize}\n \\item $\\displaystyle \\Ex_{Y,\\omega} \\|f(Y,\\omega)\\|^2_2 = n$ \\; (normalization).\n \\item With probability at least $1-\\delta$ over $Y$ and $\\omega$, we have both $H_n(\\sgn(f(Y,\\omega));Y) \\ge \\mu$ and \\mbox{$|\\{i \\in [n] \\;:\\; |f_i(Y,\\omega)| \\ge \\gamma\\}| \\ge (1 - \\eta)n$}. \n\\end{itemize}\n\\end{definition}\n\n\\noindent The interpretation of these parameters is similar to the spherical case, with the addition of $\\eta$ to take into account issues related to rounding. More precisely, as in the spherical case, $\\mu$ is the objective value attained after rounding the polynomial's output to the hypercube, and $\\delta$ is the failure probability. The parameters $\\gamma, \\eta$ are involved in an additional technical condition, which requires $f$'s output not to be too ``small'' in a particular sense. Specifically, all but an $\\eta$-fraction of the coordinates of $f$'s output must exceed $\\gamma$ in magnitude. The need for this condition in our proof arises in order to prevent a small change in $f(Y,\\omega)$ from inducing a large change in $\\sgn(f(Y,\\omega))$.\n\nWe have the following result on low-degree hardness in the Ising setting. The proof is deferred to Section~\\ref{sec:pf-lowdeg-pspin}.\n\\begin{theorem}\\label{thm:ising-lowdeg}\nFor any even integer $p \\ge 4$ there exist constants $\\mu < E_p(\\Sigma)$, $n^* \\in \\NN$, $\\delta^* > 0$, and $\\eta > 0$ such that the following holds. For any $n \\ge n^*$, any $D \\in \\NN$, any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$, and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma, \\eta)$-optimizes \\eqref{eq:p-spin-def} on $\\Sigma_n$.\n\\end{theorem}\n\n\\noindent This result is very similar to the spherical case, and the discussion following Theorem~\\ref{thm:spherical-lowdeg} also applies here. The best known algorithms for the Ising case also fall into the AMP framework~\\cite{montanari-sk,EMS-opt} and are thus captured by constant-degree polynomials. These polynomials output a solution ``close'' to the hypercube in a way that satisfies our technical condition involving $\\gamma, \\eta$. As in the spherical case, the case $p=2$ is computationally tractable; here it is not a simple eigenvector problem but can nonetheless be solved by the AMP algorithm of~\\cite{montanari-sk,EMS-opt}.\n\n\n\n\n\\subsubsection{Langevin Dynamics and Gradient Descent}\n\n\n\nOne natural motivation for understanding low-degree hardness is to investigate the performance of natural iterative schemes, such as power iteration or gradient descent. In the spherical $p$-spin model, the natural analogue of these algorithms (in continuous time) are \\emph{Langevin dynamics} and \\emph{gradient flow}. 
\nWhile these are not directly low-degree methods, the overlap gap property can still be seen to imply hardness for these results in a fairly transparent manner. \n\nTo make this precise, let us introduce the following.\nLet $B_t$ denote spherical Brownian motion. (For a textbook introduction to spherical Brownian motion see, e.g., \\cite{Hsu02}.) For any variance $\\sigma\\geq 0$, we introduce \\emph{Langevin dynamics} for $H_n$ \nto be the strong solution to the stochastic differential equation\n\\[\ndX_t = \\sigma dB_t + \\nabla H_n(X_t;Y)dt,\n\\]\nwith $X_0=x$, where here $\\nabla$ denotes the spherical gradient. Note that since $H_n(x;Y)$ is a polynomial in $x$, $H_n$ is (surely) smooth and consequently the solution is well-defined in the strong sense \\cite{Hsu02}. The case $\\sigma = 0$ is referred to as \\emph{gradient flow} on the sphere. \n\nIn this setting, it is natural to study the performance with random starts which are independent of $Y$, e.g., a uniform at random start. In this case, if the initial distribution is given by $X_0\\sim\\nu$ for some $\\nu\\in\\cM_1(\\cS_n)$, the space of probability measures on $\\cS_n$, we will denote the law by $Q_\\nu$. In this setting we have the following result which is, again, a consequence of the overlap gap property. \n\n\\begin{theorem}\\label{thm:langevin-main}\nLet $p\\geq 4$ be even. There exists $\\mu < E_p(\\cS)$ and $c>0$ such that for any $\\sigma\\geq0$, $T\\geq0$ fixed, \n $n$ sufficiently large, and $\\nu\\in\\cM_1(\\cS_n)$, if $X_t$ denotes Langevin dynamics for $H_n(\\cdot;Y)$ with variance $\\sigma$ and initial data $\\nu$, then\n\\[\n\\PP_Y\\otimes Q_\\nu(H_n(X_T;Y) \\leq \\mu )\\geq 1-\\exp(-c n).\n\\]\nIn particular, the result holds for $\\nu_n = \\mathrm{Unif}(\\cS_n)$, the uniform measure on $\\cS_n$.\n\\end{theorem}\n\n\\noindent The proof can be found in Section~\\ref{sec:pf-langevin}. To our knowledge, this is the first proof that neither Langevin dynamics nor gradient descent reach the ground state started from uniform at random start. {We note furthermore, that the above applies even to $T \\leq c' \\log n$ for some $c'>0$ sufficiently small.}\n\nThere has been a tremendous amount of attention paid to the Langevin dynamics of spherical $p$-spin glass models. It is impossible here to provide a complete reference though we point the reader here to the surveys \\cite{BCKM98,Cug03,Gui07,jagannath2019dynamics}. \nTo date, much of the analysis of the dynamics in the \\emph{non-activated} regime considered here ($n\\to \\infty$ and then $t\\to\\infty$) has concentrated on the Crisanti--Horner--Sommers--Cugiandolo--Kurchan (CHSCK) equations approach \\cite{crisanti1993sphericalp,CugKur93}.\nThis approach centers around the analysis \nof a system of integro-differential equations which are satisfied by the scaling limit of natural observables of the underlying system. While this property of the scaling limit has now been shown rigorously \\cite{BADG01,BADG06}, there is limited rigorous understanding of the solutions of the CHSCK equations beyond the case when $p=2$.\nA far richer picture is expected here related to the phenomenon of \\emph{aging} \\cite{Gui07,BA02}. \n\nMore recently a new, differential inequality--based approach to understanding this regime was introduced in \\cite{BGJ20}, which provides upper and lower bounds on the energy level reached for a given initial data. 
That being said, this upper bound is nontrivial only for $\\sigma$ sufficiently large.\n\nWe end by noting that overlap gap--like properties, namely ``free energy barriers'' have been used to develop spectral gap estimates for Langevin dynamics which control the corresponding $L^2$-mixing time \\cite{GJ16,arous2018spectral}. In \\cite{arous2018spectral}, it was shown that exponentially-small spectral gaps are connected to the existence of free energy barriers for the overlap, which at very low temperatures can be shown to be equivalent to a variant of the overlap gap property in this setting. To our knowledge, however, this work is the first approach to connect the behavior of Langevin dynamics in the non-activated regime ($n\\to\\infty$ and then $t\\to\\infty$) that utilizes the overlap distribution. Finally we note here that the overlap gap property has been connected to the spectral gap for local, reversible dynamics of Ising spin glass models in \\cite{arous2018spectral} as well as to gradient descent and approximate message passing schemes in \\cite{gamarnik2019overlap}. \n\n\n\n\\subsection{Maximum Independent Set Problem in Sparse Random Graphs}\n\nWe now consider the problem of finding a large independent set in a sparse random graph. Here, we are given the adjacency matrix of an $n$-vertex graph, represented as $Y \\in \\{0,1\\}^m$ where $m = \\binom{n}{2}$. We write $Y \\sim G(n,d\/n)$ to denote an \\ER graph on $n$ nodes with edge probability $d\/n$, i.e., every possible edge occurs independently with probability $d\/n$. We are interested in the regime where first $n \\to \\infty$ (with $d$ fixed) and then $d \\to \\infty$. A subset of nodes $S\\subseteq [n]$ is an \\emph{independent set} if it spans no edges, i.e., for every $i,j \\in S$, $(i,j)$ is not an edge. Letting $\\cI(Y)$ denote the set of all independent sets of the graph $Y$, consider the optimization problem\n\\begin{equation}\\label{eq:max-indep}\n\\max_{S \\in \\cI(Y)} |S|\n\\end{equation}\nwhere $Y \\sim G(n,d\/n)$.\n\nAs $n \\to \\infty$ with $d$ fixed, the rescaled optimum value of~\\eqref{eq:max-indep} is known to converge to some limit with high probability:\n\\begin{align*}\n \\frac{1}{n}\\, {\\max_{S \\in \\cI(Y)} |S|}\\to\\alpha_d,\n\\end{align*}\nas shown in~\\cite{BayatiGamarnikTetali}. The limit $\\alpha_d$ is known to have the following asymptotic behavior as $d\\to\\infty$:\n\\begin{align*}\n \\alpha_d=(1+o_d(1)){2\\log d\\over d},\n\\end{align*}\nas is known since the work of Frieze~\\cite{FriezeIndependentSet}.\nThe best known polynomial-time algorithm for this problem is achieved by a straightforward greedy algorithm which constructs a $1\/2$-optimal independent set, i.e., an independent set of size $\\frac{\\log d}{d} n$ asymptotically as $n \\to \\infty$ and then $d \\to \\infty$.\n\nWe will study the ability of low-degree polynomials to find a large independent set. It is too much to ask for a polynomial to exactly output the indicator {vector} of an independent set, so we fix the following rounding scheme that takes a polynomial's output and returns an independent set. Recall the terminology for random polynomials defined in Section~\\ref{sec:poly-alg}.\n\n\\begin{definition}\nLet $f: \\{0,1\\}^m \\to \\RR^n$ be a random polynomial. For $Y \\in \\{0,1\\}^m$, and $\\eta > 0$, let $V^\\eta_f(Y,\\omega) \\in \\cI(Y)$ be the independent set obtained by the following procedure. 
Let\n\\[A = \\{i \\in [n] \\,:\\, f_i(Y,\\omega) \\ge 1\\},\\] \\[\\tilde A = \\{i \\in A \\,:\\, \\text{$i$ has no neighbors in $A$ in the graph $Y$}\\},\\]\nand\n\\[B = \\{i \\in [n] \\,:\\, f_i(Y,\\omega) \\in (1\/2,1)\\}.\\]\nLet\n\\[ V^\\eta_f(Y,\\omega) = \\left\\{\\begin{array}{ll} \\tilde A & \\text{if } |A \\setminus \\tilde A| + |B| \\le \\eta n, \\\\ \\emptyset & \\text{otherwise.} \\end{array}\\right. \\]\n\\end{definition}\n\n\\noindent In other words, $f$ should output a value $\\ge 1$ to indicate that a vertex is in the independent set and should output a value $\\le 1\/2$ to indicate that it is not. It is allowed to make up to $\\eta n$ ``errors'', each of which can either be a vertex for which the output value lies in $(1\/2,1)$, or a vertex that violates the independent set constraint. Vertices that violate the independent set constraint are thrown out, and if too many errors are made then the empty set $\\emptyset$ is returned. For our proofs it is crucial that this definition of $V_f^\\eta$ ensures that a small change in $f(Y,\\omega)$ cannot induce a large change in the resulting independent set $V_f^\\eta(Y,\\omega)$ (without encountering the failure event $\\emptyset$).\n\nWe now formally define what it means for a polynomial to find a large independent set.\n\\begin{definition}\nFor parameters $k \\in \\NN$, $\\delta \\in [0,1]$, $\\gamma \\ge 1$, $\\eta > 0$, and a random polynomial $f: \\{0,1\\}^m \\to \\RR^n$, we say that $f$ $(k,\\delta,\\gamma,\\eta)$-optimizes~\\eqref{eq:max-indep} if the following are satisfied.\n\\begin{itemize}\n \\item $\\displaystyle \\Ex_{Y,\\omega} \\|f(Y,\\omega)\\|^2_2 \\le \\gamma k$.\n \\item With probability at least $1-\\delta$ over $Y$ and $\\omega$, we have $|V_f^\\eta(Y,\\omega)| \\ge k$.\n\\end{itemize}\n\\end{definition}\n\n\\noindent The parameter $k$ denotes the objective value attained (after rounding), i.e., the size of the independent set. For us, $k$ will be a fixed multiple of $\\frac{\\log d}{d} n$, since this is the scale of the optimum. The parameter $\\delta$ is the algorithm's failure probability. Note that if $f$ were to ``perfectly'' output the $\\{0,1\\}$-valued indicator vector of a size-$k$ independent set, then we would have $\\|f(Y,\\omega)\\|^2_2 = k$. The parameter $\\gamma$ controls the degree to which this can be violated. Finally, $\\eta$ is the fraction of ``errors'' tolerated by the rounding process $V_f^\\eta$.\n\nWe now state our main result of low-degree hardness of maximum independent set, with the proof deferred to Section~\\ref{sec:pf-lowdeg-indep}.\n\n\\begin{theorem}\\label{thm:MIS-main}\nFor any $\\alpha > 1 + 1\/\\sqrt{2}$ there exists $d^* > 0$ such that for any $d \\ge d^*$ there exist $n^* > 0$, $\\eta > 0$, and $C_1, C_2 > 0$ such that the following holds. Let $n \\ge n^*$, $\\gamma \\ge 1$, and $D \\le \\frac{C_2 n}{\\gamma \\log n}$, and suppose $\\delta \\ge 0$ satisfies\n\\[ \\delta < \\exp\\left(-C_1 \\gamma D \\log n\\right). \\]\nThen for $k = \\alpha \\frac{\\log d}{d} n$, there is no random degree-$D$ polynomial that $(k,\\delta,\\gamma,\\eta)$-optimizes~\\eqref{eq:max-indep}.\n\\end{theorem}\n\n\\noindent This shows that low-degree polynomials cannot find an independent set of size (asymptotically) exceeding $(1 + 1\/\\sqrt{2}) \\frac{\\log d}{d} n$, which is roughly $85$\\% of the optimum. 
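\nSince the rounding map $V^\\eta_f$ defined above is purely combinatorial, a short sketch may help fix ideas. The graph, the polynomial output, and the error budget below are illustrative placeholders, and no claim is made about how such an output would be produced.\n\\begin{verbatim}
def round_to_independent_set(f_out, adj, eta):
    # V^eta_f: threshold the output, drop vertices that violate independence,
    # and return the empty set if more than eta*n "errors" occur.
    n = len(f_out)
    A = {i for i in range(n) if f_out[i] >= 1.0}
    A_tilde = {i for i in A if not any(adj[i][j] for j in A if j != i)}
    B = {i for i in range(n) if 0.5 < f_out[i] < 1.0}
    errors = len(A - A_tilde) + len(B)
    return A_tilde if errors <= eta * n else set()

# Toy instance (assumed for the example): a 5-cycle and a hypothetical output.
n = 5
adj = [[0] * n for _ in range(n)]
for i in range(n):
    adj[i][(i + 1) % n] = adj[(i + 1) % n][i] = 1
print(round_to_independent_set([1.2, 0.1, 1.1, 0.2, 0.7], adj, eta=0.3))
\\end{verbatim}
\nIn this toy run the returned set $\\{0,2\\}$ is an independent set of the $5$-cycle with one ``error'' (the value $0.7$), well within the budget $\\eta n$. At scale, the relevant size is the $(1 + 1/\\sqrt{2})\\frac{\\log d}{d}\\, n$ of Theorem~\\ref{thm:MIS-main}.\n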
This is the threshold above which OGP can be shown using a first moment argument as in~\\cite{gamarnik2014limits}.\n\nIf $\\gamma$ is a constant, Theorem~\\ref{thm:MIS-main} gives a tradeoff between $D$ and $\\delta$ similar to that in our results for the $p$-spin model, although here there is an extra factor of $\\log n$. If we are willing to restrict to algorithms of failure probability $\\delta = \\exp(-cn)$ then we can rule out all polynomials of degree $D \\le c'n\/\\log n$ for a constant $c' = c'(c)$. As in the $p$-spin model, this suggests that exponential time $\\exp(n^{1-o(1)})$ is needed in order to find an independent set larger than $(1 + 1\/\\sqrt{2}) \\frac{\\log d}{d} n$.\n\nAs discussed in the introduction, the best known polynomial-time algorithm can find an independent set $1\/2$ as large as the optimum (asymptotically), and we expect this can also be achieved by a local algorithm (although this has only been shown rigorously for regular graphs). Any such local algorithm can be represented as a constant-degree polynomial (see Appendix~\\ref{app:low-deg-alg}). We expect that this polynomial satisfies our technical assumptions with parameters $k=(1+o_d(1)){\\log d\\over d}n$, $\\gamma = O(1)$, $\\delta = \\exp(-\\Omega(n))$, and any constant $\\eta > 0$ (although we have not included a formal proof of this).\n\n\n\n\n\n\\subsection{The Overlap Gap Property}\n\nAs discussed in the introduction, the preceding results will follow due to certain geometric properties of the super-level sets of the objectives. The main property is called the \\emph{overlap gap property (OGP)}. Let us begin by defining this formally in a general setting. \n\n\n\\begin{definition}\\label{definition:OGP}\nWe say that a family of real-valued functions $\\mathcal{F}$ with common domain $\\mathcal{X}\\subset \\R^n$ satisfies the \\emph{overlap gap property} for an overlap $R:\\cX\\times \\cX\\to \\R_{\\ge 0}$ \nwith parameters $\\mu \\in \\RR$ and $0\\leq\\nu_1<\\nu_2\\leq 1$ if for every $f_1,f_2\\in\\mathcal{F}$ and every $x_1,x_2\\in\\cX$ satisfying\n$f_k(x_k)\\geq \\mu$ for $k=1,2$, we have that\n$R(x_1,x_2) \\in [0,\\nu_1]\\cup [\\nu_2,1]$.\n\\end{definition}\n\\noindent For ease of notation, when this holds, we simply say that $\\mathcal{F}$ satisfies the $(\\mu,\\nu_1,\\nu_2)$-OGP for $R$ on $\\cX$. Furthermore, as it is often clear from context, we omit the dependence of the above on $R$.\n\nWhile the definition above might be satisfied for trivial reasons and thus not be informative, it will be used in this paper in the setting where $\\|x\\|_2^2\\le n$ for every $x\\in \\mathcal{X}$, $R(x_1,x_2)=|\\langle x_1,x_2\\rangle|\/n$, and with parameters chosen so that with high probability $\\mu<\\sup_{x\\in \\mathcal{X}}H(x)$ for every $H\\in\\mathcal{F}$. Thus, in particular $R(x_1,x_2)\\le 1$ for every $x_1,x_2\\in \\mathcal{X}$, and $\\mu$ measures proximity to the optimal value of each objective function $H$. The definition says informally that for every two $\\mu$-optimal solutions with respect to any two choices of objective functions, their normalized inner product is either at least $\\nu_2$ or at most $\\nu_1$.
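\nAs a minimal illustration of how the overlap enters, the sketch below computes $R(x_1,x_2)=|\\langle x_1,x_2\\rangle|/n$ for two points of $\\cS_n$ and tests whether it falls in the forbidden band $(\\nu_1,\\nu_2)$. The two random points and the values of $\\nu_1,\\nu_2$ are placeholders; independent random points are of course not near-optimizers of anything and serve only to exercise the computation.\n\\begin{verbatim}
import numpy as np

def overlap(x1, x2):
    # R(x1, x2) = |<x1, x2>| / n
    return abs(np.dot(x1, x2)) / len(x1)

def in_forbidden_band(x1, x2, nu1, nu2):
    return nu1 < overlap(x1, x2) < nu2

rng = np.random.default_rng(1)
n = 1000
x1 = rng.standard_normal(n)
x1 *= np.sqrt(n) / np.linalg.norm(x1)   # place on the sphere of radius sqrt(n)
x2 = rng.standard_normal(n)
x2 *= np.sqrt(n) / np.linalg.norm(x2)
print(overlap(x1, x2), in_forbidden_band(x1, x2, nu1=0.1, nu2=0.8))
\\end{verbatim}
\nIn the proofs below, $x_1$ and $x_2$ will instead be rounded outputs of a low-degree polynomial evaluated along an interpolation between objectives, and the OGP asserts that such pairs never land in $(\\nu_1,\\nu_2)$.\n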
\n\nIn the following, we require one other property of functions, namely separation of their superlevel sets.\n\\begin{definition}\\label{definition:well-separated}\nWe say that two real-valued functions $f,g$ with common domain $\\cX$ are $\\nu$-separated above $\\mu$ with respect to the overlap $R:\\cX\\times \\cX \\to \\R_{\\ge 0}$ if for any $x,y \\in \\cX$ with $f(x)\\geq \\mu$ and $g(y)\\geq \\mu$, we have that $R(x,y) \\leq \\nu$.\n\\end{definition}\n\\noindent This property can be thought of as a strengthening of OGP for two distinct functions. In particular, the parameter $\\nu$ will typically equal the parameter $\\nu_1$ in the definition of OGP.\n\nLet us now turn to stating the precise results regarding these properties in the settings we consider here. It can be shown that the overlap gap property holds for $p$-spin glass Hamiltonians in both the spherical and Ising settings with respect to the overlap $R(x,y) =\\frac{1}{n}\\abs{\\g{x,y}}$. More precisely, let $Y$ be a $p$-tensor with i.i.d.\\ $\\mathcal{N}(0,1)$ entries and let $Y'$ denote an independent copy of $Y$. \nConsider the corresponding family of real-valued functions\n\\begin{equation}\\label{eq:interpolated-family-p-spin}\n\\cA(Y,Y') =\\{\\cos(\\tau) H_n(\\cdot\\,;Y)+\\sin(\\tau)H_n(\\cdot\\,;Y') \\,:\\, \\tau \\in [0,\\pi\/2]\\}.\n\\end{equation}\nWe then have the following, which will follow by combining bounds from \\cite{ChenSen17,AuffChen18}. The second result is a restatement of \\cite[Theorem 3.4]{gamarnik2019overlap}. The proof can be found in Section~\\ref{sec:pf-ogp-pspin}.\n\n\\begin{theorem} \\label{thm:pspin-ogp}\nTake as overlap $R(x,y) =\\frac{1}{n}\\abs{\\g{x,y}}$ and let $Y$ and $Y'$ be independent $p$-tensors with i.i.d.\\ $\\mathcal{N}(0,1)$ entries. For every even $p\\geq4$\nthere exists an $\\eps>0$ such that the following holds:\n\\begin{enumerate}\n \\item For the domain $\\cS_n$, there are some $0\\leq\\nu_1<\\nu_2\\leq1$ and some $c>0$ such that the following holds with probability at least $1-\\exp(-c n)$:\n \\begin{itemize}\n \\item $\\cA(Y,Y')$ has the overlap gap property for $R$ with parameters $(E_p(\\cS)-\\eps,\\nu_1,\\nu_2)$. \n \\item $H_n(\\cdot\\,;Y)$ and $H_n(\\cdot\\,;Y')$ are $\\nu_1$-separated above $E_p(\\cS)-\\eps$ with respect to $R$.\n \\end{itemize}\n \\item For the domain $\\Sigma_n$, there are some $0\\leq\\nu_1<\\nu_2\\leq1$ and some $c>0$ such that the following holds with probability at least $1-\\exp(-c n)$:\n \\begin{itemize}\n \\item $\\cA(Y,Y')$ has the overlap gap property for $R$ with parameters $(E_p(\\Sigma)-\\eps,\\nu_1,\\nu_2)$.\n \\item $H_n(\\cdot\\,;Y)$ and $H_n(\\cdot\\,;Y')$ are $\\nu_1$-separated above $E_p(\\Sigma)-\\eps$ with respect to $R$.\n \\end{itemize}\n\\end{enumerate}\n\\end{theorem}\n\n\nLet us now turn to the maximum independent set problem. We begin by observing that this family of optimization problems may be placed on a common domain. To this end, consider as domain the Boolean hypercube $\\cB_n =\\{0,1\\}^n$.
\nNote that by viewing a vector $x$ as the indicator function of the set $S=S(x):=\\{i:x_i =1\\}$, we have a correspondence between the points $x\\in\\cB_n$ and subsets of the vertex set $[n]$.\nLet $m = \\binom{n}{2}$, let $Y \\in \\{0,1\\}^m$ denote the adjacency matrix of some graph on $[n]$ vertices, and consider the function $F(x;Y)$ given by \n\\[\nF(x;Y) = \\abs{S(x)} \\cdot \\One\\{S(x)\\in \\mathcal{I}(Y)\\}.\n\\]\nThe maximum independent set problem for $Y$ can then be written in the form \n\\[\n\\max_{x\\in\\cB_n} F(x;Y).\n\\]\n\n\n\\noindent Let us now construct the analogue of the family $\\cA(Y,Y')$ from \\eqref{eq:interpolated-family-p-spin} in this setting. \n\\begin{definition}\\label{def:path}\nFor $Y, Y' \\in \\{0,1\\}^m$, the \\emph{path from $Y$ to $Y'$} is $Y = Z_0 \\to Z_1 \\to \\cdots \\to Z_m = Y'$ where $(Z_i)_j = Y_j$ for $j > i$ and $(Z_i)_j = Y'_j$ otherwise. The path is denoted by $Y\\mapsto Y'$. \n\\end{definition}\n\\noindent Here (and throughout) we have fixed an arbitrary order by which to index the edges of a graph (the coordinates of $Y$).\n\n\n\nNow let $Y, Y' \\in \\{0,1\\}^m$ be (the adjacency matrices of) independent $G(n,d\/n)$ random graphs. We can then consider the family of functions \n\\begin{equation}\\label{eq:interpolated-family-MIS}\n\\cF(Y,Y') = \\{F(\\cdot\\,;Z) \\,:\\, Z \\text{ is on the path } Y\\mapsto Y'\\}.\n\\end{equation}\n\n\\noindent We can now state the relevant overlap gap property.\n\n\\begin{theorem}\\label{thm:ogp-graph}\nFor any $\\alpha > 1 + 1\/\\sqrt{2}$ there exist constants $0 \\le \\tilde\\nu_1 < \\tilde\\nu_2 \\le 1$ and $d^* > 0$ such that for any constant $d \\ge d^*$, the following holds. If $Y,Y' \\sim G(n,d\/n)$ independently, the following holds with probability at least $1-\\exp(-\\Omega(n))$.\n\\begin{itemize}\n\\item The family of functions $\\mathcal{F}$ from \\eqref{eq:interpolated-family-MIS} with domain $\\mathcal{X}=\\mathcal{B}_n$ satisfies the overlap gap property with overlap $R(x_1,x_2)=\\frac{1}{n} |\\langle x_1,x_2\\rangle|$ and parameters $\\mu = k := \\alpha \\frac{\\log d}{d} n$, $\\nu_1 = \\tilde \\nu_1 \\frac{k}{n}$, $\\nu_2 = \\tilde \\nu_2 \\frac{k}{n}$ with probability at least $1-\\exp(-\\Omega(n))$.\n\\item Furthermore, the functions $F(\\cdot\\,;Y)$ and $F(\\cdot\\,;Y')$ are $\\nu_1$-separated above $\\mu$.\n\\end{itemize}\n\\end{theorem}\n\n\\noindent Above (and throughout), $\\Omega(n)$ pertains to the limit $n \\to \\infty$ with $\\alpha,d$ fixed, i.e., it hides a constant factor depending on $\\alpha,d$. Note that here the overlap is simply the (normalized) cardinality of the intersection of the two sets: $R(x_1,x_2) = \\frac{1}{n}|S(x_1) \\cap S(x_2)|$.\n\nThe proof of Theorem~\\ref{thm:ogp-graph}---which is deferred to Section~\\ref{sec:pf-ogp-indep}---is an adaptation of the first moment argument of~\\cite{gamarnik2014limits}: we compute the expected number of pairs of independent sets whose overlap lies in the ``forbidden'' region, and show that this is exponentially small.\n\n\n\n\n\\section{Proofs for $p$-Spin Model}\\label{sec:pf-pspin}\n\n\\subsection{Low-Degree Polynomials are Stable}\n\nIn this section we prove a noise stability--type result for polynomials of Gaussians, which will be a key ingredient in our proofs. Throughout this section, let $d\\geq 1$ and let $Y\\in \\RR^d$ be a vector with i.i.d.\\ standard Gaussian entries. Denote the standard Gaussian measure on $\\R^d$ by $\\Gamma^d$. 
For two standard Gaussian random vectors defined on the same probability space, we write $X\\sim_\\rho Y$ if their covariance satisfies $\\mathrm{Cov}(X,Y) = \\rho\\, I$ for some $\\rho \\in [0,1]$, where $I$ denotes the identity matrix. Throughout this section, all polynomials have non-random coefficients. The goal of this section is to prove the following stability result.\n\\begin{theorem}\n\\label{thm:hyp-stable}\nLet $0\\leq \\rho\\leq 1$.\nLet $X,Y$ be a pair of standard Gaussian random vectors on $\\R^d$ such that $X\\sim_\\rho Y$. Let $P$ denote the joint law of $X, Y$. Let $f: \\R^d \\to \\RR^k$ be a (deterministic) polynomial of degree at most $D$ with $\\EE \\norm{f(X)}_2^2 = 1$. For any $t \\ge (6e)^D$,\n\\[ P(\\|f(X) - f(Y)\\|_2^2 \\ge 2t(1-\\rho^D)) \\le \\exp\\left(-\\frac{D}{3e} t^{1\/D}\\right). \\]\n\\end{theorem}\n\n\nWe begin by recalling the following standard consequence of hypercontractivity; see Theorem~5.10 and Remark~5.11 of \\cite{janson-gaussian} or \\cite[Sec.\\ 3.2]{LedouxTalagrand}.\n\\begin{proposition*}[Hypercontractivity for polynomials]\nIf $f: \\R^d \\to \\RR$ is a degree-$D$ polynomial and $q \\in [2,\\infty)$ then\n\\begin{equation}\\label{eq:hyp-moment}\n\\Ex\\left[|f(Y)|^q\\right] \\le (q-1)^{qD\/2} \\Ex[f(Y)^2]^{q\/2}. \n\\end{equation} \n\\end{proposition*}\n\n\\noindent Let us now note the following useful corollary of this result for vector-valued polynomials.\n\n\\begin{lemma}\n\\label{lem:2-norm-moment}\nIf $f: \\RR^d \\to \\RR^k$ is a degree-$D$ polynomial and $q \\in [2,\\infty)$ then\n\\[ \n\\Ex[\\|f(Y)\\|_2^{2q}] \\le [3(q-1)]^{qD} \\Ex[\\|f(Y)\\|_2^2]^q.\n\\]\n\\end{lemma}\n\\begin{proof}\nLet us begin by observing that by the Cauchy--Schwarz inequality and \\eqref{eq:hyp-moment}, \n\\begin{align}\n\\EE[\\|f(Y)\\|_2^4] \n&\\le \\sum_i \\EE[f_i(Y)^4] + 2 \\sum_{i < j} \\sqrt{\\EE[f_i(Y)^4]\\EE[f_j(Y)^4]}\\nonumber \\\\\n&\\le \\sum_i 9^D \\EE[f_i(Y)^2]^2 + 2 \\sum_{i < j} 9^D \\EE[f_i(Y)^2] \\EE[f_j(Y)^2]= 9^D \\left(\\E \\norm{f(Y)}_2^2\\right)^2.\\label{eq:2-norm-moment-1}\n\\end{align}\nOn the other hand, since $\\|f(Y)\\|_2^2$ is a polynomial of degree at most $2D$, we may again apply \\eqref{eq:hyp-moment} to obtain\n\\[\n\\EE[\\|f(Y)\\|_2^{2q}] \\le (q-1)^{qD} \\EE[\\|f(Y)\\|_2^4]^{q\/2} \\leq [3(q-1)]^{qD}\\,\\E[\\norm{f(Y)}_2^2]^q\n\\]\nas desired, where in the last inequality we used \\eqref{eq:2-norm-moment-1}.\n\\end{proof}\n\n\\noindent With these results in hand we may now prove the following preliminary tail bound.\n\\begin{proposition}\n\\label{prop:2-norm-tail}\nIf $f: \\R^d \\to \\RR^k$ is a degree-$D$ polynomial, then for any $t \\ge (6e)^D$,\n\\[ \n\\Gamma^d(\\|f(Y)\\|_2^2 \\ge t \\,\\EE[\\|f(Y)\\|_2^2]) \\le \\exp\\left(-\\frac{D}{3e} t^{1\/D}\\right). 
\n\\]\n\\end{proposition}\n\\noindent (Recall that $\\Gamma^d(\\cdot)$ denotes probability under standard Gaussian measure.)\n\\begin{proof}\nUsing Lemma~\\ref{lem:2-norm-moment}, for any $q \\in [2,\\infty)$,\n\\begin{align*}\n\\Gamma^d(\\|f(Y)\\|_2^2 \\ge t) &= \\Gamma^d(\\|f(Y)\\|_2^{2q} \\ge t^q) \\le \\EE[\\|f(Y)\\|_2^{2q}]t^{-q}\\\\\n& \\le [3(q-1)]^{qD} \\EE[\\|f(Y)\\|_2^2]^q t^{-q} \n\\le (3q)^{qD}\\, \\EE[\\|f(Y)\\|_2^2]^q\\, t^{-q}\n\\end{align*}\nand so, letting $q = t^{1\/D}\/(3e) \\ge 2$,\n\\[ \\Gamma^d(\\|f(Y)\\|_2^2 \\ge t \\,\\EE[\\|f(Y)\\|_2^2]) \\le [(3q)^D\/t]^q = \\exp(-Dq) = \\exp(-D t^{1\/D}\/(3e)).\\qedhere \\]\n\\end{proof}\n\n\\noindent It will be helpful to recall the \\emph{noise operator}, $T_\\rho:L^2(\\Gamma^d)\\to L^2(\\Gamma^d)$, defined by\n\\[\nT_\\rho f(x) = \\E f(\\rho x+\\sqrt{1-\\rho^2}Y)\n\\]\nwhere $\\rho \\in [0,1]$. Recall that for $t\\geq 0$, $P_t := T_{e^{-t}}$ is the classical Ornstein-Uhlenbeck semigroup. In particular,\nif $(h_\\ell)$ are the Hermite polynomials on $\\R$ normalized to be an orthonormal basis for $L^2(\\Gamma^1)$, then the eigenfunctions of $T_\\rho$\nare given by products of Hermite polynomials \\cite{LedouxTalagrand}.\nIn particular, for any $\\psi(x)$ of the form $\\psi(x) = h_{\\ell_1}(x_1)\\cdots h_{\\ell_d}(x_d)$. \nwe have\n\\begin{equation}\\label{eq:hermite-eigenval}\nT_\\rho \\psi (x) = \\rho^D \\psi(x)\n\\end{equation}\nwhere $D=\\sum \\ell_j$. With this in hand we are now in position to prove the following inequality. \n \\begin{lemma}\\label{lem:ex-noise}\nIf $f: \\RR^d \\to \\RR^k$ is a degree-$D$ polynomial with $\\EE \\|f(Y)\\|_2^2 = 1$, then for any $\\rho \\in [0,1]$, if $X\\sim_\\rho Y$,\n\\[ \n\\Ex \\|f(X) - f(Y)\\|_2^2 \\le 2(1 - \\rho^D). \n\\] \n\\end{lemma}\n\\begin{proof}\nLet $X_\\rho$ be given by\n\\[\nX_\\rho = \\rho Y + \\sqrt{1-\\rho^2} Y',\n\\]\nwhere $Y'$ is an independent copy of $Y$. Observe that $(X_\\rho,Y)$ is equal in law to $(X,Y)$. In this case, we see that\n\\begin{align*}\n \\Ex\\|f(X) - f(Y)\\|_2^2 \n = 2 - 2 \\EE \\langle f(X), f(Y) \\rangle \n = 2 - 2 \\EE \\langle f(X_\\rho), f(Y) \\rangle\n = 2 - 2 \\EE \\langle T_\\rho f(Y), f(Y) \\rangle.\n\\end{align*}\nConsider the collection of products of real valued Hermite polynomials of degree at most $D$,\n\\[\n\\mathcal{H}_D =\\{\\psi:\\R^d\\to\\R \\,:\\, \\psi(x) = h_{\\ell_1}(x_1)\\cdots h_{\\ell_d}(x_d)\\; s.t.\\, \\sum \\ell_i \\leq D\\}.\n\\]\nObserve that $\\mathcal{H}_D$ is an orthonormal system in $L^2(\\Gamma^d)$ and that the collection of real-valued polynomials $p:\\R^d\\to\\R$ of degree at most $D$ is contained in its closed linear span. As such, since $\\rho^D\\leq \\rho^s$ for $0\\leq s\\leq D$, we see that for any $1\\leq i \\leq d$,\n\\[\n\\rho^D \\E f_i(Y)^2 \\leq \\E\\, T_\\rho f_i(Y) f_i(Y) \\leq \\E f_i(Y)^2\n\\]\nby \\eqref{eq:hermite-eigenval}. Summing in $i$ yields \n\\[\n\\rho^D\\leq \\E \\langle T_\\rho f(Y),f(Y)\\rangle \\leq 1.\n\\]\nCombining this with the preceding bound yields the desired inequality.\n\\end{proof}\n\n\\noindent We are now in position to prove the main theorem of this section.\n\\begin{proof}[Proof of Theorem~\\ref{thm:hyp-stable}]\nLet $Y'$ be an independent copy of $Y$. Then \nif we let $\\tilde{Y} = (Y,Y')$, this is a standard Gaussian vector on $\\R^{2d}$. 
Furthermore if we let\n\\[\nh(\\tilde{Y})= f(Y) - f(\\rho Y + \\sqrt{1-\\rho^2}Y'),\n\\]\nthen $h$ is a polynomial of degree at most $D$ in $\\tilde{Y}$ and, by Lemma~\\ref{lem:ex-noise}, \n\\[\n\\EE \\|h(\\tilde Y)\\|_2^2=\\E\\norm{f(X)-f(Y)}_2^2 \\le 2(1-\\rho^D).\n\\]\nThe result now follows from Proposition~\\ref{prop:2-norm-tail}.\n\\end{proof}\n\n\n\n\n\n\\subsection{Failure of Low-Degree Algorithms}\\label{sec:pf-lowdeg-pspin}\n\nIn this section we prove our main results on low-degree hardness for the spherical and Ising $p$-spin models (Theorems~\\ref{thm:spherical-lowdeg} and \\ref{thm:ising-lowdeg}). The main content of this section is to show that the OGP and separation properties imply failure of stable algorithms, following an interpolation argument similar to~\\cite{gamarnik2019overlap}. The main results then follow by combining this with the stability of low-degree polynomials (Theorem~\\ref{thm:hyp-stable}) and the fact that OGP and separation are known to hold (Theorem~\\ref{thm:pspin-ogp}).\n\n\\paragraph{The spherical case.}\n\nWe begin by observing the following elementary fact: when two vectors of norm at least $\\gamma$ are normalized onto the unit sphere, the distance between them can only increase by a factor of $\\gamma^{-1}$.\n\\begin{lemma}\\label{lem:norm-bound}\nIf $\\|x\\|_2 = \\|y\\|_2 = 1$ and $a \\ge \\gamma$, $b \\ge \\gamma$ then $\\|x - y\\|_2 \\le \\gamma^{-1} \\|ax - by\\|_2$.\n\\end{lemma}\n\\begin{proof}\n We have\n \\[ \\|ax - by\\|_2^2 = a^2 + b^2 - 2ab \\langle x,y \\rangle = (a-b)^2 + ab \\|x - y\\|_2^2 \\ge \\gamma^2 \\|x - y\\|_2^2.\\qedhere \\]\n\\end{proof}\n\n\\noindent Throughout the following, it will be convenient to define the following interpolated family of tensors. Consider $(Y_\\tau)_{\\tau \\in [0,\\pi\/2]}$ defined by\n\\begin{equation}\\label{eq:Y-tau-def}\nY_\\tau = \\cos(\\tau) Y+ \\sin(\\tau) Y'.\n\\end{equation}\nNote that by linearity of inner products, we may equivalently write $\\cA(Y,Y')$ from \\eqref{eq:interpolated-family-p-spin} as\n\\[\n\\cA(Y,Y') =\\{ H_n(x;Y_\\tau) \\,:\\, \\tau \\in [0,\\pi\/2]\\}.\n\\]\n\n\\noindent The following result shows that together, the OGP and separation properties imply failure of low-degree polynomials for the spherical $p$-spin.\n\n\\begin{theorem}\\label{thm:spherical-ogp-lowdeg}\nFor any $0 \\le \\nu_1 < \\nu_2 \\le 1$, there exists a constant $\\delta^* > 0$ such that the following holds. Let $p, n, D \\in \\NN$ and $\\mu \\in \\RR$.\nSuppose that $Y,Y'$ are independent $p$-tensors with i.i.d.\\ standard Gaussian entries and let $\\cA(Y,Y')$ be as in \\eqref{eq:interpolated-family-p-spin}. Suppose further that with probability at least $3\/4$ over $Y,Y'$, we have that $\\cA(Y,Y')$ has the $(\\mu,\\nu_1,\\nu_2)$-OGP on domain $\\cS_n$ {with overlap $R=|\\langle \\cdot,\\cdot\\rangle|\/n$}, and that $H_n(\\cdot\\,,Y)$ and $H_n(\\cdot\\,,Y')$ are $\\nu_1$ separated above $\\mu$. Then for any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$ and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma)$-optimizes \\eqref{eq:p-spin-def} on $\\cS_n$.\n\\end{theorem}\n\n\\begin{proof}\nLet $Y,Y'$ be as in the statement of the theorem, and let $P=\\PP_Y \\otimes \\PP_\\omega$ denote the joint law of $(Y,\\omega).$\nAssume on the contrary that $f$ is a random degree $D$ polynomial which $(\\mu,\\delta,\\gamma)$-optimizes $H_n(\\cdot,Y)$. We first reduce to the case where $f$ is deterministic. 
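\nBefore continuing the argument, a brief numerical aside may clarify the two ingredients it combines: the interpolation \\eqref{eq:Y-tau-def} makes consecutive tensors $Y_{\\tau_\\ell}$ and $Y_{\\tau_{\\ell+1}}$ $\\rho$-correlated with $\\rho=\\cos(\\pi/(2L))$, and Theorem~\\ref{thm:hyp-stable} then keeps $\\|f(Y_{\\tau_\\ell})-f(Y_{\\tau_{\\ell+1}})\\|_2$ small. The degree-$1$ map below is an arbitrary stand-in for $f$, and the sizes are placeholders; this is a sanity check of the mechanism, not part of the proof.\n\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, p, L = 20, 4, 50
Y  = rng.standard_normal((n,) * p)
Yp = rng.standard_normal((n,) * p)     # independent copy Y'

def f(T):
    # arbitrary degree-1 stand-in for a low-degree algorithm's output
    return T.sum(axis=tuple(range(1, p))) / n ** ((p - 1) / 2)

taus = np.linspace(0.0, np.pi / 2, L + 1)
outputs = [f(np.cos(t) * Y + np.sin(t) * Yp) for t in taus]
steps = [np.linalg.norm(outputs[i + 1] - outputs[i]) for i in range(L)]
print(max(steps), np.linalg.norm(outputs[-1] - outputs[0]))
\\end{verbatim}
\nConsecutive outputs move very little, while the endpoints, which correspond to the independent tensors $Y$ and $Y'$, differ substantially; the argument below turns precisely this tension into a contradiction with the overlap gap property. We now return to the proof.\n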
\n\nLet $A(Y,\\omega)$ denote the ``failure'' event\n\\[\nA(Y,\\omega)= \\{H_n(g_f(Y,\\omega);Y) < \\mu \\;\\vee\\; \\|f(Y,\\omega)\\|_2 < \\gamma \\sqrt{n}\\}.\n\\]\nSince $\\EE \\|f(Y,\\omega)\\|_2^2 = n$ and $\\PP(A(Y,\\omega)) \\le \\delta$, we have by Markov's inequality, \n\\[\n\\PP_\\omega\\{\\EE_Y \\|f(Y,\\omega)\\|_2^2 \\ge 3n\\} \\le 1\/3\n\\quad \\text{ and }\\quad \n\\PP_\\omega(\\PP_Y(A(Y,\\omega)) \\ge 3\\delta) \\le 1\/3.\n\\]\nThis means that there exists an $\\omega^* \\in \\Omega$ such that $\\EE_Y \\|f(Y,\\omega^*)\\|_2^2 \\le 3n$ and $\\PP_Y\\{A(Y,\\omega^*)\\} \\le 3\\delta$. Fix this choice of $\\omega = \\omega^*$ so that $f(\\cdot) = f(\\cdot,\\omega^*)$ becomes a deterministic function.\n\nLet $Y,Y' \\in (\\RR^n)^\\tp$ be independently i.i.d.\\ $\\mathcal{N}(0,1)$, let $Y_\\tau$ be as in \\eqref{eq:Y-tau-def}, and $\\cA(Y,Y')$ as in \\eqref{eq:interpolated-family-p-spin}.\nFor some $L \\in \\NN$ to be chosen later, divide the interval $[0,\\pi\/2]$ into $L$ equal sub-intervals: $0 = \\tau_0 < \\tau_1 < \\cdots < \\tau_L = \\pi\/2$, and let $x_\\ell = g_f(Y_{\\tau_\\ell})$. We claim that with positive probability (over $Y,Y'$), all of the following events occur simultaneously and that this leads to a contradiction:\n\\begin{enumerate}\n \\item [(i)] The family $\\cA(Y,Y')$ has the $(\\mu,\\nu_1,\\nu_2)$-OGP on $\\cS_n$ and $H_n(\\cdot,Y)$ and $H_n(\\cdot, Y')$ are $\\nu_1$-separated above $\\mu$.\n \\item [(ii)] For all $\\ell \\in \\{0,1,\\ldots,L\\}$, $f$ succeeds on input $Y_{\\tau_\\ell}$, i.e., the event $A(Y_{\\tau_\\ell},\\omega^*)^c$ holds.\n \\item[(iii)] For all $\\ell \\in \\{0,1,\\ldots,L-1\\}$, $\\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 < \\gamma^2 cn$ for some $c = c(\\nu_1,\\nu_2) > 0$ to be chosen later.\n\\end{enumerate}\nFirst, let us see why (i)-(iii) imply a contradiction. Combining (i) and (ii) gives $|\\frac{1}{n} \\langle x_0,x_\\ell \\rangle| \\in [0,\\nu_1] \\cup [\\nu_2,1]$ for all $\\ell$, and $|\\frac{1}{n} \\langle x_0,x_L \\rangle| \\in [0,\\nu_1]$. Since we also have $|\\frac{1}{n} \\langle x_0,x_0 \\rangle| = 1$, there must exist an $\\ell$ that crosses the OGP gap in the sense that\n\\[ \n\\nu_2 - \\nu_1 \\le \\frac{1}{n}\\Big|\\abs{\\g{x_0,x_\\ell}} - \\abs{\\g{x_0,x_{\\ell+1}}}\\Big| \\leq\n\\frac{1}{n}|\\langle x_0,x_\\ell \\rangle - \\langle x_0,x_{\\ell+1} \\rangle| \\leq \\frac{1}{\\sqrt n} \\|x_\\ell - x_{\\ell+1}\\|_2.\n\\]\nSince $\\|f(Y_{\\tau_\\ell})\\|_2, \\|f(Y_{\\tau_{\\ell+1}})\\|_2 \\ge \\gamma \\sqrt{n}$ by (ii), Lemma~\\ref{lem:norm-bound} gives\n\\[ \\nu_2 - \\nu_1 \\le \\frac{1}{\\sqrt n} \\|x_\\ell - x_{\\ell+1}\\|_2 \\le \\frac{1}{\\gamma \\sqrt{n}} \\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2, \\]\nwhich contradicts (iii) provided we choose $c \\le (\\nu_2-\\nu_1)^2$.\n\nIt remains to show that (i)-(iii) occur simultaneously with positive probability. By assumption, (i) fails with probability at most $1\/4$, so it is sufficient to show that (ii) and (iii) each fail with probability at most $1\/3$. By a union bound, (ii) fails with probability at most $3\\delta (L+1)$, which is at most $1\/3$ provided\n\\begin{equation}\\label{eq:L-cond-1}\nL \\le \\frac{1}{9\\delta} - 1.\n\\end{equation}\nFor (iii), we will apply Theorem~\\ref{thm:hyp-stable} with some $\\tilde D \\ge D$ (since we are allowed to use any upper bound on the degree) and $t = (6e)^{\\tilde D}$. For any $\\ell$ we have $Y_{\\tau_\\ell} \\sim_\\rho Y_{\\tau_{\\ell+1}}$ with $\\rho = \\cos\\left(\\frac{\\pi}{2L}\\right)$. 
Using $\\EE_Y \\|f(Y)\\|_2^2 \\le 3n$,\n\\begin{equation}\\label{eq:ffd}\n\\PP( \\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 \\ge 6n(6e)^{\\tilde D}(1-\\rho^{\\tilde D}) ) \\le \\exp(-2 \\tilde{D}).\n\\end{equation}\nSince\n\\[ 1-\\rho^{\\tilde D} = 1-\\cos^{\\tilde D}\\left(\\frac{\\pi}{2L}\\right) \\le 1 - \\left(1 - \\frac{1}{2}\\left(\\frac{\\pi}{2L}\\right)^2\\right)^{\\tilde D} \\le \\frac{\\tilde D}{2}\\left(\\frac{\\pi}{2L}\\right)^2, \\]\nequation~\\eqref{eq:ffd} implies\n\\[ \\PP( \\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 \\ge \\gamma^2 cn ) \\le \\exp(-2\\tilde{D}) \\]\nprovided\n\\begin{equation}\\label{eq:L-cond-2}\nL \\ge \\frac{\\pi}{2\\gamma} \\sqrt{\\frac{3\\tilde{D}}{c}}(6e)^{\\tilde D\/2}.\n\\end{equation}\nThus, (iii) fails with probability at most $L \\exp(-2\\tilde{D})$, which is at most $1\/3$ (as desired) provided\n\\begin{equation}\\label{eq:L-cond-3}\nL \\le \\frac{1}{3} \\exp(2\\tilde{D}).\n\\end{equation}\nTo complete the proof, we need to choose integers $\\tilde D \\ge D$ and $L$ satisfying~\\eqref{eq:L-cond-1}, \\eqref{eq:L-cond-2}, \\eqref{eq:L-cond-3}, i.e.,\n\\begin{equation}\\label{eq:L-final}\n\\frac{\\pi}{2\\gamma} \\sqrt{\\frac{3\\tilde{D}}{c}}(\\sqrt{6e})^{\\tilde D} \\le L \\le \\min\\left\\{\\frac{1}{9\\delta}-1, \\frac{1}{3}(e^2)^{\\tilde D}\\right\\}.\n\\end{equation}\nRequire $\\delta \\le \\frac{1}{4} \\exp(-2\\tilde{D})$ so that the second term in the $\\min\\{\\cdots\\}$ is smaller (when $\\tilde{D}$ is sufficiently large). Since $\\gamma \\ge (2\/3)^D \\ge (2\/3)^{\\tilde D}$ and $\\frac{3}{2}\\sqrt{6e} < e^2$, there now exists an $L \\in \\NN$ satisfying~\\eqref{eq:L-final} provided that $\\tilde D$ exceeds some constant $D^* = D^*(c)$. Set $\\tilde D = \\max\\{D,D^*\\}$ and $\\delta^* = \\frac{1}{4} \\exp(-2D^*)$ to complete the proof.\n\\end{proof}\n\n\\noindent Our main result on low-degree hardness of the spherical $p$-spin now follows by combining the above with the fact that OGP and separation hold in a neighborhood of the optimum.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:spherical-lowdeg}]\nThis result follows by combining Theorem~\\ref{thm:spherical-ogp-lowdeg} with Theorem~\\ref{thm:pspin-ogp}.\n\\end{proof}\n\n\\paragraph{The Ising case.} We now turn to the corresponding result for the Ising $p$-spin model, which again shows that together, OGP and separation imply failure of low-degree polynomials. \n\n\\begin{theorem}\\label{thm:ising-ogp-lowdeg}\nFor any $0 \\le \\nu_1 < \\nu_2 \\le 1$ there exist constants $\\delta^* > 0$ and $\\eta > 0$ such that the following holds. Let $p, n, D \\in \\NN$ and $\\mu \\in \\RR$. Suppose that $Y,Y'$ are independent $p$-tensors with i.i.d.\\ standard Gaussian entries and let $\\cA(Y,Y')$ be as in \\eqref{eq:interpolated-family-p-spin}. Suppose further that with probability at least $3\/4$ over $Y,Y'$, we have that $\\cA(Y,Y')$ has the $(\\mu,\\nu_1,\\nu_2)$-OGP on domain $\\Sigma_n$ with overlap $R=|\\langle \\cdot,\\cdot\\rangle|\/n$, and that $H_n(\\cdot\\,,Y)$ and $H_n(\\cdot\\,,Y')$ are $\\nu_1$ separated above $\\mu$. Then for any $\\delta \\le \\min\\{\\delta^*,\\frac{1}{4} \\exp(-2D)\\}$ and any $\\gamma \\ge (2\/3)^D$, there is no random degree-$D$ polynomial that $(\\mu, \\delta, \\gamma, \\eta)$-optimizes \\eqref{eq:p-spin-def} on $\\Sigma_n$.\n\\end{theorem}\n\\begin{proof}\nThe proof is nearly identical to that of Theorem~\\ref{thm:spherical-ogp-lowdeg} above, so we only explain the differences. 
We now define $A(Y,\\omega)$ to be the failure event \n\\[A(Y,\\omega) = \\{H_n(\\sgn(f(Y,\\omega));Y) < \\mu \\;\\vee\\; |\\{k \\in [n] \\;:\\; |f_k(Y,\\omega)| \\ge \\gamma\\}| < (1 - \\eta)n\\},\n\\]\nand define $x_\\ell = \\sgn(f(Y_{\\tau_\\ell}))$. The only part of the proof we need to modify is the proof that (i)-(iii) imply a contradiction, including the choice of $c$. As above, combining (i) and (ii) gives the existence of an $\\ell$ for which $\\nu_2 - \\nu_1 \\le \\frac{1}{\\sqrt n} \\|x_\\ell - x_{\\ell+1}\\|_2$, i.e., $\\frac{1}{4}\\|x_\\ell - x_{\\ell+1}\\|_2^2 \\ge \\frac{1}{4}(\\nu_2 - \\nu_1)^2 n$, implying that $x_\\ell$ and $x_{\\ell+1}$ differ in at least $\\Delta := \\frac{1}{4}(\\nu_2 - \\nu_1)^2 n$ coordinates. Let $\\eta = \\Delta\/(2n) = \\frac{1}{8}(\\nu_2 - \\nu_1)^2$ so that there must be at least $\\Delta\/2$ coordinates $i$ for which $|f_i(Y_{\\tau_\\ell}) - f_i(Y_{\\tau_{\\ell+1}})| \\ge \\gamma$. This implies $\\|f(Y_{\\tau_\\ell}) - f(Y_{\\tau_{\\ell+1}})\\|_2^2 \\ge \\gamma^2 \\cdot \\frac{\\Delta}{2} = \\frac{1}{8}\\gamma^2 (\\nu_2 - \\nu_1)^2 n$, which contradicts (iii) provided we choose $c \\le \\frac{1}{8}(\\nu_2 - \\nu_1)^2$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:ising-lowdeg}]\nThis result follows by combining Theorems~\\ref{thm:ising-ogp-lowdeg} and \\ref{thm:pspin-ogp}.\n\\end{proof}\n\n\n\n\n\\subsection{Stability of Langevin and Gradient Flows}\nLet $U\\in C^\\infty(\\cS_n)$ be some smooth function and for any $\\sigma\\geq0$ we can consider \\emph{Langevin Dynamics with potential $U$ and variance $\\sigma$} to be the strong solution of the stochastic differential equation (in It\\^o form)\n\\[\n\\begin{cases}\ndX_t = \\sigma dB_t - \\nabla U dt\\\\\nX_0 \\sim \\nu,\n\\end{cases}\n\\]\nwhere $B_t$ is spherical Brownian motion, $\\nabla$ is the spherical gradient, and $\\nu\\in\\mathcal{M}_1(\\cS_n)$ is some probability measure on the sphere called \\emph{the initial data}. Note that in the case $\\sigma=0$ this is simply gradient flow for $U$.\n\nWe recall here the following basic fact about the well-posedness of such equations, namely their continuous dependence on the function $U$. In the following, for a vector-valued function $F: \\cS_n\\to T\\cS_n$, we let $\\norm{F}_\\infty$ denote the essential supremum of the norm of $F$ induced by the canonical metric. (Here $T\\cS_n$ denotes the tangent bundle to $\\cS_n$.)\n\\begin{lemma*}\nLet $U,V\\in C^\\infty(\\cS_n)$ and $\\sigma\\geq 0$. Fix $\\nu\\in \\mathcal{M}_1(\\cS_n)$. Let $X^U_t$ and $X^V_t$ denote the corresponding solutions to Langevin dynamics with potentials $U$ and $V$ respectively and with the same variance $\\sigma$ with respect to the same Brownian motion $B_t$. Suppose further that their initial data are the same. 
Then there is a universal $C>0$ such that for any $t>0$\n\begin{equation}\label{eq:well-posed}\n\sup_{s\leq t}\norm{X^U_s - X^V_s}_2 \leq C t e^{Ct \norm{\nabla U}_\infty \vee \norm{\nabla V}_\infty}\norm{\nabla U-\nabla V}_\infty \quad \text{ a.s.,}\n\end{equation}\nwhere $\norm{\cdot}_2$ denotes Euclidean distance in the canonical embedding of $\cS_n\subseteq \RR^n$.\n\end{lemma*}\n\noindent The proof of this result is a standard consequence of Gronwall's inequality and can be found, e.g., in \cite{varadhan,Tes12}.\n\nIn this section, for a $p$-tensor $A$ we will write $A(x_1,\cdots,x_p)$ to denote the action of $A$ on $p$ vectors, i.e., $A(x_1,\cdots, x_p)= \g{A,x_1\otimes\cdots\otimes x_p}.$ Viewing this as a multilinear operator, we denote the operator norm by\n\[\n\norm{A}_{\op} = \sup_{\norm{x_1}_2=\cdots=\norm{x_p}_2=1} A(x_1,...,x_p).\n\]\nAs a consequence of the above, we have the following.\n\n\begin{lemma}\label{lem:langevin-stability-main}\nLet $\delta = n^{-\alpha}$ for some $\alpha>0$ and let $\{\tau_i\}$ denote a partition of $[0,\pi\/2]$ with $\lceil\delta^{-1}\rceil+1$ elements satisfying $|\tau_{i+1}-\tau_i| \leq \delta$. Let $(X^{\tau_i})_i$ denote the family of strong solutions to Langevin dynamics with variance $\sigma\geq0$, potentials $H_n(\cdot;Y_{\tau_i})$, and initial data $\nu\in\mathcal{M}_1(\cS_n)$. Then there is a $C>0$ independent of $n$ such that for any $T>0$\n\[\n\sup_{i}\sup_{s\leq T}\norm{X_s^{\tau_i}-X_s^{\tau_{i+1}}}_2 \leq C T e^{C T} n^{-\alpha}\n\]\nwith probability at least $1-e^{-\Omega(n)}$.\n\end{lemma}\n\begin{proof}\nEvidently, the proof will follow by \eqref{eq:well-posed} upon controlling the gradients of $H_n(\cdot;Y)$. To this end, we see that\n\[\n\nabla H_n(x;Y_\tau) = \frac{1}{n^{\frac{p+1}{2}}}(Y_{\tau}(\pi_x,x,\ldots,x)+\cdots+Y_{\tau}(x,\ldots,x,\pi_x))\n\]\nwhere $\pi_x$ denotes the projection onto $T_x \cS_n$. In particular,\n\[\n\norm{\nabla H_n(x;Y_{\tau})}_2 \leq \frac{p}{n} \norm{Y_{\tau}}_{\op} \leq \frac{p}{n}(\norm{Y}_{\op}+\norm{Y'}_{\op}).\n\]\nBy a standard epsilon-net argument (see, e.g., \cite[Lemma 3.7]{BGJ20}), we have that\n\[\n\norm{Y}_{\op}\leq C\sqrt{n}\n\]\nwith probability $1-e^{-\Omega(n)}$ (while the lemma in~\cite{BGJ20} states the result for the expectation, one can either apply this to the probability by Borell's inequality, or simply note that the penultimate step in that proof is the desired high-probability bound).\nThus after a union bound, with probability $1-\exp(-\Omega(n))$,\n\[\n\sup_{0\leq\tau\leq \pi\/2} \norm{\nabla H_n(\cdot\,;Y_\tau)}_\infty \leq 2C.\n\]\nOn the other hand, in law we have that\n$Y_{\tau_i}-Y_{\tau_{i+1}} = Z$ satisfies\n\[\nZ \stackrel{(d)}{=} Y\sqrt{(\cos(\tau_i)-\cos(\tau_{i+1}))^2+(\sin(\tau_i)-\sin(\tau_{i+1}))^2}.\n\]\nSince both cosine and sine are 1-Lipschitz, we see that the entries of $Z$ are i.i.d.\ and have variance at most $2\delta^2$.\nConsequently, by the same epsilon-net argument, we have with probability $1-O(n^\alpha e^{-c n})$,\n\[\n\max_{i}\norm{\nabla H_n(\cdot\,;Y_{\tau_i})-\nabla H_n(\cdot\,;Y_{\tau_{i+1}})}_2\leq C \delta\n\]\nas desired.\n\end{proof}\n\n\subsection{Failure of Langevin Dynamics}\label{sec:pf-langevin}\n\nWe begin by noting the following concentration result. 
\n\\begin{lemma}\\label{lem:concetration-langevin}\nFix $T\\geq 0$ and $\\sigma\\geq0$. Let $X_T$ denote the solution of Langevin dynamics with potential $H_n$ , variance $\\sigma$, and initial data $\\nu\\in\\cM_1(\\cS_n)$, and let $Q_\\nu$ denote its law conditionally on $Y$. Then we have that there is some $c>0$ such that for every $\\eps>0$\n\\[\n\\PP_Y\\otimes Q_\\nu( \\abs{H_n(X_T;Y)-\\EE H_n(X_T;Y)} \\geq \\eps) \\leq \\exp(-c \\eps^2 n)\n\\]\n\\end{lemma}\n\\begin{proof}\nNote as before that for any two tensors $Y$ and $Y'$, we have\n\\[\n\\norm{\\nabla H_n(\\cdot\\,;Y)-\\nabla H_n(\\cdot\\,;Y')}_2\\leq \\norm{Y-Y'}_{\\op} \\leq \\norm{Y-Y'}_2,\n\\]\nwhere here for a tensor $A$, $\\norm{A}_2$ denotes the square root of the sum of the squares of its entries. Consequently, by \\eqref{eq:well-posed}, the map $Y\\mapsto X_T$ is uniformly $C$-Lipschitz for some $C=C(T)>0$ independent of $n$. The result then follows by Gaussian concentration of measure.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:langevin-main}]\nIn the following, we let $P=\\PP\\otimes Q_\\mu$. Recall the family $(Y_\\tau)$ from \\eqref{eq:Y-tau-def} and $\\cA(Y,Y')$ from \\eqref{eq:interpolated-family-p-spin}. Let $\\delta=n^{-\\alpha}$ for some $\\alpha>0$ and define $(\\tau_i)$ as in Lemma~\\ref{lem:langevin-stability-main}.\nFix an $\\eps>0$ and let $G$ denote the event that the overlap gap property holds for $\\cA(Y,Y')$ with parameters $(E_p(\\cS)-\\eps,\\nu_1,\\nu_2)$\nas well as $\\nu_1$-separation of $H_n(\\cdot\\,;Y_0)$ and $H_n(\\cdot\\,;Y_1)$ above level $E_p(\\cS)-\\eps$. By Theorem~\\ref{thm:pspin-ogp}, this holds for every $\\eps>0$ sufficiently small with probability $1-\\exp(-\\Omega(n))$. \n\n\nLet $X^{\\tau_i}$ denote the solutions to Langevin dynamics\ncorresponding to the potentials $H_n(\\cdot\\,;Y_{\\tau_i})$.\nLet $B_n$ and $\\tilde{B}_n$ denote the bad events\n\\begin{align*}\n\\tilde{B}_n &= \\{\\exists i \\,:\\, H_n(X_T^{\\tau_i};Y_{\\tau_i}) \\geq E_p(\\cS)-\\eps \\}\\\\\nB_n &= \\{ H_n(X_T^{\\tau_i};Y_{\\tau_i}) \\geq E_p(\\cS)- 3\\eps \\; \\forall i\\}.\n\\end{align*}\nLet $E_i(\\eps)$ denote the complement of the event bounded in Lemma~\\ref{lem:concetration-langevin} applied to $X^{\\tau_i}_T$, and let $E(\\eps)= \\cap E_i(\\eps)$ which has probability at least $1-\\exp(-\\Omega(n))$. Note that on $\\tilde{B}_n\\cap E(\\eps)$, we have that $\\E H_n(X^{\\tau_i}_T;Y_{\\tau_i})\\geq E_p(\\cS)-2\\eps$ for some $i$. As the expectation is non-random and independent of $i$, this holds for all $i$. Consequently, $\\tilde{B}_n\\cap E(\\eps)\\subset B_n$. Thus we have $P(\\tilde{B}_n)\\leq P(B_n) + \\exp(-\\Omega(n))$. \n\nSuppose now that the events $B_n$ and $G$ have non-empty intersection. Let us work on this intersection.\nBy $\\nu_1$-separation, recalling the overlap function $R(x,y)=\\abs{\\frac{1}{n}\\g{x,y}}$, we have that \n\\[\nR(X^0_T,X^1_T) \\leq \\nu_1\n\\]\nwhereas $R(X^0,X^0)=1$. On the other hand, by Lemma~\\ref{lem:langevin-stability-main}, it follows that\n\\[\n\\abs{R(X_T^0,X_T^{\\tau_i})-R(X_T^0,X_T^{\\tau_{i+1}})}\\leq \\sqrt{n} C T e^{C T}n^{-\\alpha}.\n\\]\nThus, choosing $\\alpha>1\/2$, we see that for $n$ sufficiently large, there must be some (random) $j$ such that \n\\[\n\\nu_1 < \\abs{R(X_T^0,X_T^{\\tau_j})} <\\nu_2.\n\\]\nThis contradicts the overlap gap property. 
Thus $B_n\\subseteq G^c$.\nConsequently, we have that \n\\[\nP(\\tilde{B}_n)\\leq P(B_n) +e^{-\\Omega(n)} \\leq P(G^c)+e^{-\\Omega(n)}=e^{-\\Omega(n)}.\n\\]\nObserving that $\\tilde{B}_n^c$ is contained in the event we are trying to bound yields the desired result by monotonicity of probabilities.\n\\end{proof}\n\n\\subsection{Proof of Overlap Gap Property}\n\\label{sec:pf-ogp-pspin}\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:pspin-ogp}]\nWe begin with the spherical setting. Let us view $H_n(x;Y)$ as a Gaussian process on $\\cS_n$. It was shown in \\cite[Theorem 3]{ChenSen17} that for any $\\tau,\\eps>0$ with \n$\\tau \\leq \\pi\/2$ there are\n$C,c,\\tilde \\mu>0$ such that with probability at least $1-Ce^{-c n}$, \n\\[\n\\max_{R(x,y)>\\eps} H_n(x;Y_\\tau) + H_n(x,Y)\n<\\max_{x\\in\\cS_n} H_n(x;Y_\\tau)+\\max_{x\\in\\cS_n} H_n(x;Y)- \\tilde \\mu,\n\\]\nso that if both $u,v$ satisfy\n\\begin{equation*}\n \\begin{aligned}\n \\max_{x\\in\\cS_n} H_n(x;Y_\\tau) - \\tilde \\mu \/2 &\\leq H_n(u,Y_\\tau)\\\\\n \\max_{x\\in\\cS_n} H_n(x;Y) - \\tilde \\mu \/2 &\\leq H_n(v;Y)\n \\end{aligned}\n\\end{equation*}\nthen it must be that $|R(u,v)|<\\eps$. (The result is stated there for $\\tau <\\pi\/2$, but can be extended to the easier case of $\\tau = \\pi\/2$. See Remark~\\ref{rem:tau} below.) One can then replace the maximum on the right-hand side of the above upon recalling that by Borell's inequality, \n\\[\n\\PP( \\abs{ \\max_{x\\in \\cS_n} H_n(x;Y) -\\E \\max_{x\\in\\cS_n} H_n(x;Y)} \\geq \\eps)\\leq C\\exp(-cn\\eps^2)\n\\]\nfor some $C,c>0$. In particular, upon recalling that $\\E\\max_{\\cS_n} H_n(x;Y)\\to E_p(\\cS)$ \\cite{JagTob17}, for $n$ sufficiently large we obtain\n\\begin{equation}\\label{eq:disorder-overlap}\n \\begin{aligned}\n E_p(\\cS) - \\tilde \\mu \/4 &\\leq H_n(u,Y_\\tau)\\\\\n E_p(\\cS) - \\tilde \\mu \/4 &\\leq H_n(v;Y).\n \\end{aligned}\n\\end{equation}\nOn the other hand, as shown in \\cite[Theorem 6]{AuffChen18}, \\eqref{eq:disorder-overlap} holds with $\\tau =0$ as well, except now we have that the inner products of the near-maximal $u,v$ must satisfy $R(u,v)\\in[0,\\nu_1]\\cup[\\nu_2,1]$ for some $0\\leq\\nu_1<\\nu_2\\leq1$. By combining these results we can obtain the overlap gap \nproperty with parameters $(E_p(\\cS)-\\tilde\\mu\/4,\\nu_1,\\nu_2)$ \nby applying the discretization argument from in \n\\cite{gamarnik2019overlap}. Note that, \n\\eqref{eq:disorder-overlap} in the case $\\tau = \\pi\/2$ implies \n$\\eps$-separation below level $E_p(\\cS)-\\tilde \\mu\/4$. As $\\eps$ was arbitrarily small we can take $\\eps=\\nu_1$.\n\nAfter recalling that $\\E \\max_{\\Sigma_n} H_n(x;Y)\\to E_p(\\Sigma)$ \\cite{Ton02}, we see that the second result is a restatement of \\cite[Theorem 3.4]{gamarnik2019overlap} after applying Borell's inequality as in \\eqref{eq:disorder-overlap}.\n\\end{proof}\n\\begin{remark}\\label{rem:tau}\nWhile the result of \\cite[Theorem 3]{ChenSen17} is only stated for $0<\\tau<\\pi\/2$, it easily extends to the case $\\tau = \\pi\/2$ by differentiating in the Lagrange multiplier term $\\lambda$ in the ``RSB bound'' from \\cite[Eq.\\ 59]{ChenSen17}. \nFor the reader's convenience, we sketch this change. We follow here the notation of \\cite{ChenSen17}. By comparing to \\cite[Eq.\\ 78]{ChenSen17}, one sees that $E(0,u,\\lambda)$ from \\cite[Eq.\\ 61]{ChenSen17} satisfies $E(0,u,0)= 2 E_p(\\cS) \\, (= 2GS )$. 
On the other hand for $u>0$ we have $\\partial_\\lambda E(0,u,0) = - u<0$, from which it follows that $\\min_\\lambda T(0,u,\\lambda)<2 E_p(\\cS)$ as desired. The case $u<0$ follows by symmetry.\n\\end{remark}\n\n\n\n\\section{Proofs for Maximum Independent Set}\n\n\n\\subsection{Low-Degree Polynomials are Stable}\n\nIn this section we prove a key structural property (Theorem~\\ref{thm:binary-stable}) of low-degree polynomials \non the Boolean hypercube. Roughly speaking, with nontrivial probability, a low-degree polynomial will not change its output significantly at any step when its input coordinates are resampled one at a time.\n\nThroughout this section, we work with the Boolean hypercube $\\{0,1\\}^m$ and let $Y=(Y_1,...,Y_m)$\ndenote a Bernoulli random vector, $Y\\in\\{0,1\\}^m$, with independent entries that satisfy\n\\[\nP(Y_i = 1) = p_i,\n\\]\nfor some $0 < p_i < 1$. We view the hypercube as a graph where the vertex set is $V = \\{0,1\\}^m$ and the edge set consists of those edges $(x,y)$ such that $x$ and $y$ differ in exactly one coordinate.\n\nWe introduce the following local regularity property of (non-random) functions $\\{0,1\\}^m \\to \\RR^n$.\n\n\\begin{definition}\nLet $f:\\{0,1\\}^m\\to\\RR^n$ and let $c>0$. \nAn edge $(x,y)$ in $\\{0,1\\}^m$ is said to be \\emph{$c$-bad}\nfor $f$ if \n\\[\n\\|f(x)-f(y)\\|_2^2 \\geq c\\, \\E \\|f(Y)\\|_2^2. \n\\]\n\\end{definition}\n\n\\noindent For $x,y \\in \\{0,1\\}^m$, recall the definition of the path $x \\mapsto y$ (Definition~\\ref{def:path}), which naturally corresponds to a walk on the edges of the hypercube graph. We now turn to the main result of this section, which shows that for a low-degree polynomial, a random path has no bad edges with nontrivial probability.\n\n\\begin{theorem}\\label{thm:binary-stable}\nLet $Y$ be a Bernoulli random vector with $P(Y_i=1)=p_i$, let $Y'$ be an independent copy of $Y$, and let $\\lambda = \\min_i (p_i\\wedge 1-p_i)$. For any $c>0$ and any (deterministic) degree-$D$ polynomial $f: \\{0,1\\}^m \\to \\RR^n$ we have\n\\[\nP( Y \\mapsto Y' \\text{ has no } c\\text{-bad edge for } f ) \\geq \\lambda^{4D\/c}.\n\\]\n\\end{theorem}\n\n\\noindent The key steps in the proof of Theorem~\\ref{thm:binary-stable} are contained in the following two lemmas. Throughout the following, for a point $x\\in\\{0,1\\}^m$, we let $x_{-i}$ denote the all-but-$i$th coordinates of $x$, and let $q(x)=P(x\\mapsto Y' \\text{ has no } c\\text{-bad edge})$. \n\n\\begin{lemma}\\label{lem:total-inf}\nLet $f: \\{0,1\\}^m \\to \\RR^n$ be a polynomial of degree $D$ and let $Y$ be a Bernoulli random vector with $P(Y_i=1)=p_i$.\nLet $B_i$ denote the event that the edge corresponding to flipping the $i$th coordinate of $Y$ is $c$-bad for $f$. Then\n\\begin{equation}\\label{eq:total-inf}\n\\frac{c}{2} \\sum_{i=1}^m (p_i \\wedge 1-p_i) P(B_i) \\le D.\n\\end{equation}\n\\end{lemma}\n\n\\begin{lemma}\\label{lem:potential}\nIf $Y$ is a Bernoulli random vector with $P(Y_i = 1)=p_i$, then \n\\begin{equation}\\label{eq:potential}\n-\\E \\log q(Y) \\le \\sum_{i=1}^m S(p_i) P(B_i)\n\\end{equation}\nwhere $S$ denotes the binary entropy $S(p) = -p \\log p - (1-p) \\log(1-p)$.\n\\end{lemma}\n\n\\noindent Intuitively,~\\eqref{eq:total-inf} states that if $D$ is small, there cannot be too many bad edges. The proof will be based on the fact that low-degree polynomials have small \\emph{total influence}. Intuitively,~\\eqref{eq:potential} states that if most paths contain a bad edge then there must be many bad edges in total. 
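As a concrete illustration of how these two statements combine, consider the uniform case $p_i = 1\/2$ for all $i$, so that $\lambda = 1\/2$ and $S(p_i) = \log 2$. Then \eqref{eq:total-inf} says that the expected number of $c$-bad directions obeys\n\[\n\sum_{i=1}^m P(B_i) \le \frac{4D}{c},\n\]\nwhile Theorem~\ref{thm:binary-stable} asserts that a uniformly random path avoids all $c$-bad edges with probability at least $2^{-4D\/c}$, a bound that does not deteriorate with the dimension $m$.\n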
The actual definition of ``bad'' will not be used in the proof of the latter lemma. We defer the proofs of these lemmas momentarily.\n\nWe first show how to deduce Theorem~\ref{thm:binary-stable} from the above lemmas, and then we prove the lemmas.\n\n\begin{proof}[Proof of Theorem~\ref{thm:binary-stable}]\n\nIf $p \le 1\/2$ then $-p \log p \ge -(1-p)\log(1-p)$ and so $S(p) \le -2p \log p$. If instead $p > 1\/2$ then $-p \log p \le -(1-p)\log(1-p)$ and so $S(p) \le -2(1-p) \log(1-p)$. Therefore, in either case we have\n\begin{equation}\label{eq:H-bound}\nS(p_i) \le 2 (p_i \wedge 1-p_i) \log(1\/\lambda).\n\end{equation}\n\n\noindent We now have\n\begin{align*}\n- \log \E\, q(Y) \n\le -\E \log q(Y)\n\le \sum_{i=1}^m S(p_i) P(B_i) \n\le 2 \log(\frac{1}{\lambda}) \sum_{i=1}^m (p_i \wedge 1-p_i) P(B_i) \le 2 \log(\frac{1}{\lambda}) \cdot \frac{2D}{c} \n\end{align*}\nwhere in the first inequality we used Jensen's inequality, in the second \eqref{eq:potential}, in the third \eqref{eq:H-bound}, and in the last \eqref{eq:total-inf}.\nThe result follows by re-arrangement.\n\end{proof}\n\nBefore turning to the proof of the above lemmas, let us pause and recall here some basic facts from Fourier analysis on the Boolean cube. For more on this see \cite{o-book}. For $i \in [m]$, let $\phi_i(Y_i) = \frac{Y_i - p_i}{\sqrt{p_i(1-p_i)}}$, and for $S \subseteq [m]$, let $\phi_S(Y) = \prod_{i \in S} \phi_i(Y_i)$.\nRecall that the functions $\{\phi_S\}_{S\subseteq[m]}$ form an orthonormal basis for $L^2(P)$. For a function $f$ we denote its Fourier coefficients by $\hat f(S) = \E f(Y) \phi_S(Y)$.\nObserve that Parseval's theorem in this setting reads: for a function $f:\{0,1\}^m\to\R$, we have\n\[\n\E[f(Y)^2] = \sum_{S\subseteq[m]} \hat f(S)^2.\n\]\nFor a function $f$ we denote the \emph{total influence} by \n\[\nI(f) = \sum_{S\subseteq [m]}\abs{S} \cdot \hat f (S)^2.\n\]\nFinally, consider the \emph{Laplacian} operator $L_i$, defined by \n\[\nL_i f = \sum_{S\ni i} \hat f(S) \phi_S,\n\]\nwhich can be thought of as ``the part of $f$ that depends on $i$''.\n\n\begin{proof}[Proof of Lemma~\ref{lem:total-inf}]\nLet us begin by fixing an entry $f_j$ of $f$. Since $f_j$ is of degree at most $D$, its spectrum is such that $\hat f_j(S) = 0$ for any $S$ with $\abs{S}>D$. As such,\n\begin{align*}\nD\,\E[f_j(Y)^2] = D \sum_{\abs{S}\leq D} \hat f_j(S)^2 &\geq I(f_j) = \sum_{i}\E(L_i f_j(Y))^2\\\n&= \sum_{i} \E [(1-p_i) (L_i f_j(Y_{-i}[0]))^2 +p_i (L_i f_j(Y_{-i}[1]))^2]\n\end{align*}\nwhere $Y_{-i}[\ell] \in \{0,1\}^m$ is obtained from $Y_{-i}$ by setting the $i$th coordinate to $\ell$.\nUsing that for any $a,b\in\R$ and $p\in[0,1]$, $(p\wedge 1-p) (a-b)^2\leq 2((1-p)a^2 +p b^2),$ we see that the above display is bounded below by \n\[ \frac{1}{2}\sum_{i} (p_i \wedge 1-p_i)\E(L_if_j(Y_{-i}[1])-L_if_j(Y_{-i}[0]))^2\n= \frac{1}{2}\sum_i (p_i \wedge 1-p_i) \E(f_j(Y_{-i}[1])-f_j(Y_{-i}[0]))^2. \]\nSumming over $j$ and applying the definition of $c$-bad edge, we obtain\n\begin{align*}\nD\,\E\norm{f(Y)}_2^2 &\geq \frac{1}{2}\sum_i (p_i \wedge 1-p_i) \E\|f(Y_{-i}[1])-f(Y_{-i}[0])\|_2^2\\\n&\geq \frac{1}{2} \sum_i (p_i\wedge 1-p_i) P(B_i) \,c \,\E\|f(Y)\|_2^2.\n\end{align*}\nCancelling the net factor of $\E\norm{f(Y)}_2^2$ yields the result.\n\end{proof}\n\n\begin{proof}[Proof of Lemma~\ref{lem:potential}]\nProceed by induction on $m$. 
The base case $m = 1$ is straightforward: if the single edge is bad then both sides of~\eqref{eq:potential} equal $S(p_1)$, and otherwise both sides equal 0.\n\nFor the inductive step, let $q_0(Y_{-1})$ denote the probability over $Y'_{-1}$ that the path $Y_{-1}[0] \mapsto Y'_{-1}[0]$ has no $c$-bad edge. Similarly define $q_1(Y_{-1})$ for the path $Y_{-1}[1] \mapsto Y'_{-1}[1]$. (Note that these are probabilities on $\{0,1\}^{m-1}$.) \nIntegrating in the first coordinate, we have by independence,\n\begin{equation}\label{eq:pot-exp}\n-\E \log q(Y) = -\E[(1-p_1) \log q(Y_{-1}[0]) + p_1 \log q(Y_{-1}[1])]\n\end{equation}\nwhere\n\[ \nq(Y_{-1}[0]) = (1-p_1)q_0(Y_{-1}) + p_1 q_1(Y_{-1}) \One_{ B_1^c} \]\nand\n\[ q(Y_{-1}[1]) = (1-p_1)q_0(Y_{-1}) \One_{B_1^c} + p_1 q_1(Y_{-1}). \]\nIf $B_1^c$ holds (i.e., the edge corresponding to flipping the first coordinate is ``good'') then the expression inside the expectation in~\eqref{eq:pot-exp} is\n\begin{align*}\n(1-p_1) \log q(Y_{-1}[0]) + p_1 \log q(Y_{-1}[1]) &= \log [(1-p_1)q_0(Y_{-1}) + p_1 q_1(Y_{-1})] \\\n&\ge (1-p_1) \log q_0(Y_{-1}) + p_1 \log q_1(Y_{-1})\n\end{align*}\nusing concavity of $t \mapsto \log t$. If instead $B_1$ holds (i.e., the edge is bad),\n\begin{align*}\n(1-p_1) \log q(Y_{-1}[0]) + p_1 \log q(Y_{-1}[1]) &= (1-p_1) \log[(1-p_1) q_0(Y_{-1})] + p_1 \log[p_1 q_1(Y_{-1})] \\\n&= -S(p_1) + (1-p_1) \log q_0(Y_{-1}) + p_1 \log q_1(Y_{-1}).\n\end{align*}\nPutting it all together,\n\begin{align}\n-\E \log q(Y) &\le -\E[-S(p_1)\One_{B_1} + (1-p_1) \log q_0(Y_{-1}) + p_1 \log q_1(Y_{-1})] \nonumber \\\n&= S(p_1) P(B_1) - (1-p_1) \E \log q_0(Y_{-1}) - p_1 \E\log q_1(Y_{-1}).\n\label{eq:pot-final}\n\end{align}\nBy the induction hypothesis,\n\[ -\E \log q_0(Y_{-1}) \le \sum_{i > 1} S(p_i) P(B_i\vert Y_1 =0). \] \nSimilarly,\n\[ -\E \log q_1(Y_{-1}) \le \sum_{i > 1} S(p_i) P(B_i\vert Y_1 = 1). \]\nCombining these yields \n\[ - (1-p_1) \E \log q_0(Y_{-1}) - p_1 \E \log q_1(Y_{-1}) \le \sum_{i > 1} S(p_i) P(B_i). \]\nPlugging this into~\eqref{eq:pot-final} completes the proof.\n\end{proof}\n\n\n\n\n\subsection{Failure of Low-Degree Algorithms}\n\label{sec:pf-lowdeg-indep}\n\nThis section is devoted to proving Theorem~\ref{thm:MIS-main}. We start with the following result which shows that together, OGP and separation imply that low-degree polynomials fail to find large independent sets. Recall the family of functions $\cF(Y,Y')$ from~\eqref{eq:interpolated-family-MIS}.\n\begin{theorem}\label{thm:indep-ogp-lowdeg}\nSuppose $d \le n\/2$ and $\nu_2 n \le k$. Suppose that with probability at least $1-\Delta$ when $Y,Y' \sim G(n,d\/n)$ independently, $\cF(Y,Y')$ has $(k,\nu_1,\nu_2)$-OGP, and $F(\cdot\,;Y)$ and $F(\cdot\,;Y')$ are $\nu_1$-separated above $k$. If\n\begin{equation}\label{eq:Delta-cond}\n\Delta + 3\delta(m+1) < \exp\left(-\frac{96 \gamma D k \log(n\/d)}{(\nu_2 - \nu_1)^2 n}\right)\n\end{equation}\nthen for any $\eta \le \frac{1}{4}(\nu_2 - \nu_1)^2$, there is no random degree-$D$ polynomial that $(k,\delta,\gamma,\eta)$-optimizes~\eqref{eq:max-indep}.\n\end{theorem}\n\n\begin{proof}\nAssume on the contrary that $f$ $(k,\delta,\gamma,\eta)$-optimizes maximum independent set. 
Let $A(Y,\\omega)$ denote the ``failure'' event\n\\[\nA(Y,\\omega) =\\{|V_f^\\eta(Y,\\omega)| < k\\}.\n\\]\nAs in the proof of Theorem~\\ref{thm:spherical-ogp-lowdeg}, we can reduce to the case where $f$ is deterministic: there exists $\\omega^* \\in \\Omega$ such that the resulting deterministic function $f(\\cdot) = f(\\cdot,\\omega^*)$ satisfies $\\EE_Y \\|f(Y)\\|_2^2 \\le 3\\gamma k$ and $\\PP_Y(A(Y,\\omega^*)) \\le 3\\delta$.\n\n\nLet $Y,Y' \\sim G(n,d\/n)$ independently, and let $Y = Z_0 \\to Z_1 \\to \\cdots \\to Z_m = Y'$ be the path $Y \\mapsto Y'$. \nLet $S_j = V_f^\\eta(Z_j)$. Consider the following events.\n\\begin{enumerate}\n \\item [(i)] The family $\\cF(Y,Y')$ has the $(k,\\nu_1,\\nu_2)$-OGP on $\\{0,1\\}^m$ and the functions $F(\\cdot\\,;Y)$ and $F(\\cdot\\,;Y')$ are $\\nu_1$-separated above $k$.\n \\item [(ii)] For all $j \\in \\{0,1,\\ldots,m\\}$, $f$ succeeds on input $Z_j$, i.e., the event $A(Z_j,\\omega^*)^c$ holds.\n\\end{enumerate}\nWith probability at least $1 - \\Delta - 3\\delta(m+1)$, the events (i) and (ii) occur simultaneously. We will show that when this happens, the path $Y \\mapsto Y'$ must contain a $c$-bad edge (for a particular choice of $c$). This will allow us to derive a contradiction with Theorem~\\ref{thm:binary-stable}.\n\nToward this end, suppose (i) and (ii) both occur. Since $\\nu_2 n \\le k$, it follows that some $j$ must cross the OGP gap in the sense that $|S_0 \\cap S_j| \\ge \\nu_2 n$ and $|S_0 \\cap S_{j+1}| \\le \\nu_1 n$. Thus, letting $\\One_{S} \\in \\{0,1\\}^n$ be the indicator of $S$,\n\\[ (\\nu_2 - \\nu_1)n \\le |\\langle \\One_{S_0}, \\One_{S_j} - \\One_{S_{j+1}}\\rangle| \\le \\|\\One_{S_0}\\|_2 \\cdot \\|\\One_{S_j} - \\One_{S_{j+1}}\\|_2 = \\sqrt{|S_0|} \\cdot \\sqrt{|S_j \\triangle S_{j+1}|} \\le \\sqrt{n} \\cdot \\sqrt{|S_j \\triangle S_{j+1}|} \\]\nwhere $\\triangle$ denotes symmetric difference.\nFrom the definition of $V_f^\\eta$, there must be at least $|S_j \\triangle S_{j+1}| - 2 \\eta n$ coordinates $i$ for which $|f_i(Z_j) - f_i(Z_{j+1})| \\ge 1\/2$. This means\n\\[ \\|f(Z_j) - f(Z_{j+1})\\|_2^2\n\\ge \\frac{1}{4}(|S_j \\triangle S_{j+1}| - 2\\eta n)\n\\ge \\frac{1}{4}\\left[(\\nu_2 - \\nu_1)^2 n - 2 \\eta n\\right]\n\\ge \\frac{n}{8}(\\nu_2 - \\nu_1)^2. \\]\nprovided $\\eta \\le \\frac{1}{4}(\\nu_2 - \\nu_1)^2$. Since $\\EE_Y \\|f(Y)\\|_2^2 \\le 3\\gamma k$, we now have that $(Z_j,Z_{j+1})$ is a $c$-bad edge for $c = (\\nu_2 - \\nu_1)^2 n\/(24 \\gamma k)$.\n\nApplying Theorem~\\ref{thm:binary-stable} yields\n\\[ \\Delta + 3\\delta(m+1) \\ge (d\/n)^{4D\/c} = \\exp\\left(-\\frac{96 \\gamma D k \\log(n\/d)}{(\\nu_2-\\nu_1)^2 n}\\right). \\]\nThis contradicts~\\eqref{eq:Delta-cond}, completing the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:MIS-main}]\nLet $\\alpha > 1 + 1\/\\sqrt{2}$, and let $0 \\le \\tilde\\nu_1 < \\tilde\\nu_2 \\le 1$ be the constants from Theorem~\\ref{thm:ogp-graph}. Provided $d$ is sufficiently large, Theorem~\\ref{thm:ogp-graph} allows us to apply Theorem~\\ref{thm:indep-ogp-lowdeg} with parameters $k = \\alpha \\frac{\\log d}{d} n$, $\\nu_j = \\tilde\\nu_j \\frac{k}{n}$ (for $j = 1,2$), $\\Delta = \\exp(-\\Omega(n))$. This requires $\\eta \\le \\frac{1}{4}(\\nu_2 - \\nu_1)^2 = \\frac{\\alpha^2\\log^2 d}{4d^2}(\\tilde\\nu_2 - \\tilde\\nu_1)^2$ and gives the desired result provided\n\\[ \\exp(-\\Omega(n)) + 3\\delta\\left(\\binom{n}{2}+1\\right) < \\exp\\left(-\\frac{96 \\gamma D d \\log(n\/d)}{(\\tilde\\nu_2-\\tilde\\nu_1)^2 \\alpha \\log d}\\right). 
\\]\nTo satisfy this (for $n$ sufficiently large), it is sufficient to have\n\\[ \\delta < \\frac{1}{3n^2}\\left[\\exp(-\\tilde C_1 \\gamma D \\log n) - \\exp(-\\tilde C_2 n)\\right] \\]\nwhere $\\tilde C_1, \\tilde C_2 > 0$ are constants depending on $\\alpha, d$. It is in turn sufficient to have $\\tilde C_1 \\gamma D \\log n \\le \\frac{1}{2} \\tilde C_2 n$ and $\\delta < \\frac{1}{4n^2} \\exp(-\\tilde C_1 \\gamma D \\log n) = \\exp(-\\tilde C_1 \\gamma D \\log n - \\log 4 - 2 \\log n)$. This completes the proof with $C_2 = \\frac{\\tilde C_2}{2\\tilde C_1}$ and $C_1 = \\tilde C_1 + 3$.\n\\end{proof}\n\n\n\n\n\\subsection{Proof of Overlap Gap Property}\n\\label{sec:pf-ogp-indep}\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:ogp-graph}]\nFix integers $k_1,k_2,\\ell$ satisfying $k_1 \\ge k$, $k_2 \\ge k$, and $1 \\le \\ell \\le k$. Fix $j_1,j_2 \\in \\{0,1,\\ldots,m\\}$. Let $T(k_1,k_2,\\ell,j_1,j_2)$ denote the expected number of ordered pairs $(S_1,S_2)$ where $S_1$ is an independent set in $Z_{j_1}$ with $|S_1| = k_1$, $S_2$ is an independent set in $Z_{j_2}$ with $|S_2| = k_2$, and $|S_1 \\cap S_2| = \\ell$ (where the expectation is over $Y,Y'$). Define $\\alpha_1, \\alpha_2, \\beta$ by the relations $k_1 = \\alpha_1 \\frac{\\log d}{d} n$, $k_2 = \\alpha_2 \\frac{\\log d}{d} n$, and $\\ell = \\beta \\frac{\\log d}{d} n$. Restrict to the case $\\delta \\le \\beta \\le \\alpha-\\delta$ for an arbitrary but fixed constant $\\delta > 0$ (which may depend on $\\alpha$ but not $d$); we will show an interval of forbidden overlaps within this interval for $\\beta$. Note that $\\alpha_1 \\ge \\alpha$ and $\\alpha_2 \\ge \\alpha$, and we can assume $\\alpha_1 \\le 2+\\delta$ and $\\alpha_2 \\le 2+\\delta$ since (for sufficiently large $d$) there are no independent sets of size exceeding $(2+\\delta)\\frac{\\log d}{d} n$ with high probability. We have\n\\begin{equation}\\label{eq:TE}\n \\begin{aligned}\nT(k_1,k_2,\\ell,j_1,j_2) &= \\binom{n}{\\ell}\\binom{n-\\ell}{k_1-\\ell}\\binom{n-k_1}{k_2-\\ell} (1-d\/n)^E \\\\\n&\\le \\binom{n}{\\ell}\\binom{n}{k_1-\\ell}\\binom{n}{k_2-\\ell} (1-d\/n)^E\n\\end{aligned}\n\\end{equation}\nwhere $E \\ge \\binom{k_1}{2} + \\binom{k_2}{2} - \\binom{\\ell}{2}$ (the worst case being $j_1 = j_2$). Using the standard bounds ${n \\choose k} \\le (\\frac{ne}{k})^k$ (for $1 \\le k \\le n$) and $\\log(1+x) \\le x$ (for $x > -1$),\n\\begin{align*}\nT(k_1,k_2,\\ell,j_1,j_2) &\\le \\exp\\left(\\ell \\log\\frac{ne}{\\ell} + (k_1 - \\ell)\\log \\frac{ne}{k_1-\\ell} + (k_2 - \\ell)\\log \\frac{ne}{k_2-\\ell} - E \\frac{d}{n}\\right) \\\\\n&\\le \\exp\\left[\\frac{\\log^2 d}{d} n \\left(\\beta + (\\alpha_1-\\beta) + (\\alpha_2-\\beta) - \\frac{1}{2}(\\alpha_1^2 + \\alpha_2^2 - \\beta^2) + \\varepsilon_d + o(1)\\right)\\right] \\\\\n&= \\exp\\left[\\frac{\\log^2 d}{d} n \\left(\\alpha_1 - \\frac{1}{2} \\alpha_1^2 + \\alpha_2 - \\frac{1}{2} \\alpha_2^2 - \\beta + \\frac{1}{2}\\beta^2 + \\varepsilon_d + o(1)\\right)\\right]\n\\end{align*}\nwhere $\\varepsilon_d \\to 0$ as $d \\to \\infty$. Since $\\alpha_1 \\ge \\alpha > 1 + 1\/\\sqrt{2}$, we have $\\alpha_1 - \\frac{1}{2} \\alpha_2^2 < 1\/4$ and likewise for $\\alpha_2$. Note that $\\beta \\mapsto \\beta - \\frac{1}{2} \\beta^2$ has maximum value $1\/2$ at $\\beta = 1$. 
Thus if we choose $\\delta > 0$ small enough (depending on $\\alpha$ but not $d$), for any $\\beta \\in [1-\\delta,1+\\delta]$ and any $\\alpha_1 \\ge \\alpha$, $\\alpha_2 \\ge 2$, we have\n\\[ \\alpha_1 - \\frac{1}{2} \\alpha_1^2 + \\alpha_2 - \\frac{1}{2} \\alpha_2^2 - \\beta + \\frac{1}{2}\\beta^2 \\le -\\delta, \\]\nimplying $T(k_1,k_2,\\ell,j_1,j_2) \\le \\exp(-\\Omega(n) (\\delta - \\varepsilon_d - o(1)))$, which is $\\exp(-\\Omega(n))$ for sufficiently large $d$. Accordingly, let $\\nu_1 = (1-\\delta)\/\\alpha$ and $\\nu_2 = (1+\\delta)\/\\alpha$. We now have that OGP (with the desired parameters) holds with high probability, using Markov's inequality and a union bound over the $\\le n^7$ possible values for $(k_1, k_2, \\ell, j_1, j_2)$.\n\nIt remains to show $\\nu_1$-separation, which pertains to the case $j_1 = 0$, $j_2 = m$. In this case,~\\eqref{eq:TE} holds with the stronger statement $E = \\binom{k_1}{2} + \\binom{k_2}{2}$. As a result, the expression in~\\eqref{eq:TE} is non-increasing in $\\beta$ (provided $\\beta \\ge \\delta$ and $d$ is sufficiently large). By the above argument, we can again conclude $T(k_1,k_2,\\ell,0,m) \\le \\exp(-\\Omega(n))$ but now under the weaker condition $\\beta \\ge 1-\\delta$ in place of $\\beta \\in [1-\\delta,1+\\delta]$. This completes the proof.\n\\end{proof}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\n\nConformal field theories describe very special points in the space of quantum field theories\nthat seem to provide unique views into non-perturbative dynamics through a variety of rather\ncomplementary techniques, such as holography, integrability, localization and the conformal\nbootstrap. One of the principal analytical tools for conformal field theory are conformal\npartial wave (or block) expansions that were proposed early on in \\cite{Ferrara:1973vz}.\nThe role they play in the study of models with conformal symmetry is very similar to the\nrole of Fourier analysis in systems with translational symmetry. While conformal blocks\nare entirely determined by kinematics, they allow to separate very neatly the dynamical\nmeat of a theory from its kinematical bones. For example, an $N$-point function of local\noperators in a conformal field theory can be a very complicated object. If expanded in\nconformal blocks, however, the coefficients factorize into a set of three-point couplings,\ni.e.\\ most of the complicated dependence on the insertion points resides in the kinematical\nskeleton of a conformal field theory. This is the reason conformal blocks expansions\nare so important.\n\nConformal blocks for four-point functions of local operators in bosonic conformal field\ntheories are relatively well studied by now, see e.g. \\cite{Dolan:2000ut,Dolan:2003hv,\nDolan:2011dv,Costa:2011dw,SimmonsDuffin:2012uy,Penedones:2015aga,Hogervorst:2013sma,\nEcheverri:2016dun,Schomerus:2016epl,Karateev:2017jgd,Isachenkov:2017qgn,Dyer:2017zef,Erramilli:2019njx,Fortin:2019fvx,Fortin:2019dnq,\nFortin:2020ncr} and references therein. On the other hand, while we know of many\nexamples of such theories in $d=3$ dimensions, most conformal field theories in\n$d\\geq 4$ seem to possess supersymmetry. The enhancement from conformal to superconformal\nsymmetry should lead to simplifications, at least once the kinematical aspects are well\nunder control. This, however, is not yet the case. 
In fact, while four-point blocks of half-BPS operators or the superprimary components of more general supermultiplets have been constructed and applied, see e.g. \cite{Dolan:2001tt,Dolan:2004mu,Nirschl:2004pa,\nPoland:2010wg,Fortin:2011nq,Fitzpatrick:2014oza,Khandker:2014mpa,Bobev:2015jxa,Bissi:2015qoa,\nDoobary:2015gia,Lemos:2015awa,Liendo:2016ymz,Lemos:2016xke,Chang:2017xmr,Bobev:2017jhk,\nLiendo:2018ukf,Berkooz:2014yda,Li:2016chh,Li:2017ddj,Gimenez-Grau:2019hez}, relatively little is actually known about blocks and block expansions for more generic external multiplets that span long(er) representations of the superconformal algebra. On the other hand, it has been shown in \cite{Cornagliotto:2017dup} that the bootstrap with long multiplets is significantly more constraining on CFT data than the bootstrap with e.g. external BPS operators, see also \cite{Kos:2018glc}. This provides strong motivation to investigate blocks and crossing symmetry for long multiplets, which is the main goal of our work.\n\medskip\n\nIn order to explain the main results of this paper, let us briefly review a few basic facts about conformal partial wave expansions in bosonic conformal field theories. We start from some four-point correlator $G(x_i)$ with its full dependence on the insertion points $x_i$ of the fields. As is well known, conformal symmetry implies that $G(x_i)$ is fully determined by some function of the two cross ratios $u,v$ that one can form from four points in $\mathbb{R}^d$. More precisely, it is possible to write the correlation function $G$ as\n\begin{equation}\nG(x_i) = \Omega(x_i) g(u,v) \ .\n\end{equation}\nWe stress that such a behavior is not restricted to scalar correlation functions. If the fields carry spin, then $G$ takes values in the space of polarizations of the four fields. The function $g$, on the other hand, takes values in the space of four-point tensor structures whose dimension is smaller than that of the space of polarizations, in general, at least for $d > 3$. Hence, one should think of $\Omega$ as a rectangular matrix. We shall refer to such a matrix-valued function $\Omega$ of the insertion points as a four-point \textit{tensor factor}. In a sense that will become clear below, it combines all four-point tensor structures into one single object $\Omega$. Many authors have studied tensor structures for spinning four-point functions in conformal field theories, see e.g. \cite{Osborn:1993cr,Costa:2011mg,Costa:2011dw,Kravchuk:2016qvl,Cuomo:2017wme,Karateev:2018oml,\nKarateev:2019pvw}.\n\nThe tensor factor $\Omega(x_i)$ is restricted but not determined by conformal symmetry. In fact, there is some obvious `gauge' freedom, associated with matrix-valued functions $\zeta(u,v)$, that allows one to move contributions back and forth between the tensor factor $\Omega$ and the function $g(u,v)$, i.e.\ the gauge symmetry acts as $(\Omega,g) \rightarrow (\Omega \zeta^{-1}, \zeta g)$. The function $g$ of the cross ratios may be expanded in terms of conformal partial waves which, after the influential work of Dolan and Osborn \cite{Dolan:2000ut,Dolan:2003hv}, are characterised as eigenfunctions of the so-called Casimir differential equations. The form of these equations, however, depends on the gauge choice that is made when splitting $G$ into $\Omega$ and $g$. For four-point functions of identical scalar fields of weight $\Delta_0$, for example, Dolan and Osborn chose $\Omega_s = x_{12}^{-2\Delta_0} x^{-2\Delta_0}_{34}$. 
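For later reference, and in the (standard) conventions we assume here, $u$ and $v$ denote the conformal cross ratios\n\[\nu = \frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2}\ , \qquad v = \frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2}\ , \qquad x_{ij} = x_i - x_j\ .\n\]\n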
Note that this factor $\\Omega =\n\\Omega_s$ also depends on a split of the four points into two sets of two, a choice usually\nreferred to as a channel. Here we have displayed the factor $\\Omega$ for the so-called\n$s$-channel. The $t$-channel is obtained by exchanging the fields inserted at $x_2$ and\n$x_4$. With their pick of $\\Omega_s$, Dolan and Osborn worked out the associated\nCasimir differential equation for the function $g_s$ and similarly for $g_t$. Solutions\nof these Casimir equations provided them with a set of blocks $g_{s\/t}^{\\Delta, l}(u,v)$\nin which one can then expand $g_s$ and $g_t$\n\\begin{equation}\nG(x_i) = \\Omega_s(x_i) \\sum p_{\\Delta,l} g^{\\Delta,l}_s (u,v) =\n\\Omega_t(x_i) \\sum p_{\\Delta,l} g^{\\Delta,l}_t (u,v) \\ .\n\\end{equation}\nThe equality between the first and second sum is the famous crossing symmetry equation.\nAn important observation is that writing this equation does actually not require a complete\nknowledge of the tensor factors. It is sufficient to know the ratio of the $s$- and\n$t$-channel $\\Omega$\n$$\nM(u,v) = \\Omega_t^{-1}(x_i) \\Omega_s (x_i) = \\left(\\frac{v}{u}\\right)^{\\Delta_0}\\ ,\n$$\nwhich is a function of the two cross ratios only. We call this important object $M$\nthe \\textit{crossing factor} $M$. In the case of spinning fields the crossing factor\nbecomes a matrix. This ratio of $s$- and $t$-channel tensor factors is not to be\nconfused with the crossing or fusing matrix of the conformal group. While the\ncrossing factor relates the $s$- and $t$-channel tensor factors, the crossing matrix\nrelates the conformal blocks in the two channels by providing the expansion\ncoefficients of $s$-channel blocks in terms of $t$-channel ones, see\n\\cite{Liu:2018jhs,Sleight:2018ryu,Chen:2019gka} for some recent discussion\nin the context of higher dimensional conformal field theory.\n\nIn \\cite{Isachenkov:2016gim} it was noticed that scalar four-point functions $G$ admit an different\ngauge choice for the factor $\\Omega$ such that the associated Casimir equations take the\nform of an eigenvalue equation for an integrable 2-particle Hamiltonian of Calogero-Sutherland\ntype. This was later explained in \\cite{Schomerus:2016epl,Schomerus:2017eny} through harmonic\nanalysis on the conformal group and then extended to fields with spin in which case the quantum\nmechanical potential becomes matrix valued. For spinning fields, the tensor structures of such a\nCalogero-Sutherland gauge were constructed recently in \\cite{Buric:2019dfk}. The goal of our work\nis to extend all this to the case of superconformal symmetry. In \\cite{Buric:2019rms} we have\nconstructed the Casimir equations for superconformal symmetries of type I. The form of these equations\nallows us to compute superblocks systematically as finite sums of spinning bosonic blocks.\nWhat was missing up to now is the construction of the associated tensor structures and in particular\nthe crossing factor $M$. Below we fill this gap and construct both the tensor structures\nand the crossing factor for all superconformal algebras of type I. Explicit formulas\nfor the crossing factors in 4-dimensional superconformal algebras will be given in our\nforthcoming paper \\cite{N1D4_paper}. Early work on tensor structures for four-point\ncorrelators of superconformal field theories includes \\cite{Park:1997bq,Park:1999pd,Osborn:1998qu,Heslop:2002hp,Heslop:2004du,Nirschl:2004pa}.\n\\medskip\n\nLet us now describe the plan of this work in more detail. 
The next section contains some\nbasic background material on superconformal algebras, where we introduce the notion of\nsuperspace and discuss the infinitesimal and global action of the conformal symmetry\nthereon. Special attention will be paid to the action of the so-called Weyl inversion,\nwhich plays an important role in later sections. Section 3 contains the first new\nresult of this work. There we construct a special family $g(x_i)$ of supergroup elements\nthat depend on the insertion points of the fields along with a matrix realization that\nuniquely encodes the correlation function $G(x_i)$. This generalizes a similar formula\nfor bosonic conformal field theories in \\cite{Buric:2019dfk} to the supersymmetric setup.\nIn section 4 we begin to specialize the discussion to superconformal algebras of type I,\ni.e.\\ to cases in which the R-symmetry group contains an abelian factor $U(1)$. After\nintroducing super Cartan coordinates through a particular KAK factorization of the\nsuperconformal group we can construct the tensor factors $\\Omega$ for any choice\nof spins and any channel through an elegant group theoretical construction. This\nthen allows us to build the crossing factor $M$ as a quotient of $s$- and $t$-channel\ntensor factors and prove its conformal invariance explicitly. Our main result, which is\nstated in eqs.\\ \\eqref{eq:crossingmatdef}, \\eqref{eq:crossingmatrix}, expresses the crossing\nfactor $M$ through representation matrices of some particular family of elements of\nthe group $K$ that is generated by dilations, rotations and R-symmetry transformations.\nAll constructions in section 2-4 are illustrated at the example of $\\mathfrak{g} =\n\\mathfrak{sl}(2|1)$ of the $\\mathcal{N} = 2$ superconformal algebra in $d=1$ dimensions.\nLet us also note that our discussion includes the purely bosonic case $\\mathcal{N}=0$\nfor which the crossing factor was not constructed previously beyond a few special spin\nassignments. As a corollary to our discussion we state the crossing factor for arbitrary\nspinning four-point functions in 3-dimensional conformal field theories. For all other\nhigher dimensional examples, bosonic as well as supersymmetric, our results for the\ncrossing factor are stated in the form of a precise easy-to-follow algorithm. In\norder to obtain ready-to-use formulas one needs to input some classical results\nfrom the group theory of rotations $SO(d)$. We will discuss this for certain\nmixed correlators in $\\mathcal{N}=1$ superconformal theories in an accompanying\nwork \\cite{N1D4_paper}.\n\n\n\n\\section{Superspace and Superconformal Symmetry}\n\nIn order to state and prove our main results we need some background on supergroups,\nsuperspaces and the action of superconformal symmetry thereon. Here we want to review\nthese concepts and at the same time introduce a mathematical language that is appropriate\nfor our subsequent discussion. In particular, we recall the notion of superspace\nin the second subsection and explain how one constructs an infinitesimal action of the\nsuperconformal algebra thereon. This action is lifted to global transformations in the\nthird subsection, with some special focus on the so-called Weyl inversion, a close\nrelative of the conformal inversion which is guaranteed to exist in any superconformal\nfield theory. For more mathematical minded readers we have incorporated a more abstract\nand introductory subsection on the concept of supergroups. 
While this helps to make\nequations in subsequent subsections mathematically rigorous, readers who feel familiar\nwith supergroups and superspaces are encouraged to skip the first subsection, at least\nupon first reading.\n\n\\subsection{Some basics on superalgebras and supergroups}\n\nIn this subsection we introduce some very basic notions and notations concerning\nsupergroups. Our conventions agree with \\cite{Kostant:1975qe,Leites:1980rna,Wess:1992cp}.\nLet $\\mathfrak{h}$ be some Lie superalgebra, i.e. a graded vector space $\\mathfrak{h} =\n\\mathfrak{h}_{\\bar 0} \\oplus \\mathfrak{h}_{\\bar 1}$ with a graded Lie bracket.\nWe denote the latter by $[. , . ]_\\pm$. The associated \\textit{universal enveloping\nalgebra} $U(\\mathfrak{h})$ is the graded associative algebra generated by elements $X \\in\n\\mathfrak{h}$, with relations such that graded commutators are given by the Lie bracket.\nIn a slight abuse of notations we shall denote the graded commutators in the universal\nenveloping algebra by $[.,.]_\\pm$ as well.\n\nThe universal enveloping algebra comes equipped with a co-product $\\Delta$, i.e. with\na homomorphism\n$$ \\Delta: U(\\mathfrak{h}) \\rightarrow U(\\mathfrak{h}) \\otimes U(\\mathfrak{h})\\ . $$\nHere, the tensor product is to be understood in the graded sense, i.e. elements are\nmultiplied as\n$$ (a_1 \\otimes b_1) \\cdot (a_2 \\otimes b_2) = (-1)^{|a_2||b_1|} a_1 a_2 \\otimes\nb_1 b_2 \\ ,$$\nwhere $|a|=0$ if $a$ is even and $|a|=1$ if $a$ is odd, as usual. On the generating\nelements $X \\in \\mathfrak{h} \\subset U(\\mathfrak{h})$, the co-product is given by\n\\begin{equation}\n\\Delta(X) = X \\otimes 1 + 1 \\otimes X \\ .\n\\end{equation}\nFrom here one can extend $\\Delta$ uniquely to the entire universal enveloping algebra\nas a homomorphism of graded algebras. The co-product is the algebraic structure that\nallows us to build tensor products of any two representations of the Lie superalgbra\n$\\mathfrak{h}$ or its universal envelop $U(\\mathfrak{h})$.\n\\medskip\n\nLet us now turn to another algebra that we can associate to $\\mathfrak{h}$, namely the so-called\n\\textit{structure algebra} $\\mathcal{F}(\\mathfrak{h})$. By definition, $\\mathcal{F}$ is a\ngraded commutative algebra whose generators $x_A$ are associated to the basis elements $X^A$\nof the Lie superalgebra $\\mathfrak{h}$. The elements $x_A$ possess the same degree $|x_A|= |A|$\nas the generators $X^A$, i.e. $x_A$ is an ordinary bosonic variable if $X^A$ is even while\n$x_A$ is a Grassmann variable in case $X^A$ is odd. From the construction we have sketched\nhere it is evident that $\\mathcal{F}$ can be thought of as the \\textit{algebra of functions}\non the supergroup associated with $\\mathfrak{h}$ which is generated here from set of coordinate\nfunctions, one for each element of the Lie superalgebra.\n\\smallskip\n\n\n\nThe two algebras we have associated to $\\mathfrak{h}$ up to now are actually closely related.\nIn the case of bosonic groups, the generators $X$ of the Lie algebra give rise to (right)\ninvariant vector fields that act on functions as some first order differential operators.\nThese differential operators $\\mathcal{R}_X$ can be multiplied and added and thereby\nprovide an action of elements $a$ in the universal enveloping algebra $U(\\mathfrak{h})$ through\ndifferential operators $\\mathcal{R}_a$ of higher order. 
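As a simple illustration of what we have in mind, consider the purely bosonic, one-dimensional example in which $\mathfrak{h}$ is spanned by a single even generator $X$ with coordinate function $x$. In this case the right invariant vector field is just $\mathcal{R}_X = \partial_x$, and a general element $a = X^k \in U(\mathfrak{h})$ acts through the higher order operator\n\[\n\mathcal{R}_{X^k} = \partial_x^{\,k}\ .\n\]\n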
One may combine the application of any such differential operator to a function on the group with the evaluation at the group unit $e$ to obtain a map that assigns a number\n\begin{equation} \label{eq:duality}\n\mathcal{R}_a(f)(e) = (a,f) = f(a) \in \mathbb{C}\n\end{equation}\nto a pair of an element $ a \in U(\mathfrak{h})$ and a (complex-valued) function $f$ on the group. In other words, elements of $U(\mathfrak{h})$ give linear functionals on the algebra of functions or structure algebra $\mathcal{F}(\mathfrak{h})$ and vice versa. In this form, the statement remains true for Lie superalgebras and is often expressed by saying that $\mathcal{F}(\mathfrak{h})$ and $U(\mathfrak{h})$ are dual to each other, see also \cite{Sternberg:1975} for a nice discussion of this point.\n\medskip\n\nEquipped with these two algebraic structures, namely the universal enveloping algebra $U(\mathfrak{h})$ and the structure algebra $\mathcal{F}(\mathfrak{h})$, we want to introduce the concept of \textit{supergroup elements} $h$. Let us first give a formal definition according to which $h$ is an even element of the graded tensor product $U(\mathfrak{h}) \otimes \mathcal{F}(\mathfrak{h})$ that satisfies\n\begin{equation} \label{eq:Deltah}\n(\Delta \otimes \textit{id}) h = \ \stackrel{1}{h}\ \stackrel{2}{h} \ .\n\end{equation}\nHere, the application of the co-product $\Delta$ to the first tensor factor of $h$ produces an element in $U(\mathfrak{h}) \otimes U(\mathfrak{h}) \otimes \mathcal{F}(\mathfrak{h})$. The factors on the right hand side are elements in the same threefold tensor product. More concretely, $\stackrel{2}{h}$ is the element $1 \otimes h$ with trivial entry in the first tensor factor. Similarly $\stackrel{1}{h}$ denotes the element $h$ with trivial entry in the second tensor factor.\n\nThe element $h$ is not uniquely characterized by these properties, but we do not need to be more specific. It might be helpful to think of $h$ as the object $h = \exp (x_A X^A)$. The element $x_A X^A$ in the exponent is even and upon expansion of the exponential provides us with an even element in the graded tensor product $U(\mathfrak{h}) \otimes \mathcal{F}(\mathfrak{h})$. In order to construct this element one moves all the elements $x_A$ of the structure algebra to the right of the superalgebra generators $X^B$ using\n$$ x_A X^B = (-1)^{|A| |B|} X^B x_A \ ,$$\nwhich implements our convention to consider the graded tensor product of $U(\mathfrak{h})$ and $\mathcal{F}(\mathfrak{h})$ rather than the ordinary one. After the reordering we indeed obtain an infinite sum of products of elements in the universal enveloping algebra $U(\mathfrak{h})$ with elements of the structure algebra $\mathcal{F} (\mathfrak{h})$. If we apply the co-product in the universal enveloping algebra we formally obtain\n\begin{equation}\n(\Delta \otimes \textit{id}) h = e^{x_A (X^A \otimes 1 + 1 \otimes X^A)} = e^{x_A (X^A \otimes 1)}\ne^{x_A (1 \otimes X^A)} =\ \stackrel{1}{h}\ \stackrel{2}{h} \ .\n\end{equation}\nIn writing the single exponential as a product of exponentials we used the fact that the exponent is an even object so that $x_A (X^A \otimes 1)$ commutes with $x_A (1 \otimes X^A)$. In conclusion, we have constructed an object $h$ with the properties we demanded in the previous paragraph, at least formally.
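A minimal example may help to make this construction concrete: suppose $\mathfrak{h}$ contains a single odd generator $Q$ with associated Grassmann coordinate $\theta$. Since $\theta^2 = 0$, the exponential series truncates and\n\[\nh = e^{\theta Q} = 1 + \theta Q \ , \qquad\n(\Delta \otimes \textit{id})\, h = 1 + \theta\,(Q \otimes 1) + \theta\,(1 \otimes Q)\n= \big(1+\theta\,(Q\otimes 1)\big)\big(1+\theta\,(1\otimes Q)\big) = \ \stackrel{1}{h}\ \stackrel{2}{h}\ ,\n\]\nwhere the cross term drops out because it is proportional to $\theta^2$.\n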
In physics, it is\ncustomary to evaluate $h$ in some representation $\\pi$ of the Lie superalgebra $\\mathfrak{h}$\nor, equivalently, its universal enveloping algebra. Thereby one obtains a finite\ndimensional supermatrix $h^\\pi = (\\pi \\otimes \\textit{id}) h$ with entries from the structure\nalgebra $\\mathcal{F}$. In the following we often use the symbol $h$ for such a\nmatrix $h$ rather than the universal element $h \\in U(\\mathfrak{h}) \\otimes \\mathcal{F}(\\mathfrak{h})$.\n\\medskip\n\nWhat we have explained so far actually suffices as background for most of our\ndiscussion below, except for the construction of an infinitesimal action of the\nconformal superalgebra on superspace in the next subsection. To obtain explicit\nformulas for the first order derivative operators $\\mathcal{R}_X$ that are\nassociated with the elements $X \\in \\mathfrak{h}$ let us first extend the structure\nalgebra $\\mathcal{F}(\\mathfrak{h})$ of ``functions on the supergroup'' to a differentially\ngraded algebra $d\\mathcal{F}(\\mathfrak{h})$ of ``differential forms on the supergroup''.\nThe latter is a bi-graded commutative algebra generated by elements $x_A$ and\n$dx_A$, with a second grading associated to the form degree.\nOn the algebra $d\\mathcal{F}(\\mathfrak{h})$ we can define a differential $d$ that\nsquares to zero $d^2 = 0$ and satisfies the graded Leibniz rule\n$$ d (f \\wedge g) = df \\wedge g + (-1)^{\\textit{deg}(f)} f \\wedge dg \\ . $$\nHere $\\textit{deg}(f)$ denotes the form degree of $f$. Let us stress that there is no\nadditional sign associated with the $\\mathbb{Z}_2$ grading that distinguishes between even\n(bosonic) and odd (fermionic) elements. This means that $d$ is treated as an even object.\nHence, for a given $A$, $x_A$ and $dx_A$ possess the same degree, i.e.\\ $dx_A$ is even\n[odd] in case $x_A$ is even [odd].\n\\medskip\n\nSince the structure algebra $\\mathcal{F}(\\mathfrak{h})$ is contained in the larger differentially\ngraded algebra $d\\mathcal{F}(\\mathfrak{h})$ we can also think of the supergroup element $h \\in\nU(\\mathfrak{h}) \\otimes \\mathcal{F}(\\mathfrak{h})$ as an element of the differential graded algebra\n$U(\\mathfrak{h}) \\otimes d\\mathcal{F}(\\mathfrak{h})$ with the additional rule that $dX^A= X^Ad$, i.e.\\\nwe consider the generators $X^A$ of the Lie superalgebra as constants and the differential\n$d$ as even. Now it makes sense to consider the Maurer-Cartan form\n\\begin{equation}\ndh h^{-1} \\in U(\\mathfrak{h}) \\otimes d\\mathcal{F}(\\mathfrak{h}) \\ .\n\\end{equation}\nIf we apply the differential to the equation \\eqref{eq:Deltah} that characterizes\n$h$ we obtain\n\\begin{equation}\n\\Delta(dh h^{-1}) = \\left(\\stackrel{\\phantom{0}}{d}\\stackrel{1}{h} \\ \\stackrel{2}{h} +\n\\stackrel{1}{h} \\ \\stackrel{\\phantom{0}}{d}\\stackrel{2}{h}\\right)\\ \\stackrel{2}{h}\\!^{-1} \\\n\\stackrel{1}{h}\\!^{-1}\n= \\ \\stackrel{\\phantom{0}}{d}\\stackrel{1}{h} \\ \\stackrel{1}{h}\\!^{-1} +\n \\ \\stackrel{\\phantom{0}}{d}\\stackrel{2}{h} \\ \\stackrel{2}{h}\\! ^{-1}\\ .\n\\end{equation}\nWe conclude that the Maurer-Cartan form takes values in the Lie superalgebra $\\mathfrak{h}\n\\subset U(\\mathfrak{h})$, as it is the case for usual bosonic Lie groups. 
Consequently, it may be expanded as
\begin{equation}
dh h^{-1} = dx_A C_{AB} X^B\quad \textit{where} \quad C_{AB} \in
\mathcal{F}(\mathfrak{h})\ .
\end{equation}
The matrix elements $C_{AB}$ possess degree $|A|+|B|$, i.e.\ they are even elements
of the structure algebra if $|A|=|B|$ and odd otherwise. We also stress that the elements
$C_{AB}$ depend on the choice of the supergroup element $h$. One of the main uses of the
matrix elements $C_{AB}$ is to construct the right-invariant vector fields, i.e. an action
of the Lie superalgebra $\mathfrak{h}$ through first order differential operators acting on the
structure algebra $\mathcal{F}(\mathfrak{h})$. These vector fields are given by
\begin{equation} \label{eq:RHA}
 \mathcal{R}_{X^A} = \mathcal{R}_A :=
 \mathcal{C}_{AB}\partial_B\ ,
\end{equation}
where $\mathcal{C} = C^{-1}$ denotes the inverse of $C$ and $\partial_B$ is the (graded)
derivative with respect to the coordinate $x_B$. Its action on an arbitrary function $f
\in \mathcal{F}(\mathfrak{h})$ can be read off from $df = dx_B (\partial_B f)$. In particular,
when acting on the individual coordinate functions $x_A$, it obeys $(\partial_B x_A) =
\delta_{A,B}$. The action of partial derivatives on products of functions satisfies the
graded Leibniz rule which implies that
\begin{equation}
\partial_B x_A = (\partial_B x_A) + (-1)^{|A||B|} x_A \partial_B = \delta_{A,B} +
(-1)^{|A||B|} x_A \partial_B \ .
\end{equation}
Since we have assumed that the differential $d$ acts trivially on the generators
$X^A$ of the universal enveloping algebra, i.e. $(dX^A) = 0$, we conclude that
$(\partial_B X^A) = 0$, i.e.\ the generators $X^A$ are constant objects on
the supergroup satisfying
\begin{equation}
\partial_B X^A = (-1)^{|A||B|} X^A \partial_B \ \ .
\end{equation}
With this list of properties of the partial derivatives we conclude our construction
of the right invariant vector fields \eqref{eq:RHA} and thereby our short mathematical
review of superalgebras and the theory of supergroups. The formulation we have introduced
here is well adapted to our needs below and also paves the way for some interesting
extensions, see the concluding section.

\subsection{Superspace and the infinitesimal action of superconformal symmetry}

This subsection serves two purposes. On the one hand we need to introduce the notion
of superspace that is one of the crucial ingredients throughout the rest of the paper.
In addition we shall also construct an action of the superconformal algebra $\mathfrak{g}$
through first order differential operators on superspace. This infinitesimal action of
the superconformal symmetry on superspace will play only a minor role below since
most of our analysis is based on global transformations.

To set up notation let us denote the superconformal algebra by $\mathfrak{g}$. Its
bosonic subalgebra $\mathfrak{g}_{\bar 0}$ consists of $d$-dimensional conformal transformations
in $\mathfrak{so}(1,d+1)$ as well as R-symmetry transformations in some Lie algebra
$\mathfrak{u}$. To define superspace we pick some decomposition
\begin{equation}\label{eq:decomposition}
 \mathfrak{g} = \mathfrak{m} \oplus \mathfrak{p}
\end{equation}
of $\mathfrak{g}$ into two Lie subalgebras $\mathfrak{p}$ and $\mathfrak{m}$.
The standard
choice would be to define $\mathfrak{p}$ as the span of all elements in $\mathfrak{g}$ that
lower the eigenvalue of the dilation generator $D \in \mathfrak{g}_{\bar 0}$, i.e.
$$ \mathfrak{p} := \mathfrak{g}_{\leq 0} =
\textit{span}\left(\, X \in \mathfrak{g}\, | \, [D,X] = \alpha X
\, , \, \alpha \leq 0 \right)\ . $$
For this choice, $\mathfrak{m}$ then consists of generators $P$ of translations and
the supercharges $Q$. We shall briefly comment on other choices below. We also choose
a basis $X^A$ of elements in $\mathfrak{g}$ that is compatible with the decomposition
\eqref{eq:decomposition}. Elements $X^A$ that lie in the subspace $\mathfrak{m}$ will be
labeled by lower case Latin indices while those that lie in the complement $\mathfrak{p}$
carry Greek indices.

The decomposition of the Lie superalgebra $\mathfrak{g}$ into $\mathfrak{m}$ and $\mathfrak{p}$ determines
a decomposition of the corresponding universal enveloping algebra $U(\mathfrak{g})=
U(\mathfrak{m})\otimes U(\mathfrak{p})$ as well as of the structure algebra
$\mathcal{F}(\mathfrak{g})=\mathcal{F}(\mathfrak{m})\otimes\mathcal{F}(\mathfrak{p})$. Recall that the
structure algebras $\mathcal{F}(\mathfrak{m})$ and $\mathcal{F}(\mathfrak{p})$ are generated by
the coordinates $x_a$ and $x_\alpha$, respectively, with $x_a$ and $x_\alpha$ being
Grassmann variables if the corresponding elements $X^a$ and $X^\alpha$ are fermionic
generators of the Lie superalgebra. The structure algebra $\mathcal{F}(\mathfrak{m})$ is what
is referred to as \textit{superspace} $\mathcal{M} = \mathcal{F}(\mathfrak{m})$. Loosely
speaking one may think of it as the algebra of ``functions on the supergroup $M$'',
though we have not defined what we mean by a supergroup and do not intend to do so.
\medskip

Now that we know what superspace is let us construct an infinitesimal action of
the superconformal symmetry thereon. Here we shall closely follow the general
constructions we outlined in the previous subsection and introduce supergroup
elements $m=m(x_a)$ and $p=p(x_\alpha)$. In case of $m$ we work with the
following standard choice
\begin{equation}
m(x_a) = e^{x_a X^a} .
\end{equation}
The infinitesimal action of the conformal algebra on the coordinates $x_a$ of our
superspace descends from the left-regular action of $\mathfrak{g}$ and thus can be
computed from the Maurer-Cartan form,
\begin{equation}
 dg g^{-1} = dx_A C^{G}_{AB}X^B\ .
\end{equation}
In computing the Maurer-Cartan form for $\mathfrak{g}$ it is useful to relate it to the
Maurer-Cartan forms that are associated with $\mathfrak{m}$ and $\mathfrak{p}$,
\begin{equation}
 dm m^{-1} = dx_a C^M_{ab} X^b \quad , \quad
 dp p^{-1} = dx_\alpha C^{P}_{\alpha\beta} X^\beta\ .\nonumber
\end{equation}
With our choice $g=mp$ of the supergroup element $g$ as a product of the two
elements $m$ and $p$ it follows that
\begin{align}
 & dg g^{-1} = dx_A\partial_A(m p) (m p)^{-1} = dx_a (\partial_a m) m^{-1} +
 dx_\alpha m (\partial_\alpha p) p^{-1} m^{-1} =\nonumber\\[2mm]
 & = dx_a C^M_{ab} X^b + dx_\alpha m C^P_{\alpha\beta}X^\beta m^{-1} =
 dx_a C^M_{ab} X^b + dx_\alpha C^P_{\alpha\beta}\Big((M_1)_{\beta a} X^a +
 (M_2)_{\beta\gamma}X^\gamma\Big) \ .
\label{MC-form}
\end{align}
The last equality defines the two matrices $M_{1,2}$,
\begin{equation}
 m X^\beta m^{-1} = (M_1)_{\beta a} X^a + (M_2)_{\beta\gamma} X^\gamma\ .
\end{equation}
From the equation $(\ref{MC-form})$ we can read off the coefficients $C^G_{AB}$
of the Maurer-Cartan form for $\mathfrak{g}$. The inverse $\mathcal{C}^G$ of this matrix
is easily seen to take the form
\begin{equation}
 \mathcal{C}^G = \begin{pmatrix}
 \mathcal{C}^M & 0 \\
 -M_2^{-1}M_1\mathcal{C}^M & M_2^{-1} \mathcal{C}^P
 \end{pmatrix} \ , \nonumber
\end{equation}
where the first row/column corresponds to directions in $\mathfrak{m}$ while the second
row/column collects all the directions in $\mathfrak{p}$. As stated before, the matrix
$\mathcal{C}^G$ provides us with the right-invariant vector fields \eqref{eq:RHA}
on the conformal supergroup. To project these operators to the superspace one
simply sets $\partial_\alpha=0$,
\begin{equation}\label{eq:resultRM}
 \mathcal{R}^{(M)} = \begin{pmatrix}
 \mathcal{C}^M & 0\\
 -M_2^{-1}M_1\mathcal{C}^M & M_2^{-1} \mathcal{C}^P
 \end{pmatrix}
 \begin{pmatrix} \partial\\ 0 \end{pmatrix} =
 \begin{pmatrix} \mathcal{C}^M_{ab}\partial_b\\
 -(M_2^{-1}M_1\mathcal{C}^M)_{\alpha b}\partial_b
 \end{pmatrix}\ .
\end{equation}
This is the main result of this subsection. As mentioned above, the differential
operators on superspace depend on $C^M$ and hence on the choice of the supergroup
element $m$. The choice of the supergroup element $p$, on the other hand, is
irrelevant since the coefficients $C^P$ of the Maurer-Cartan form $dp p^{-1}$
dropped out in the last step when we set all derivatives $\partial_\alpha$ to
zero.
\smallskip

Our result \eqref{eq:resultRM} applies to all decompositions of $\mathfrak{g}$ into two Lie
subalgebras $\mathfrak{m}$ and $\mathfrak{p}$. As we pointed out in the first paragraph, the standard
choice is to take $\mathfrak{p}$ to contain generators that do not increase the conformal weight.
In that case, the structure algebra $\mathcal{M} = \mathcal{F}(\mathfrak{m})$ is called the standard
superspace. If the superconformal algebra $\mathfrak{g}$ is of type I, however, there
exist other natural choices to which the constructions of this subsection apply. In a
type I superalgebra the R-symmetry contains a $U(1)$ subalgebra which commutes
with all bosonic generators but assigns the fermionic ones a non-trivial
R-charge $\pm 1$. As usual, we can decompose the Lie superalgebra $\mathfrak{g} =
\mathfrak{g}_{\leq 0} \oplus \mathfrak{g}_{> 0}$ by splitting off those generators in
$\mathfrak{g}_{>0}$ that strictly increase the conformal weight. These consist
of supercharges $Q$ and generators of translations. In a type I superalgebra
we can now split the space $\mathfrak{q}$ of supercharges $Q$ according to
the sign of their $U(1)$ R-charge as $\mathfrak{q} = \mathfrak{q}_+ \oplus
\mathfrak{q}_-$. With this in mind we can introduce two new decompositions
$\mathfrak{g} = \mathfrak{m}_\pm \oplus \mathfrak{p}_\pm$ of the superconformal algebra where
 \begin{equation}
 \mathfrak{p}_\pm = \mathfrak{g}_{\leq 0} \oplus \mathfrak{q}_\pm \ , \quad
 \mathfrak{m}_\pm = \mathfrak{g}_1\oplus\mathfrak{q}_\mp = \mathfrak{g}/
 \mathfrak{p}_\pm \ .
\nonumber
\end{equation}
From the properties of type I Lie superalgebras, one may easily show that both
$\mathfrak{p}_\pm$ and $\mathfrak{m}_\pm$ are subalgebras of $\mathfrak{g}$.
The associated superspaces $\mathcal{M}_\pm = \mathcal{F}(\mathfrak{m}_\pm)$ are
called the chiral and anti-chiral superspace, respectively.
\bigskip

\noindent
{\bf Example:} Let us illustrate the construction of superspace and the differential
operators in the case of the 1-dimensional $\mathcal{N}=2$ superconformal algebra $\mathfrak{g}=\mathfrak{sl}(2|1)$.
The smallest faithful representation of $\mathfrak{g}$ is 3-dimensional. We may choose the generators
as
\begin{equation} \label{eq:bosrep}
 D = \begin{pmatrix}
 1/2 & 0 & 0\\
 0 & -1/2 & 0\\
 0 & 0 & 0
 \end{pmatrix},\ P = \begin{pmatrix}
 0 & 1 & 0\\
 0 & 0 & 0\\
 0 & 0 & 0
 \end{pmatrix},\ K = \begin{pmatrix}
 0 & 0 & 0\\
 1 & 0 & 0\\
 0 & 0 & 0
 \end{pmatrix},\ R = \begin{pmatrix}
 -1 & 0 & 0\\
 0 & -1 & 0\\
 0 & 0 & -2
 \end{pmatrix},
\end{equation}
for the four bosonic generators and
\begin{equation} \label{eq:fermrep}
 Q_- = \begin{pmatrix}
 0 & 0 & 0\\
 0 & 0 & 0\\
 0 & 1 & 0
 \end{pmatrix},\ Q_+ = \begin{pmatrix}
 0 & 0 & 1\\
 0 & 0 & 0\\
 0 & 0 & 0
 \end{pmatrix},\ S_- = \begin{pmatrix}
 0 & 0 & 0\\
 0 & 0 & 0\\
 1 & 0 & 0
 \end{pmatrix},\ S_+ = \begin{pmatrix}
 0 & 0 & 0\\
 0 & 0 & 1\\
 0 & 0 & 0
 \end{pmatrix},
\end{equation}
for the fermionic ones. Here we shall consider the decomposition $\mathfrak{g} = \mathfrak{m} \oplus \mathfrak{p}$
with the Lie superalgebra $\mathfrak{m}$ spanned by $P, Q_+$ and $Q_-$. The corresponding superspace
$\mathcal{M}$ is generated by one bosonic variable $u$ along with two Grassmann variables
$\theta$ and $\bar \theta$. In this case the supergroup element $m$ we introduced above takes
the following matrix form
\begin{equation} \label{eq:m-1d}
m(x) = e^{u P + \theta Q_+ + \bar \theta Q_-} = \begin{pmatrix}
 1 & X & \theta \\
 0 & 1 & 0 \\
 0 & -\bar\theta & 1
 \end{pmatrix} \ ,
\end{equation}
where $X = u-\frac12 \theta \bar \theta$ and $x = (u,\theta,\bar\theta)$ represents the three
generators of the structure algebra.
\smallskip

The construction we outlined above provides us with an action of the superconformal algebra
$\mathfrak{g}$ on this superspace through differential operators $\mathcal{R}_X$ of the form
\begin{align}
 & p = \partial_u\ ,\quad & k = -u^2\partial_u - u\theta\partial_{\theta} -
 u\bar\theta\partial_{\bar\theta}\ , \label{eq:sldop1} \\[2mm]
 & d = u\partial_u + \frac12\theta\partial_{\theta} + \frac12\bar\theta
 \partial_{\bar\theta}\ ,\quad
 & r =\theta\partial_{\theta} - \bar\theta\partial_{\bar\theta}\ ,
 \label{eq:sldop2}\\[2mm]
 & q_+ = \partial_{\theta} - \frac12\bar\theta\partial_u \ ,\quad &
 q_- = \partial_{\bar\theta} - \frac12\theta\partial_u\ ,
 \label{eq:sldop3} \\[2mm]
 & s_+ = -(u+\frac12\theta\bar\theta)q_+\ ,\quad &
 s_- = (u-\frac12\theta\bar\theta)q_-\ .
 \label{eq:sldop4}
\end{align}
As we pointed out in our discussion above, the choice of $p$ is not relevant for
the final result.
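Since everything in this example is phrased in terms of explicit $3\times 3$ matrices,
the underlying (anti)commutation relations can also be cross-checked numerically. The
following minimal sketch (assuming Python with NumPy; the variable names are ours and
the list of relations in the comments is representative rather than exhaustive) verifies
a few brackets of the generators \eqref{eq:bosrep} and \eqref{eq:fermrep}:
\begin{verbatim}
# Minimal numerical check of the sl(2|1) generators in the 3-dim representation,
# assuming NumPy is available.
import numpy as np

D = np.diag([0.5, -0.5, 0.0])
R = np.diag([-1.0, -1.0, -2.0])
P  = np.zeros((3, 3)); P[0, 1]  = 1.0
K  = np.zeros((3, 3)); K[1, 0]  = 1.0
Qp = np.zeros((3, 3)); Qp[0, 2] = 1.0   # Q_+
Qm = np.zeros((3, 3)); Qm[2, 1] = 1.0   # Q_-
Sp = np.zeros((3, 3)); Sp[1, 2] = 1.0   # S_+
Sm = np.zeros((3, 3)); Sm[2, 0] = 1.0   # S_-

def com(a, b):   # commutator (at least one even generator)
    return a @ b - b @ a

def acom(a, b):  # anticommutator (two odd generators)
    return a @ b + b @ a

checks = [
    (com(D, P),    P),             # [D, P]     = P
    (com(D, K),   -K),             # [D, K]     = -K
    (com(P, K),    2 * D),         # [P, K]     = 2 D
    (com(D, Qp),   0.5 * Qp),      # [D, Q_+]   = Q_+ / 2
    (com(R, Qp),   Qp),            # [R, Q_+]   = Q_+
    (acom(Qp, Qm), P),             # {Q_+, Q_-} = P
    (acom(Sp, Sm), K),             # {S_+, S_-} = K
    (acom(Qp, Sm), D - 0.5 * R),   # {Q_+, S_-} = D - R/2
]
assert all(np.allclose(lhs, rhs) for lhs, rhs in checks)
print("all (anti)commutator checks passed")
\end{verbatim}
Such a check is of course no substitute for the derivation of the differential
operators \eqref{eq:sldop1}--\eqref{eq:sldop4} themselves.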
We encourage the reader to derive these explicit expressions
from our general formula \eqref{eq:resultRM}.

\subsection{Global superconformal symmetry and Weyl inversions}

Having constructed superspace along with an action of the superconformal algebra
thereon, our next task is to construct the action of global conformal transformations.
As we shall see in a moment, most of the global transformations act in an obvious
way. The only exceptions are the special conformal transformations. For bosonic conformal
symmetry, the easiest way to construct these is by conjugating translations with the
conformal inversion. We follow essentially the same strategy in the supersymmetric context,
except that we need to replace the conformal inversion by a closely related Weyl inversion.
The latter extends nicely to superconformal algebras while conformal inversions may
not actually exist, see below.

Defining the action of global conformal transformations on superspace requires a
little bit of preparation. We shall think of a global symmetry transformation as
being associated to a supergroup element $h=h(s)$. We may consider $h$ as a matrix
whose matrix elements are functions on the supergroup, i.e.\ elements of the structure
algebra generated by the coordinates $s_a$ and $s_\alpha$. The graded commutative
algebra that is generated by these coordinates is just another copy of the algebra
that is generated by $x_a$ and $x_\alpha$. From now on we shall suppress the
dependence on $s$ again. The left action of such an element $h$ on the supergroup
element $g(x)= m(x_a)p(x_\alpha)$ is simply given by the left multiplication $g(x)
\mapsto h g(x)$. In order to obtain the action on superspace, we need to
factorize $h g(x)$ as
\begin{equation}
h g(x) = m(y(x,h)) p(x,h) = e^{y(x,h)_a X^a} p(x,h) \ .
\end{equation}
This factorization defines the $h$-transform $y(x,h)_a$ of the superspace
coordinates $x_a$. Note that $y(x,h)_a$ are elements in the tensor product of
two structure algebras, the one generated by coordinates $x$ and the one that
is generated by $s$. It is particularly easy to apply this definition to
rotations, dilations and R-symmetries since these form a subgroup $K$
that respects the split of $\mathfrak{g}$ into $\mathfrak{m}$ and $\mathfrak{p}$. In fact, the
Lie algebra $\mathfrak{k}$ is even a subalgebra of $\mathfrak{p}$. In order to factorize
$$k g(x) = k m(x) p(x) = m(y(x,k)) p(x,k)$$
for some $k \in K$\footnote{Here we assume that all the matrix elements are
constant functions on the supergroup, i.e. they are proportional to the
identity element of the structure algebra.} all we need to do is move
$k$ through $m$. Since the generators $X^a$ transform in some representation
$\kappa$ of $K$, the effect can be captured by a linear transformation of the
coordinates $x_a$, i.e. $y(x,k)_a =\kappa_{ab}(k) x_b$. Also (super-)translations
are easy to discuss. These are associated with elements $h(c) = m(c)$ so that
multiplication of $h$ with $g(x)$ only requires us to evaluate the product $m(c) m(x) =
m(y(x,c))$. Since bosonic translations commute among each other and with the
supercharges $Q$, the only non-trivial terms in the product $m(c) m(x)$ come
from the non-vanishing anti-commutators of the supercharges.
But these can be
evaluated easily in concrete examples and hence the computation of $c(x)$ is
straightforward.
\medskip

It now remains to discuss the action of special (super-)conformal transformations.
We will not discuss these directly but instead focus on one particular global
superconformal transformation, namely the superconformal extension of the Weyl
inversion $w$. As we shall see, this Weyl inversion relates special superconformal
transformations to supertranslations, just as in the bosonic case.

Before we enter the discussion of the Weyl inversion, let us briefly recall
how the ordinary inversion of conformal field theories is constructed. By
definition, the \textit{conformal group} is a Lie group with $\mathfrak{g}=
\mathfrak{so}(d+1,1)$ as its Lie algebra. Let $O(d+1,1)$
be the group of pseudo-orthogonal matrices. Its identity component is
denoted by $SO^+(d+1,1)$. This group can be realised as the quotient
\begin{equation}
 SO^+(d+1,1) = \textit{Spin}(d+1,1)/\mathbb{Z}_2 \nonumber
\end{equation}
of the universal covering group $\textit{Spin}(d+1,1)$ by its centre. Both $SO^+(d+1,1)$ and
$\textit{Spin}(d+1,1)$ act on the compactified Euclidean space, but only the first action is
faithful. In the case of $\textit{Spin}(d+1,1)$, both elements of the centre act trivially.
Obviously, both $SO^+(d+1,1)$ and $\textit{Spin}(d+1,1)$ possess the same Lie algebra $\mathfrak{g}
= \mathfrak{so}(d+1,1)$. The conformal inversion
\begin{equation}
 I x^\mu = \frac{x^\mu}{x^2} \nonumber
\end{equation}
is an element of $O(d+1,1)$, but it resides in a component that is not connected to the
identity component, i.e. the conformal inversion $I$ is not an element of $SO^+(d+1,1)$.
We can improve on this issue by multiplying the inversion with some spatial reflection.
The so-called Weyl inversion $w=s_{e_d}\circ I$ involves the reflection on $\mathbb{R}^d$
that sends $x_d$ to $- x_d$ and it belongs to $SO^+(d+1,1)$. We can actually construct
the Weyl inversion explicitly through the following exponential of conformal generators,
\begin{equation}
 w = e^{\pi\frac{K_d-P_d}{2}}. \label{Weyl-inversion}
\end{equation}
There are two elements of $\textit{Spin}(d+1,1)$ which project to $w$. We use the
expression \eqref{Weyl-inversion} as our definition of the Weyl inversion for
$\textit{Spin}(d+1,1)$. One can check that its square is the non-trivial element of
the centre, i.e.\ that $w^2=-1$.
\medskip

In passing to the superconformal algebra we use the same formula \eqref{Weyl-inversion} to
define the Weyl element and hence the Weyl inversion. The bosonic part $\mathfrak{g}_{\bar 0}$ of the
superconformal algebra $\mathfrak{g}$ is generated by the bosonic conformal algebra
$\mathfrak{g}_\textit{bos}$ along with the generators $U \in \mathfrak{u}$ of R-symmetry
transformations. The latter commute with all elements of $\mathfrak{g}_\textit{bos}$ and hence the
associated universal enveloping algebras satisfy $U(\mathfrak{g}_{\bar 0}) \cong U(\mathfrak{g}_\textit{bos})
\otimes U(\mathfrak{u})$. By construction, $w$ lies in $U(\mathfrak{g}_\textit{bos})$ and it is
trivial in $U(\mathfrak{u})$,
$$ w = w \otimes e \in U(\mathfrak{g}_\textit{bos}) \otimes U(\mathfrak{u}) \cong
 U(\mathfrak{g}_{\bar 0})\ . $$
While the action of the element $w$ on generators of the R-symmetry transformations
is trivial, its action on the fermionic generators is not.
Using that conjugation of
the generator $D$ of dilations with the Weyl inversion is given by $\text{Ad}_{w}
(D)=-D$ we obtain
\begin{equation}
 \frac12\text{Ad}_w(Q) = \text{Ad}_w([D,Q]) = [\text{Ad}_w(D),\text{Ad}_w(Q)] =
 - [D,\text{Ad}_w(Q)]\ , \nonumber
\end{equation}
i.e.\ when a supercharge $Q$ is acted upon by the Weyl inversion it is sent to a generator
whose conformal weight is $-1/2$. Consequently, the Weyl inversion interchanges generators
of supertranslations and super special conformal transformations. For superconformal
algebras of type I, see the final paragraph of the previous subsection for a definition,
one can similarly use that $\text{Ad}_w(R) = w R w^{-1} = R$ to deduce
\begin{equation}
 \text{Ad}_w(\mathfrak{q}_\pm) \subset \mathfrak{s}_\pm \ . \label{odd-generators}
\end{equation}
In conclusion we have seen that the super Weyl inversion exists for all superconformal
algebras and we stated some of its most important properties. This is to be contrasted
with the fact that a supersymmetric analogue of the ordinary conformal inversion may
actually not exist. If one could choose the superconformal group such that
the inversion $I$ belonged to the bosonic conformal subgroup, then the arguments
leading to eq.\ $(\ref{odd-generators})$ with $w\otimes e$ replaced by $I\otimes e$ would
remain valid. On the other hand, as the example $\mathfrak{g} = \mathfrak{sl}(4|1)$ shows,
the fact that $I$ commutes with rotations is inconsistent with eq. $(\ref{odd-generators})$,
bearing in mind that $\mathfrak{q}_+$ and $\mathfrak{s}_+$ are non-isomorphic modules of the
rotation group. Fortunately for us, the existence of the super Weyl inversion will
suffice.


\medskip

\noindent
{\bf Example:} Let us briefly discuss superconformal transformations and in
particular the super Weyl inversion for the Lie superalgebra $\mathfrak{sl}(2|1)$.
As we discussed at the end of the previous subsection, this Lie superalgebra admits
a 3-dimensional representation. All generators have been spelled out in this
representation above. Within this representation, the supergroup element
$m(x)$ takes the form \eqref{eq:m-1d}. The subgroup $K$ is generated by dilations
and $U(1)_R$ symmetry transformations, i.e.\ by $D$ and $R$, so that $k
= \exp(\lambda D + \vartheta R)$. Under global transformations with elements $k \in K$
the superspace coordinates $x=(u,\theta,\bar \theta)$ transform as
\begin{equation}
y(x,k) = (e^\lambda u, e^{\frac12 \lambda +\vartheta}\theta, e^{\frac12\lambda - \vartheta} \bar \theta) \ .
\end{equation}
Here we can either think of $\lambda$ and $\vartheta$ as some real parameters of the
transformation or as coordinates on the supergroup, i.e.\ as two generators of the
structure algebra. Supertranslations with an element $m(c) = m(v,\eta,\bar \eta)$
act as $m(c) m(x) = m(c(x))$ with
\begin{equation}
y(x,c) = c(x) = (u+v + \frac12 \theta \bar \eta + \frac12 \bar \theta \eta, \theta+\eta,
\bar \theta + \bar \eta)\ .
\end{equation}
The components of $c = (v,\eta,\bar \eta)$ are generators of the structure algebra.
It remains to discuss the Weyl inversion.
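As a quick numerical cross-check of what follows, the Weyl element and its adjoint
action can be evaluated directly in the 3-dimensional representation. A minimal
sketch (assuming Python with NumPy and SciPy; the variable names are ours) that
reproduces the matrix displayed next, its square, and the conjugation relations
quoted at the end of this example reads:
\begin{verbatim}
# Weyl inversion in the 3-dim representation of sl(2|1): a minimal numerical
# sketch, assuming NumPy and SciPy are available.
import numpy as np
from scipy.linalg import expm

P  = np.zeros((3, 3)); P[0, 1]  = 1.0
K  = np.zeros((3, 3)); K[1, 0]  = 1.0
Qp = np.zeros((3, 3)); Qp[0, 2] = 1.0   # Q_+
Qm = np.zeros((3, 3)); Qm[2, 1] = 1.0   # Q_-
Sp = np.zeros((3, 3)); Sp[1, 2] = 1.0   # S_+
Sm = np.zeros((3, 3)); Sm[2, 0] = 1.0   # S_-

w = expm(np.pi * (K - P) / 2)           # eq. (Weyl-inversion) in this example
winv = np.linalg.inv(w)

assert np.allclose(w, [[0, -1, 0], [1, 0, 0], [0, 0, 1]], atol=1e-12)
assert np.allclose(w @ w, np.diag([-1, -1, 1]), atol=1e-12)  # w^2 = diag(-1,-1,1)
# w^{-1} (P, Q_+, Q_-) w = (-K, -S_+, S_-)
assert np.allclose(winv @ P  @ w, -K, atol=1e-12)
assert np.allclose(winv @ Qp @ w, -Sp, atol=1e-12)
assert np.allclose(winv @ Qm @ w,  Sm, atol=1e-12)
print("Weyl inversion checks passed")
\end{verbatim}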
Within the 3-dimensional representation
it is straightforward to compute the Weyl inversion from eq.\ \eqref{Weyl-inversion},
\begin{equation} \label{eq:wmatrix}
 w = e^{\pi\frac{K-P}{2}} = \begin{pmatrix}
 0 & -1 & 0\\
 1 & 0 & 0\\
 0 & 0 & 1
 \end{pmatrix}.
\end{equation}
Note that $w^2 = \textit{diag}(-1,-1,1)$, i.e.\ it squares to $-1$ within the
bosonic conformal group and is trivially extended within the R-symmetry group.
It is now straightforward to compute the action of the Weyl inversion on superspace
by decomposing the matrix $w m(x) = m(w(x)) p(x,w)$ with $w(x)
= y(x,w)$ given by
\begin{equation}
 \ w(u) = -\frac{1}{u}\ ,\quad w(\theta) = \frac{\theta}{u}\ ,\quad
 w(\bar\theta) = \frac{\bar\theta}{u}\ . \label{w-action-1d}
\end{equation}
Note that the action of $w$ on the bosonic coordinate $u$ is the same as in
bosonic conformal field theory. This had to be the case, since in the chosen coordinate
system on the superspace $\mathcal{M}$ the action of the conformal algebra generators on
$x$ is the same as in the bosonic theory. Furthermore, we have $w(p,q_+,q_-)w^{-1}=(-k,-s_+,s_-)$,
in accordance with the relations $w^{-1}(P,Q_+,Q_-)w = (-K,-S_+,S_-)$ satisfied by the $3\times3$
matrices. Often such conditions are used to derive the action of the inversion. In the
approach here, this is not necessary as the action of $w$ can be computed directly.
This concludes our discussion of the 1-dimensional $\mathcal{N} =2$ superspace and
the global action of the superconformal symmetry on it.


\section{Lifting Correlators to the Supergroup}

This section contains the first new result of the present work. We establish an isomorphism
between the solutions of superconformal Ward identities that are satisfied by a four-point
function of arbitrary spinning fields and certain covariant functions on the superconformal
group, to which one may also refer as $K$-spherical functions. The construction has its
roots in ideas from \cite{Dobrev:1977qv}, and generalises that of \cite{Buric:2019dfk} to
the superconformal setting. One key ingredient in our formula is a family of supergroup
elements $g(x_i)$ that depends on the insertion points of the four fields in superspace.
These will play an important role in the following sections as well. In the first subsection
we state all this precisely before we illustrate the formulas with the example of $\mathfrak{g}=
\mathfrak{sl}(2|1)$ in the second. The third subsection contains the proof of our
statement.

\subsection{Statement of the result}

Let us now consider a four-point function in some superconformal field theory.
To each field we associate a copy of our superspace $\mathcal{M}$. The generators
$x_{ia}$ of these spaces carry a label $i=1, \dots, 4$ in addition to
the label $a$ we introduced in the previous section. The corresponding supergroup
elements $m_i = m(x_i)$ are given by
\begin{equation}
m(x_i) = e^{x_{ia} X^a} .
\end{equation}
Here the summation over $a$ is understood. Given any pair of labels $i,j$ we
define the variables $x_{ij} = (x_{ija}) \in \mathcal{M}_i \otimes \mathcal{M}_j$
through
\begin{equation} \label{eq:xij}
m(x_{ij}) = m(x_j)^{-1} m(x_i) \ .
\end{equation}
Concrete expressions for the components of $x_{ij}$ can be worked out from the
anti-commutator relations of the supercharges $Q$.
One may think of $m(x_i)$ as
a function on superspace with values in the universal enveloping algebra or,
more concretely, after evaluation in a fundamental representation of the Lie
superalgebra $\mathfrak{g}$, as a matrix valued function on superspace.

In the last section we also introduced the Weyl element $w$ through equation
\eqref{Weyl-inversion}. Note that $w$ is constructed out of generators of the
bosonic conformal group only. In particular it acts trivially within the
R-symmetry group $U$. We can think of $w$ as a grouplike element in the
universal enveloping algebra or, after application of a fundamental
representation, as a concrete matrix such as in eq.\ \eqref{eq:wmatrix}.
With the help of the Weyl inversion, let us define a new family of
supergroup elements $n$ through
\begin{equation} \label{eq:nx}
n(x) = w^{-1} m(x) w\ .
\end{equation}
Since $m$ involves only generators $X^a \in \mathfrak{g}_{>0}$ of the superconformal
algebra that raise the conformal weight, i.e.\ generators $P$ of translations
and supercharges $Q$, the element $n$ is built using generators $Y^a$ from the
algebra $\mathfrak{g}_{<0}$ that lower the conformal weight, see our previous discussion
of the Weyl inversion. This means that $n$ involves special conformal generators
$K$ as well as the fermionic generators $S$.

In order to proceed, let us introduce another supergroup element $k=k(t)$ using
the remaining generators $X \in \mathfrak{g}_0$ that commute with the generator of
dilations and therefore neither appear in $n$ nor in $m$. This means that $k$ is
built from the generators of dilations, rotations and R-symmetry
transformations, all of which are even (bosonic). Given the three supergroup
elements $m,n,k$ we can now decompose $w m(x)$ as
\begin{equation} \label{eq:factorization}
w m(x) = m(y(x))\, n(z(x)) \, k(t(x)) \ ,
\end{equation}
where the components of $y(x) = (y(x)_a)$, $z(x) = (z(x)_a)$ and $t(x)
= (t(x)_\varrho)$ are certain functions of the superspace coordinates $x_a$ that
can be worked out concretely on a case-by-case basis. We shall state concrete
formulas in some examples below. Let us stress that it is through this factorization
\eqref{eq:factorization} that we introduce the action of the Weyl inversion
$w$ on superspace, i.e. by definition $y(x) = w x$. We consider the
functions $y,z$ and $t$ as given for now and use them to introduce
\begin{equation}
y_{ij} = y(x_{ij}) = w x_{ij} \ , \quad z_{ij}= z(x_{ij}) \ , \quad
t_{ij} = t(x_{ij}) \ .
\end{equation}
By definition we have
\begin{equation} \label{eq:factorizationij}
w m(x_{ij}) = m(y_{ij}) \, n(z_{ij}) \, k(t_{ij}) \ .
\end{equation}
The components of $x_{ij}, y_{ij}, z_{ij}$ and $t_{ij}$ are elements in the
four-fold tensor product $\mathcal{M}^4 \cong \mathcal{M}^{\otimes 4}$ of the
superspace $\mathcal{M}$, one copy for each insertion point. This is all we
need to know about the superconfiguration space of the four insertion points.
\medskip

So, let us now consider some four-point correlation function $G_4$ in a quantum
field theory with superconformal symmetry given by $\mathfrak{g}$. The fields $\Phi$ of our
theory are organized in supermultiplets. We label these supermultiplets through
the quantum numbers of their superprimaries. These consist of a conformal weight
$\Delta$, a spin $\lambda$ and the R-charges $q$.
The collection of these
quantum numbers determines a finite dimensional irreducible representation
$\rho = \rho_{\Delta, \lambda,q}$ of the Lie algebra $\mathfrak{k} = \mathfrak{g}_0$
that is spanned by dilations, rotations and R-symmetries. We denote the carrier
space of this representation by $V = V_\rho$ and shall often refer to it as the
space of superpolarizations. Let us stress that elements of $V_\rho$ are associated
with polarizations of the superprimary in the supermultiplet $\Phi$. In our
four-point function we have four supermultiplets whose superprimary components
transform in representations $\rho_i,\ i = 1, \dots, 4$. The polarizations of
these four superprimary fields span the vector spaces $V_i$.

Given these data, we now consider the space $\mathcal{F}(\mathfrak{g})\otimes
V_{1234}$ of ``functions $F$ on the supergroup'' that take values in the vector
space $V_{1234} = V_1 \otimes \dots \otimes V_4$. Among its elements we restrict
to those functions $F$ that possess the following covariance property\footnote{
Mathematically minded readers should think of $g$ as a supergroup element $g \in
U(\mathfrak{g}) \otimes \mathcal{F}$ where $\mathcal{F}$ can be any graded commutative
algebra, see section 2.1. The object $F(g) \in \mathcal{F} \otimes V_{1234}$ is
then obtained using the duality between $\mathcal{F}(\mathfrak{g})$ and $U(\mathfrak{g})$,
see eq.\ \eqref{eq:duality}.}
\begin{align}\label{eq:covariance}
 F(k_l g k_r)= \Big(\rho_1(k_l)\otimes\rho_2(w k_l w^{-1})\otimes
 \rho_3(k_r^{-1})\otimes\rho_4(w k_r^{-1}w^{-1})\Big) F(g) \ ,
\end{align}
for all $k_l,k_r \in K$. In analogy with ordinary Lie theory, such an $F$ will
be called a $K$-spherical function. To digest the mathematical meaning of this
formula a bit better, let us pretend for a moment that we are dealing with some
ordinary Lie algebra $\mathfrak{g}$ rather than a superalgebra. In that case, $g$ as
well as $k_l, k_r$ are elements of the bosonic group $G$. When we write $F(g)$
we let the group element $g$ act as a global symmetry transformation on the
space $\mathcal{F}(\mathfrak{g})$ of functions on the group and evaluate the result
at the group unit. Stated more directly, we simply evaluate the vector valued
function $F$ at the point $g$ of the group manifold. Almost the same is true
for superalgebras except that $g$ is a matrix whose matrix elements are taken
from some Grassmann algebra and $F$ is a prescription that turns such a matrix
into a vector $F(g)$ whose components are elements of that Grassmann algebra.
To evaluate $F(k_l g k_r)$ we employ the left-right action of $K \times K$ on
the space $\mathcal{F}(\mathfrak{g})$ of functions on the supergroup to transform $F$
into a new element $F^{(k_l,k_r)}$ of the space $\mathcal{F}(\mathfrak{g}) \otimes
V_{1234}$. When we apply this transformed $F^{(k_l,k_r)}$ to $g$ we obtain
another vector $F^{(k_l,k_r)}(g) = F(k_lgk_r)$ with Grassmann valued components.
The covariance condition \eqref{eq:covariance} selects those elements $F$ for
which the two vectors $F(g)$ and $F(k_lg k_r)$ are related by a specific matrix
rotation that is obtained from representation matrices of $k_l$ and $k_r$ in the
representations $\rho_i$. The precise construction of this matrix, which
also involves conjugation with the Weyl element $w$ in two of the four
tensor factors, will become clear in the third subsection.

Let us now come back to our correlation function $G_4$.
By construction, $G_4(x_i)$
is a function on the four-fold tensor product $\mathcal{M}^4$ of superspace that takes
values in the space $V_{1234}$ of polarizations, i.e.\ $G_4 \in \mathcal{M}^4 \otimes
V_{1234}$. Being the four-point function in some superconformal field theory, $G_4$
transforms in a very special way under superconformal transformations. This can be
expressed in terms of a set of superconformal Ward identities. As a consequence of
these covariance properties one may show that, given $G_4$, there exists a unique
function $F \in \mathcal{F}(\mathfrak{g}) \otimes V_{1234}$ on the supergroup with
covariance property \eqref{eq:covariance} such that
\begin{eqnarray}
G_4(x_i) & = & \Big(1\otimes\rho_2(k(t_{21}))^{-1}\otimes1\otimes
\rho_4(k(t_{43}))^{-1}\Big) F(g(x_i))\, , \label{magic-formula}\\[2mm]
& & \textit{where}\ g(x_i) = n(y_{21})^{-1} m(x_{31}) n(y_{43})\ . \label{eq:gxi}
\end{eqnarray}
The argument of $F$ is a product of supergroup elements, i.e.\ an element of $U(\mathfrak{g})
\otimes \mathcal{M}^4$ or some matrix representation thereof. After the application of
$F$ we obtain an element of $\mathcal{M}^4 \otimes V_{1234}$. We may think of this as
a vector valued function on the four-fold tensor product of superspaces which can be
compared to $G_4$. The factor in front of $F$ that relates $F(g(x_i))$ to $G_4(x_i)$
is a certain matrix of functions on $\mathcal{M}$ that acts non-trivially on the two
factors $V_2$ and $V_4$. We shall also refer to eq.\ \eqref{magic-formula} as the
supersymmetric \textit{lifting formula}.
\smallskip

Let us remark that there is a quick sanity check of our formula, namely one may
verify that both sides of the lifting formula \eqref{magic-formula} satisfy the same
Ward identities for infinitesimal transformations generated by elements $X
\in \mathfrak{g}_{\geq 0}$. The latter is spanned by translations, supercharges $Q$,
rotations, dilations and R-symmetry transformations. The key observation is
that
\begin{equation}\label{eq:diffrel}
\sum_{j=1}^4 \mathcal{R}_X^{(j)} g(x_i)
= \left[ X \otimes \textit{id}, g(x_i) \right]\ .
\end{equation}
Recall that the argument $g(x_i)$ of $F$ may be considered as a matrix whose
entries are functions on the four-fold product of superspace. On these matrix
elements we act with the sum of right invariant vector fields $\mathcal{R}_X$
for $X \in \mathfrak{g}_{\geq 0}$, acting on one set of superspace coordinates each.
The differential operators $\mathcal{R}$ were constructed in the previous section.
Our claim is that the resulting matrix of functions on superspace is the same
as the matrix commutator of the representation matrix for $X$ with the
product of supergroup elements. This property holds essentially by construction
of the argument of $F$. This is not a full proof of our formula yet since the
argument cannot easily be extended to special (super-)conformal transformations.
We give a complete derivation in the third subsection after we have
illustrated the notations and constructions we introduced in this section
for the $\mathcal{N}=2$ superconformal algebra in $d=1$ dimension.

\subsection{Illustration for 1-dimensional superconformal algebra}

Let us continue to illustrate our constructions and statements in the example of the
$\mathcal{N}=2$ superconformal algebra in $d=1$.
Recall that the fundamental representation\nof this algebra is 3-dimensional and hence we realize all our supergroup elements as\n$3\\times 3$ matrices with components in the superspace. The elements $m(x)$ were\nconstructed in eq.\\ \\eqref{eq:m-1d} already. The Weyl inversion $w$ and its action on\nsuperspace were worked out in eqs.\\ (\\ref{eq:wmatrix}) and \\eqref{w-action-1d},\nrespectively. It is easy to determine the $3 \\times 3$ matrices $n(x)$ to take\nthe form\n\\begin{equation} \\label{eq:n-1d}\n n(x) = w^{-1} m(x) w = \\begin{pmatrix}\n 1 & 0 & 0\\\\\n -X & 1 & -\\theta\\\\\n -\\bar\\theta & 0 & 1\n \\end{pmatrix}\\ ,\n\\end{equation}\nwhere $X = u - \\frac12 \\theta \\bar\\theta$ is the same even combination of\nsuperspace coordinates $(u,\\theta,\\bar \\theta)$ that appeared in our formula\n\\eqref{eq:m-1d} for $m(x)$. The central ingredient in our construction above is\nthe factorization formula \\eqref{eq:factorization} for $w m(x)$. In the case\nof $\\mathfrak{g} = \\mathfrak{sl}(2|1)$ this reads\n\\begin{equation}\n \\begin{pmatrix}\n 0 & -1 & 0\\\\\n 1 & X & \\theta\\\\\n 0 & -\\bar\\theta & 1\n \\end{pmatrix} = \\begin{pmatrix}\n 1 & -\\frac1u \\left(1+\\frac{\\theta\\bar\\theta}{2u}\\right) & \\theta\/u\\\\\n 0 & 1 & 0\\\\\n 0 & -\\bar\\theta\/u & 1\n \\end{pmatrix} \\begin{pmatrix}\n 1 & 0 & 0\\\\\n u+\\frac12\\theta\\bar\\theta & 1 & \\theta\\\\\n \\bar\\theta & 0 & 1\n \\end{pmatrix} \\begin{pmatrix}\n \\frac1u\\left(1-\\frac{\\theta\\bar\\theta}{2u}\\right) & 0 & 0\\\\\n 0 & u\\left(1-\\frac{\\theta\\bar\\theta}{2u}\\right) & 0\\\\\n 0 & 0 & 1-\\frac{\\theta\\bar\\theta}{u}\n \\end{pmatrix} \\ . \\label{eq:matrixfactorization-1d}\n\\end{equation}\nComparing the first of the three factors with the expression \\eqref{eq:m-1d} for\n$m(y)$ we deduce\n\\begin{equation}\ny(x) = (Y+\\frac12\\eta\\bar\\eta,\\eta,\\bar\\eta) = w(u,\\theta,\\bar\\theta) =\n\\left(\\frac{-1}{u},\\frac{\\theta}{u},\\frac{\\bar\\theta}{u}\\right)\\ . \\label{eq:wact-1d}\n\\end{equation}\nThis agrees of course with the result we found in eq. \\eqref{w-action-1d}. Turning\nto the second matrix factor in the factorization formula and comparing with eq.\\\n\\eqref{eq:n-1d} for $n(z)$ we conclude\n\\begin{equation} \\label{eq:zcoord-1d}\nz(x) = (Z+\\frac12\\zeta\\bar\\zeta,\\zeta,\\bar\\zeta) = (-u,-\\theta,-\\bar\\theta) \\ .\n\\end{equation}\nUsing the representation matrices for $D$ and $R$ that we spelled out in\neq.\\ \\eqref{eq:bosrep}, the third factor, finally, can be written as\n\\begin{equation}\nk(t(x)) = e^{-\\log u^2 D + \\frac{\\theta\\bar\\theta}{2u}R}\\ .\n\\end{equation}\nWe lift the matrix equation \\eqref{eq:matrixfactorization-1d} to the following\nfactorization identity for supergroup elements\n\\begin{equation}\n w m(x) = w e^{x\\cdot X} =\n e^{w(x)\\cdot X} e^{-x\\cdot X^w} e^{-\\log u^2 D + \\frac{\\theta\\bar\\theta}{2u}R} \\ ,\n \\label{fund-1d}\n\\end{equation}\nwhere $X^w = w^{-1}(P,Q_+,Q_-)w = (-K,-S_+,S_-)$. Given several points $x_i$ in superspace,\nwe can now compute the supercoordinates $x_{ij}$ by evaluating the product $m(x_j)^{-1}\nm(x_i)$. 
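This multiplication is straightforward to automate. The sketch below is a minimal
numerical check (assuming Python with NumPy; the realization of the four Grassmann
variables as left-multiplication operators on a 16-dimensional exterior algebra is
merely a bookkeeping device of ours, not part of the construction in the text) which
multiplies the $3\times 3$ supermatrices \eqref{eq:m-1d} for two insertion points and
confirms the component formulas displayed next.
\begin{verbatim}
# Minimal numerical check of m(x_2)^{-1} m(x_1) = m(x_12), assuming NumPy.
# The Grassmann variables (theta_1, thetabar_1, theta_2, thetabar_2) are
# realized as left-multiplication operators on a 16-dim exterior algebra.
import numpy as np

NGEN, DIM = 4, 16
ID = np.eye(DIM)

def gen(i):
    # left multiplication by the i-th Grassmann generator
    L = np.zeros((DIM, DIM))
    for S in range(DIM):                        # S = bitmask of occupied generators
        if not S & (1 << i):
            sign = (-1) ** bin(S & ((1 << i) - 1)).count("1")
            L[S | (1 << i), S] = sign
    return L

th1, tb1, th2, tb2 = (gen(i) for i in range(NGEN))

def m(U, th, tb):
    # 3x3 supermatrix of eq. (eq:m-1d), entries in the Grassmann algebra;
    # U is the (even) u-coordinate times the identity of the algebra
    X, Z = U - 0.5 * th @ tb, np.zeros((DIM, DIM))
    return np.block([[ID, X, th], [Z, ID, Z], [Z, -tb, ID]])

u1, u2 = 0.7, -1.3
m1, m2 = m(u1 * ID, th1, tb1), m(u2 * ID, th2, tb2)
m2inv = m(-u2 * ID, -th2, -tb2)                 # group inverse m(x_2)^{-1}
assert np.allclose(m2inv @ m2, np.eye(3 * DIM))

# components of x_12 as stated in eq. (distance) below
u12 = (u1 - u2) * ID - 0.5 * (th1 @ tb2 + tb1 @ th2)
assert np.allclose(m2inv @ m1, m(u12, th1 - th2, tb1 - tb2))
print("supertranslation 'distance' check passed")
\end{verbatim}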
The result is given by $x_{ij} = (u_{ij},\theta_{ij}, \bar \theta_{ij})$ with
\begin{equation}
 u_{ij} = u_i - u_j -\frac12\theta_i\bar\theta_j - \frac12\bar\theta_i\theta_j \ ,\quad
 \theta_{ij} = \theta_i - \theta_j \ ,\quad
 \bar\theta_{ij} = \bar\theta_i - \bar\theta_j \ .
 \label{distance}
\end{equation}
For completeness let us also state how the Weyl inversion acts on $x_{ij}$,
\begin{equation}
 w(x_{ij}) = (-u_{ij}^{-1},u_{ij}^{-1}\theta_{ij},
 u_{ij}^{-1}\bar\theta_{ij})\ . \label{inverse}
\end{equation}
Of course this coincides with the formula \eqref{eq:wact-1d} applied to the
superspace coordinates $x_{ij}$. At this point we have explained all the
ingredients that are needed to construct the supergroup elements $g(x_i)$
that were introduced in eq.\ \eqref{eq:gxi}.
\smallskip

Let us now consider a four-point function $G_4$ of primary fields with conformal
weights $\Delta_i$ and R-charges $r_i$ for $i=1, \dots, 4$. Given $\Delta$ and
$r$, the corresponding representation $\rho_i$ of the group $K = SO(1,1) \times
U(1)$ reads
\begin{equation} \label{eq:rho-1d}
 \rho_{\Delta,r}(e^{\lambda D + \kappa R}) = e^{-\Delta\lambda + r\kappa}\ .
\end{equation}
Since the group $K$ is abelian, the space $V$ of polarizations is 1-dimensional
and so is the tensor product $V_{1234} = V_1 \otimes \dots \otimes V_4$. According
to our general result \eqref{magic-formula}, there exists a unique functional $F$
with the covariance properties
\begin{align}
 F(e^{\lambda_l D + \kappa_l R} g e^{\lambda_r D + \kappa_r R})=
 e^{(\Delta_2-\Delta_1)\lambda_l+(r_1+r_2)\kappa_l} e^{(\Delta_3-\Delta_4)\lambda_r
 - (r_3+r_4)\kappa_r} F(g) \ ,
\end{align}
such that the lifting formula reads
\begin{equation}
 G_4(x_i) = \Omega(x_i) \, F(e^{-w(x_{21})\cdot X^w} e^{x_{31}\cdot X}
 e^{w(x_{43})\cdot X^w})
 \label{magic-1d}
\end{equation}
and the prefactor $\Omega$ is given by
\begin{equation} \label{eq:Omegasl2}
\Omega =\Omega(x_i) = \frac{e^{r_2\frac{\theta_{12}\bar\theta_{12}}{2u_{12}}+
 r_4\frac{\theta_{34}\bar\theta_{34}}{2u_{34}}}}{u_{12}^{2\Delta_2}
 u_{34}^{2\Delta_4}}\ .
\end{equation}
It is instructive to verify that the commutation relations \eqref{eq:diffrel}
hold for the argument of $F$ and to evaluate $G_4$ for Weyl-inverted arguments
$w(x_i)$, thereby showing that the right hand side of the lifting formula
\eqref{magic-1d} indeed satisfies the same conformal Ward identities as the
four-point function.

\subsection{Proof of the lifting formula}

The goal of this subsection is to prove the main result \eqref{magic-formula} for
an arbitrary superconformal group. Before doing that, let us give one more definition,
an extension of the factorization formula \eqref{eq:factorization}
\begin{equation}
 h m(x) = m(y(x,h)) n(z(x,h)) k(t(x,h)) \ , \label{matrix-identity}
\end{equation}
from the Weyl inversion $h = w$ to arbitrary elements $h$ of the superconformal
group. This formula also extends our analysis in section 2.3 where we studied the action
of global conformal transformations on superspace. At the time we only cared about the
first factor $m(y(x,h))$ in the product on the right hand side. The new formula
\eqref{matrix-identity} extends the action of global superconformal transformations
to the whole superconformal group. Otherwise all the additional explanations we
provided in section 2.3 remain applicable.
The extended factorization formula involves
three sets of functions $y(x,h) = (y(x,h)_a)$, $z(x,h) = (z(x,h)_a)$ and $t(x,h) =
(t(x,h)_\varrho)$. For $h=w$ we recover the functions we introduced in the
previous section.

A four-point correlation function $G_4$ satisfies a set of Ward identities. For global
superconformal transformations $h$ these may be written in the form
\begin{equation} \label{eq:G4Wardid}
 G_4 (x_i^h) = \Big(\bigotimes_{i=1}^4 \rho_i (k(t(x_i,h)))\Big) G_4(x_i) \ .
\end{equation}
Note that correlation functions are essentially invariant under these transformations,
except for some factors depending on the weight, spin and the R-charges. This dependence
is encoded in the choice of representations $\rho_i$, as we explained above. In a first
step we want to lift the correlator $G_4$ to an object $F_4\in\mathcal{F}_1\otimes V_1
\otimes \dots \otimes \mathcal{F}_4\otimes V_4$, where $\mathcal{F}_i$ are supercommuting
copies of the structure algebra $\mathcal{F}(\mathfrak{g})$ of functions on the supergroup.
This can be done in a unique way if we require
\begin{equation}\label{eq:F4rightcov}
 F_4(m(x_i)) = G_4(x_i)\ ,\quad \quad F_4 (g_i n_i k_i) =
 \bigotimes_{i=1}^{4} \rho_i (k_i^{-1})
 F_4(g_i) \ .
\end{equation}
Here our notations are the same as in section 3.1, see our extended discussion before
equation \eqref{magic-formula}. The Ward identities \eqref{eq:G4Wardid} satisfied by
$G_4$ imply the following invariance conditions satisfied by $F_4$ under simultaneous
left multiplication of its four arguments by an element $h$ of the superconformal group,
\begin{eqnarray}\label{eq:F4leftinv}
 F_4(h m(x_i)) & = & F_4\Big( m(x_i^h) n(z(x_i,h)) k(t(x_i,h)) \Big) \\[2mm]
 & = & \Big( \bigotimes_{i=1}^4 \rho_i (k(t(x_i,h))^{-1}) \Big) G_4(x_i^h) =
 G_4(x_i) = F_4(m(x_i)) \ .
\end{eqnarray}
Besides the Ward identity, we have used the definitions $(\ref{matrix-identity})$
and $(\ref{eq:F4rightcov})$. Given this element $F_4$ and the Weyl inversion $w$ we
can construct a new object $F\in\mathcal{F}(\mathfrak{g})\otimes V_{1234}$ through
the prescription
\begin{equation}\label{eq:FfromF4}
 F(g) := F_4 (e,w^{-1},g,gw^{-1}) \ .
\end{equation}
While this might look a bit bizarre at first, it is easy to verify that it
defines a $K$-spherical function $F$, i.e. that $F$ satisfies the covariance
law \eqref{eq:covariance}. Indeed, from the definition \eqref{eq:FfromF4} of
$F$, the left invariance condition \eqref{eq:F4leftinv} and the right
covariance law in eq.\ \eqref{eq:F4rightcov} of $F_4$ we obtain
\begin{align*}
 F(k_l g k_r) &= F_4(e,w^{-1},k_l g k_r, k_l g k_r w^{-1}) =
 F_4(k_l^{-1},w^{-1} w k_l^{-1}w^{-1},g k_r, g w^{-1} w k_r w^{-1} )\\[2mm]
 &= \Big(\rho_1(k_l)\otimes\rho_2(w k_l w^{-1})
 \otimes\rho_3(k_r^{-1})\otimes\rho_4(wk_r^{-1}w^{-1})\Big) F(g) \ .
\end{align*}
In conclusion we have shown that a correlation function $G_4$ provides us with a
$K$-spherical function $F$. It is actually not difficult to invert the map and
recover $G_4$ from $F$.
Suppressing the last two arguments and their corresponding
prefactors for simplicity, we have
\begin{eqnarray*}
F_4(m(x_1),m(x_2)) & = & \left(1 \otimes \rho_2(k(t_{21})^{-1})\right)
F_4\left(m(x_1) n(y_{21}), m(x_2) k(t_{21})^{-1} n(z_{21})^{-1} \right) \\[2mm]
& = & \left(1 \otimes \rho_2(k(t_{21})^{-1})\right)
F_4\left(m(x_1) n(y_{21}), m(x_1) m(x_{21}) k(t_{21})^{-1} n(z_{21})^{-1} \right) \\[2mm]
& = & \left(1 \otimes \rho_2(k(t_{21})^{-1})\right)
F_4\left(m(x_1) n(y_{21}), m(x_1) w^{-1} m(y_{21}) \right) \\[2mm]
& = & \left(1 \otimes \rho_2(k(t_{21})^{-1})\right)
F_4\left(m(x_1) n(y_{21}), m(x_1) n(y_{21}) w^{-1} \right).
\end{eqnarray*}
In the first step we used the covariance property \eqref{eq:F4rightcov} of $F_4$ in the
first two arguments to multiply the first argument with $n(y_{21})$ and the second with
$k(t_{21})^{-1} n(z_{21})^{-1}$. Since the latter contains a factor $k$ it needed to
be compensated by a rotation in the second factor of the space of superpolarizations.
Then we inserted the definition of $m(x_{21})$ and used that
$$ m(x_{21}) = w^{-1} m(y_{21}) n(z_{21}) k(t_{21})\ . $$
This factorization formula is essentially the definition of $y_{21}, z_{21}$ and $t_{21}$.
Finally we moved the Weyl element $w^{-1}$ through $m$ using that $n = w^{-1} m w$. We can now
apply the same steps to the third and fourth argument to obtain
\begin{equation}\label{eq:F4ggwggw}
F_4(m(x_i)) = \left(1 \otimes \rho_2(k(t_{21})^{-1}) \otimes 1 \otimes \rho_4(k(t_{43})^{-1})\right)
F_4\left(g_{12}(x_i),g_{12}(x_i) w^{-1}, g_{34}(x_i), g_{34}(x_i) w^{-1} \right),
\end{equation}
where we introduced the elements
$$ g_{ij}= m(x_i) n(y_{ji})\ . $$
Finally, we can use the invariance property \eqref{eq:F4leftinv} of
$F_4$ for $h = g_{12}^{-1}$ to obtain
$$ F_4(m(x_i)) = \left(1 \otimes \rho_2(k(t_{21})^{-1})
 \otimes 1 \otimes \rho_4(k(t_{43})^{-1})\right)
 F_4\left(e, w^{-1}, g(x_i), g(x_i) w^{-1}\right) \ .
$$
Here $g(x_i)$ is the element we introduced in eq.\ \eqref{eq:gxi}. Using our definition of the functional
$F$ in eq.\ \eqref{eq:FfromF4} and the relation between $F_4$ and $G_4$ we have thereby established the
lifting formula \eqref{magic-formula}.
\medskip

From the above derivation, one may deduce the following transformation properties of $g_{ij}$ and $k(t_{ji})$
under superconformal transformations
\begin{eqnarray}
g_{ij}(x^h) & = & h \, g_{ij}(x)\, k(t(x_i,h))^{-1} \ , \label{eq:gijtrafoh}\\[2mm]
k(t_{ji}^h) & = & k^{w}(t(x_i,h))\, k(t_{ji})\, k(t(x_j,h))^{-1}\ , \label{eq:ktrafoh}
\end{eqnarray}
where $k^{w} = w k w^{-1}$. Indeed these are necessary for the right hand
side of eq. \eqref{eq:F4ggwggw} to satisfy the same Ward identities as the left hand side. A complete proof
of the two transformation laws can be found in appendix A.
These two formulas will play a significant
role in the computation of the crossing factor to which we turn next.


\section{Tensor Structures and Crossing Symmetry Equations}


Having lifted the spinning four-point function $G_4$ from superspace to the
superconformal group through eq.\ \eqref{magic-formula} we can now employ
(super)group theoretic constructions to study superconformal correlators.
In the first subsection we employ a supersymmetric version of the Cartan
or KAK decomposition for superconformal groups of type I to factorize
four-point functions into the product of a tensor factor $\Theta =
\Theta(x_i)$ and a function $\Psi$ that depends on superconformal cross ratios
only.\footnote{As explained in the introduction, our group theoretic factorization
$G_4 = \Theta \Psi$ is reminiscent of the factorization $G_4 = \Omega g$ used in
most of the CFT literature. The difference between the two factorizations can be
quantified through the ratio $\Theta \Omega^{-1}$ which has a non-trivial
dependence on cross ratios. As we shall see below, $\Theta$ and $\Psi$ are
more universal than the factors $\Omega$ and $g$.} This part of our analysis
extends constructions in \cite{Buric:2019dfk} to the superconformal setting.
We can perform the factorization for different channels. The supercrossing
factor, i.e. the ratio of the corresponding tensor factors $\Theta_s$ and
$\Theta_t$ for the $s$- and $t$-channel, is studied at the end of the first
subsection. There we establish its superconformal invariance and compute it
for bosonic conformal symmetries in any dimension $d$.

At this stage, all quantities depend on fermionic variables. In particular,
the function $\Psi$ still depends on some number of nilpotent invariants.
By expanding all quantities in the Grassmann variables we construct the
crossing factor in the second subsection. This is then used to write the
crossing symmetry constraints on the independent coefficients of the
operator product expansions in terms of functions of two bosonic cross
ratios only. As shown in \cite{Buric:2019rms}, the latter may be expanded
into wave functions of some Calogero-Sutherland Hamiltonian. By collecting
all the material we have put together through our discussion of the
example $\mathfrak{g} = \mathfrak{sl}(2|1)$ we can finally calculate the
crossing factor for $\mathcal{N}=2$ superconformal field theories in
$d=1$ dimension, see the third subsection.


\subsection{Cartan coordinates, tensor and crossing factors}

We will now construct the tensor structures, starting from the lifting formula \eqref{magic-formula}.
Note that eq.\ \eqref{magic-formula} treats each of the four insertion points differently
and hence it breaks the permutation symmetry of correlators in a Euclidean theory. Different
permutations $\sigma$ of the four points are associated with different channels. We refer to
the channel that is associated with the identity permutation $\sigma = \sigma_s = \textit{id}$
as the $s$-channel. Another important case for us is the permutation $\sigma = \sigma_t = (24)$
which we call the $t$-channel.
In any case, given the choice of the channel $\sigma$, we can
extend the lifting formula \eqref{magic-formula} to become
\begin{equation} \label{eq:magic-formulasigma}
G_4(x_i) = \rho_{\sigma(2)}(k(t_{\sigma(2)\sigma(1)})^{-1})
\rho_{\sigma(4)}(k(t_{\sigma(4)\sigma(3)})^{-1}) F_\sigma(g(x_{\sigma(i)})) \ .
\end{equation}
Here, the factor $\rho_{\sigma(i)}$ acts on the $\sigma(i)^\textit{th}$ tensor factor in
the space of superpolarizations and it acts trivially on all other tensor factors.

In order to proceed we adopt a new coordinate system that we refer to as
Cartan coordinates. So far we have decomposed supergroup elements $g$ into factors
$m$, $n$ and $k$. Now we consider a different decomposition in which supergroup
elements $g$ are written as
\begin{equation} \label{eq:sCartan}
g = k_l \eta_l a \eta_r k_r \ ,
\end{equation}
where $k_{l}$ and $k_{r}$ are associated with the subgroup $K$ that is obtained through
exponentiation of rotations, dilations and R-symmetry transformations. Similarly, the
factors $\eta_l$ and $\eta_r$ are associated with fermionic generators. More specifically,
each of these factors contains half of the generators in the odd subspace of $\mathfrak{g}$. In the
following we consider Lie superalgebras $\mathfrak{g}$ of type I for which the internal symmetry
group contains a $U(1)$-factor which allows us to decompose the fermionic generators according
to the sign of the $U(1)$ R-charge. It turns out that half of the supercharges $Q$ possess
positive R-charge while the others possess negative R-charge and similarly for the super
special conformal transformations $S$. Let us agree that $\eta_l$ uses generators of
negative charge while $\eta_r$ is built from generators with positive charge. The central
factor $a = a(u_1,u_2)$, finally, depends on two bosonic coordinates $u_1,u_2$ only and
it is assumed to take the form
\begin{equation}
\label{adef}
a(u_1,u_2) = e^{\frac{u_1+u_2}{4}(P_1+K_1) -i \frac{u_1-u_2}{4}(P_2 - K_2)}\ .
\end{equation}
Let us note that a factorization of supergroup elements $g$ in the form \eqref{eq:sCartan}
is not unique. In fact, given any such factorization we can produce another factorization
of the very same form by the transformation
\begin{equation} \label{eq:gaugeB}
\left(k_l,\eta_l;k_r,\eta_r\right) \rightarrow
\left(k_l b, b^{-1} \eta_l b ; b^{-1} k_r, b^{-1} \eta_r b\right),
\end{equation}
where $b$ are elements associated with the subalgebra $\mathfrak{so}(d-2)\oplus \mathfrak{u}_r
\subset \mathfrak{k}$ and therefore commute with $a = a(u_1,u_2)$. At the same time, the elements
$b^{-1} \eta_{l/r} b$ can still be written as exponentials of fermionic generators with
negative ($l$) and positive ($r$) $U(1)$ R-charge, respectively. Hence our gauge transformation
\eqref{eq:gaugeB} respects the Cartan decomposition. Elements $b$ form the stabilizer
group $B = SO(d-2) \times U_r$ of the Cartan decomposition \eqref{eq:sCartan}. For later
use we introduce a projector $P$ by integrating $b$ over the entire stabilizer group $B$,
\begin{equation}\label{eq:Pdef}
P = \frac{1}{\text{Vol}\ B} \int_B d\mu\, b \ =
\frac{1}{\text{Vol}\ B} \int_B d\mu(\beta)\, b(\beta) \ ,
\end{equation}
where $\mu$ is the Haar measure on $B$. For pedagogical reasons we have introduced some
coordinates $\beta$ on $B$ so that the element $b$ can be written explicitly as a function
$b = b (\beta)$ on $B$ with values in $U(\mathfrak{b})$.
As we indicated, $P$ can be considered
as an element in the universal enveloping algebra $U(\mathfrak{g})$. More concretely, after
evaluation in some representation we can also think of $P$ as a matrix. By construction, this
matrix has two important properties
\begin{equation} \label{eq:Pprop}
P^2 = P \quad , \quad b P = P\ .
\end{equation}
We can verify the second equation very easily using the integral representation of $P$ and
the left invariance of the Haar measure. The first property then follows from the second one,
$b(\beta) P = P$, by performing an additional integration over $B$, since the integrand is
then constant on $B$.

In our analysis below we will apply the projector $P$ to a function $f(u_i,\theta,\bar \theta)$
that takes values in the representation space $V_{1234}$ of $K$. The latter may be considered a
carrier space for a representation of $B$ by restriction from $K$ to its subgroup $B$. The
action of $P$ on such an object $f$ is denoted by $\mathcal{P}$, i.e.
\begin{equation} \label{eq:calP}
\mathcal{P} [f(u_i,\theta,\bar \theta)]= \frac{1}{\text{Vol}\ B} \int_B d\mu\, \chi(b)
f(u_i,\theta^b,\bar\theta^b)\ ,
\end{equation}
where $\theta^b$ and $\bar \theta^b$ denote the action of $b$ on the Grassmann coordinates
$\theta$ and $\bar \theta$ and $\chi(b)$ is a shorthand for the action of $b$ on the finite
dimensional vector space $V_{1234}$ of superpolarizations,
\begin{equation} \label{eq:chi}
\chi(b)=\rho_1(b)\otimes \rho_2(w b w^{-1})\otimes \rho_3(b)\otimes \rho_4(w b w^{-1})\ .
\end{equation}
In practical computations it is convenient to make some specific choices for the
Cartan factors that remove the gauge freedom \eqref{eq:gaugeB}. Such gauge fixing
conditions are arbitrary and at the end of every calculation one has to check that
the result does not depend on them.
\medskip

Let us now apply the Cartan factorization to the argument $g(x_{\sigma(i)})$ of the
functional $F_\sigma$ in eq.\ \eqref{eq:magic-formulasigma},
\begin{equation}\label{Cartan-factors}
 g(x_{\sigma(i)}) = k_{\sigma,l}(x_i) \eta_{\sigma,l}(x_i)a_\sigma(x_i)
 \eta_{\sigma,r}(x_i) k_{\sigma,r}(x_i) \ .
\end{equation}
The formula \eqref{eq:magic-formulasigma} and covariance properties of $F_\sigma$ give
\begin{align}
G_4(x_i) & = \rho_{\sigma(2)}(k(t_{\sigma(2)\sigma(1)})^{-1})
\rho_{\sigma(4)}(k(t_{\sigma(4)\sigma(3)})^{-1}) F_\sigma(g(x_{\sigma(i)})) \nonumber \\[2mm]
& \hspace*{-4pt} = \rho_{\sigma(2)}\left(k(t_{\sigma(2)\sigma(1)})\right)^{-1} \rho_{\sigma(4)}
\left(k(t_{\sigma(4)\sigma(3)})\right)^{-1} F_\sigma(k_{\sigma,l}\eta_{\sigma,l}
a_\sigma \eta_{\sigma,r}k_{\sigma,r}) \\[2mm]
& \hspace*{-4pt} = \rho_{\sigma(1)}(k_{\sigma,l})\rho_{\sigma(2)}\left(k(t_{\sigma(2)\sigma(1)})^{-1}
k_{\sigma,l}^{w}\right)\rho_{\sigma(3)}(k_{\sigma,r}^{-1})\rho_{\sigma(4)}
\left(k(t_{\sigma(4)\sigma(3)})^{-1} (k^{-1}_{\sigma,r})^{w}\right)
F_\sigma(\eta_{\sigma,l}a_\sigma \eta_{\sigma,r}) . \nonumber
\end{align}
For simplicity, we dropped the dependence of the Cartan factors on the insertion points, i.e.\
for example $k_{\sigma,l} = k_{\sigma,l}(x_i) = k_l(x_{\sigma(i)})$. We will
discuss the concrete functional dependence on the insertion points a bit later.

Let us spell out the previous formula for the $s$- and $t$-channel.
In the\n$s$-channel one obtains\n\\begin{equation} \\label{eq:G4schannel}\nG_4(x_i) = \\rho_{1} (k_{s,l}) \\rho_{2}(k(t_{21})^{-1}\nk^{w}_{s,l}) \\rho_{3}(k_{s,r}^{-1})\n\\rho_{4}(k(t_{43})^{-1} (k^{w}_{s,r})^{-1})\n\\mathcal{P}_s F_s(\\eta_{s,l}a_s \\eta_{s,r})\\ ,\n\\end{equation}\nwhile the $t$-channel gives\n\\begin{equation} \\label{eq:G4tchannel}\nG_4(x_i) = \\rho_{1} (k_{t,l}) \\rho_{4}(k(t_{41})^{-1} k^{w}_{t,l}) \\rho_{3}(k_{t,r}^{-1})\n\\rho_{2}(k(t_{23})^{-1} (k^{w}_{t,r})^{-1}) \\mathcal{P}_t F_t(\\eta_{t,l}a_t \\eta_{t,r})\\ .\n\\end{equation}\nHere we introduced the projector $\\mathcal{P}$ that was defined in eq.\\ \\eqref{eq:calP} explicitly\nto stress that $F(\\eta_l a\\eta_r)$ takes values in the space of $B$-invariants. Roughly\nspeaking, the two factors in front of $F_{s}$ and $F_t$ are the $s$- and $t$-channel tensor\nstructures.\n\nThe ratio of these $s$- and $t$-channel tensor structures is referred to as the supercrossing\nfactor and we denote it by $\\mathcal{M}$. As we can read off the previous two formulas,\nthe supercrossing factor takes the form\n\\begin{equation} \\label{eq:crossingmatdef}\n\\mathcal{M}_{st}(x_i) = \\mathcal{P}_t \\, \\bigotimes_{i=1}^4 \\rho_i(\\kappa_i) \\, \\mathcal{P}_s\n\\ , \\end{equation}\nwhere the four elements $\\kappa_i$ are given by\n \\begin{eqnarray}\n\\kappa_1 = k_{t,l}^{-1}k_{s,l} \\quad & , & \\quad\n\\kappa_{2} = k^{w}_{t,r} k(t_{23})k(t_{21})^{-1} k^{w}_{s,l} \\\\[2mm]\n\\kappa_{3} = k_{t,r}k_{s,r}^{-1} \\quad & , & \\quad\n\\kappa_{4} = (k^{w}_{t,l})^{-1} k(t_{41})k(t_{43})^{-1} (k^{w}_{s,r})^{-1} \\ .\n\\end{eqnarray}\nIt is important to stress that the two projectors in eq.\\ \\eqref{eq:crossingmatdef} make the supercrossing\nfactor independent of any gauge fixing conditions for our gauge symmetry \\eqref{eq:gaugeB}. In fact,\none can easily check using eq.\\ \\eqref{eq:Pprop} that any gauge transformation with some element $b$\nis absorbed by the projectors.\n\nOur main goal is to compute the matrix $\\mathcal{M}$ explicitly. Note that it depends on the\ninsertion points $x_i$ in superspace through the factors $k(t_{ij}) = k(t(x_{ij}))$,\nwhich were defined in eq.\\ \\eqref{eq:factorizationij}, as well as through the factors $k_{l,r}$\nin the Cartan decomposition \\eqref{eq:sCartan} of the supergroup elements $g_{s,t}(x_i)$. In\norder to compute the matrix $\\mathcal{M}_{st}$ we first show that it is invariant under superconformal\ntransformations, i.e. $\\mathcal{M}_{st}(x_i^h) = \\mathcal{M}_{st}(x_i)$. This then implies that it is a\nfunction of cross ratios only and so it can be computed after moving the insertion points into\nspecial positions.\n\nTo see that $\\mathcal{M}_{st}$ is a conformal invariant we must study the dependence of the four\ntensor components one after another. We have already stated the transformation behavior of the\nfactors $k(t_{ij})$ at the end of the previous section, see eq.\\ \\eqref{eq:ktrafoh}. What we need\nto study now is the transformation behavior of the factors $k_{l,r}$ in the Cartan decomposition\n\\eqref{eq:sCartan}. To this end let us first note that, according to eq.\\ 
\\eqref{eq:gijtrafoh},\nthe supergroup elements $g_\\sigma(x_i)$ transform as\n\\begin{equation}\ng_\\sigma(x_i^h) = k(t(x_{\\sigma(1)},h))\\, g_\\sigma(x_i)\\, k(t(x_{\\sigma(3)},h))^{-1}\\ .\n\\end{equation}\nBecause of the gauge freedom of the Cartan decomposition which we described in eq.\\ \\eqref{eq:gaugeB},\nknowing the behavior of $g_\\sigma(x_i)$ under conformal transformations does not allow us to uniquely\ndetermine the transformation law of the factors, but we can conclude that\n\\begin{equation}\nk_{\\sigma,l}(x^h_i) = k(t(x_{\\sigma(1)},h)) k_{\\sigma,l}(x_i) b_\\sigma(x_i,h) \\quad , \\quad\nk_{\\sigma,r}(x^h_i) = b^{-1}_\\sigma(x_i,h) k_{\\sigma,r}(x_i) k(t(x_{\\sigma(3)},h))^{-1} \\\n\\end{equation}\nfor some factor $b$ that may depend on the channel, the superspace insertion points $x_i$\nand the superconformal transformation $h$, yet must be the same for the left and right\nfactors $k_l$ and $k_r$. For the case of $s$- and $t$-channels, these become\n\\begin{equation}\nk_{s\/t,l}(x^h_i) = k(t(x_{1},h)) k_{s\/t,l} b_{s\/t}(x_i,h)\\quad , \\quad\nk_{s\/t,r}(x^h_i) = b_{s\/t}^{-1}(x_i,h) k_{s\/t,r} k(t(x_{3},h))^{-1} \\ .\n\\end{equation}\nWith these transformation laws it is now easy to verify that all four tensor components\n$\\kappa_i$ of the crossing factor $\\mathcal{M}$ are indeed invariant under superconformal\ntransformations, up to gauge transformations, i.e.\n\\begin{equation}\n\\kappa_i(x^h_k) = b^{-1}_t(x_k,h)\\, \\kappa_i(x_k)\\, b_s(x_k,h) \\quad , \\quad\n\\kappa_j(x^h_k) = w b^{-1}_t(x_k,h) w^{-1}\\, \\kappa_j(x_k)\\, w b_s(x_k,h) w^{-1}\\ ,\n\\end{equation}\nwhere $i=1,3$ and $j=2,4$. To get the last two relations one employs the formula for\n$k(t^h_{ji})$ given in eq.\\ \\eqref{eq:ktrafoh}. Using the definition \\eqref{eq:calP}\nof the projectors $\\mathcal{P}_s =\\mathcal{P}_t$ and the property \\eqref{eq:Pprop} of\n$P \\in U(\\mathfrak{b})$ we see that $\\mathcal{M}_{st}(x_i)$ is indeed invariant under\nconformal transformations.\n\\medskip\n\nThe analysis we have performed in this section holds for conformal and superconformal\nsymmetries alike. It is actually quite instructive to evaluate the final formula\n\\eqref{eq:crossingmatdef} for the crossing factor for spinning correlators in bosonic\nconformal field theories. In this case it is in fact rather easy to obtain $\\mathcal{M}_{st}$\nsince we can effectively reduce the problem to one on the 2-dimensional conformal group. We\nwill deviate from previous notations and use $G$ to denote the bosonic conformal group\n$\\textit{SO}(d+1,1)$ and assume $d>2$.\n\nSince the crossing factor is conformally invariant, in computing $\\mathcal{M}(u,v)$ we may assume\nthat $x_i$ are any points that give the correct cross ratios $u$ and $v$. In particular,\nall points can be assumed to lie in the 2-dimensional plane $P$ that is spanned by the\nfirst two unit vectors $e_1,e_2$ of the $d$-dimensional space $\\mathbb{R}^d$. In this\ncase, the element $g_\\sigma(x_i)$ is seen to belong to the conformal group of the plane,\ni.e.\\ $g_\\sigma(x_i) \\in G_P=SO(3,1)\\subset G$. Within this group $g_\\sigma(x_i)$ admits\na unique Cartan decomposition, which can also serve as its Cartan decomposition in $G$,\nbearing in mind that the torus $A\\subset G_P \\subset G$ of the Cartan decomposition of\n$G$ is actually a subgroup of $G_P$. Put another way, the Cartan decomposition of\n$G_P$ defines a particular gauge fixing for the Cartan factors of $g(x_i)$. 
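To make this a bit more explicit, let us note (as a purely illustrative rewriting of eq.\\ \\eqref{eq:sCartan} for this bosonic setting, with the fermionic factors $\\eta_{l\/r}$ absent, and not as additional input) that for such in-plane configurations the decomposition takes the form\n\\begin{equation}\ng_\\sigma(x_i) \\, = \\, k_{\\sigma,l}\\, a_\\sigma(u_1,u_2)\\, k_{\\sigma,r} \\quad \\textit{with} \\quad k_{\\sigma,l},\\, k_{\\sigma,r} \\in SO(1,1)\\times SO(2) \\subset K\\ ,\n\\end{equation}\nwhere $a_\\sigma(u_1,u_2)$ is of the same form as in eq.\\ \\eqref{adef}. 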
Note that all\nrelevant rotations are generated by the element $M_{12}$, which commutes with the Weyl\ninversion $w$ when $d>2$. Hence we conclude that the factors $\\kappa_i$ that arise\nin the transition from $s$- to $t$-channel must be of the form\n\\begin{equation}\\label{form-of-kappas}\n \\kappa_i = e^{\\gamma_i D} e^{\\varphi_i M_{12}} \\ ,\n\\end{equation}\nfor some functions $\\gamma_i$ and $\\varphi_i$ that depend on the insertion points $x_i$\nof the four fields through their two cross ratios. Having determined the general from of\n$\\kappa_i$, we can find the undetermined coefficients by a direct calculation. Since we\ncan perform the calculation in any conformal frame we set for convenience,\n\\begin{align}\\label{point-configurations-1}\n & x_1 = \\frac{\\cosh^2\\frac{u_1}{2}+\\cosh^2\\frac{u_2}{2}}{2\\cosh^2\\frac{u_1}{2}\\cosh^2\\frac{u_2}{2}} e_1 - i\n \\frac{\\cosh^2\\frac{u_1}{2}-\\cosh^2\\frac{u_2}{2}}{2\\cosh^2\\frac{u_1}{2}\n \\cosh^2\\frac{u_2}{2}} e_2\\ ,\\ x_2 = 0\\ ,\\ x_3 = e_1\\ ,\\ x_4 = \\infty e_1\\ .\n\\end{align}\nThen it follows\n\\begin{equation}\n \\kappa_1 = \\kappa_3 = e^{\\gamma D + \\alpha M_{12}}\\, , \\quad\n \\kappa_2 = \\kappa_4 = e^{\\gamma D - \\alpha M_{12}}\\ ,\n\\end{equation}\nwhere\n\\begin{equation}\n e^{4\\gamma} = \\frac{x_{12}^2 x_{34}^2}{x_{14}^2 x_{23}^2}\\, ,\\quad\n e^{2i\\alpha} = \\frac{\\cosh{\\frac{u_1}{2}}}{\\cosh{\\frac{u_2}{2}}}\\ .\n\\end{equation}\nTo complete this description let us also quote from \\cite{Buric:2019dfk} that\n\\begin{eqnarray}\n\\label{eq:uz}\ne^{u_i} & = & 1 - \\frac{2}{z_i}\\left(1+\\sqrt{1-z_i}\\right)\\ , \\\\[2mm]\n\\textit{where} \\quad u = z_1 z_2 & = & \\frac{x_{12}^2x_{34}^2}{x_{13}^2x_{24}^2}\n\\quad , \\quad v = (1-z_1)(1-z_2) = \\frac{x_{14}^2x_{23}^2}{x_{13}^2 x_{24}^2}\\ .\n\\end{eqnarray}\nLet us note that $\\mathcal{M}$ was originally defined using representations of $K = SO(1,1)\n\\times SO(d)$, but is computed using only representation theory of $SO(1,1)\\times SO(2)$.\n\\medskip\n\n\\noindent\n{\\bf Example:} To make the last point manifest, let us give some more details for\nconformal theories in $d=3$ dimensions. Let us decompose the factors $k_l = d_l r_l$\nand $k_r = d_r r_r$ into dilations $d_{l\/r}$ and rotations $r_{l\/r}$. Following\n\\cite{Schomerus:2016epl,Schomerus:2017eny,Buric:2019dfk} we parametrize the elements\n$r$ of the 3-dimensional rotation group through Euler angles,\n\\begin{equation}\n r(\\phi,\\theta,\\psi) = e^{-\\phi M_{12}} e^{-\\theta M_{23}} e^{-\\psi M_{12}} \\ .\n\\end{equation}\nWith this choice of coordinates, the elements $\\kappa_i$ have $\\phi =\\pm\\alpha$ and\n$\\theta = \\psi = 0$. Next let us recall that matrix elements of the spin-$j$\nrepresentation of $SU(2)$ read\n\\begin{equation}\nt^j_{m n} (\\phi,\\theta,\\psi) = \\langle j,m| g(\\phi,\\theta,\\psi) | j,n\\rangle =\ne^{-i(m\\phi+n\\psi)} d^j_{m n}(\\theta) \\ .\n\\end{equation}\nHere, the function $d^j_{m n}$ is known as Wigner's $d$-function. It is expressed\nin terms of Jacobi polynomials $P^{(\\alpha,\\beta)}_n$ as\n\\begin{equation}\nd^j_{m n}(\\theta) = i^{m-n} \\sqrt{\\frac{(j+m)!(j-m)!}{(j+n)!(j-n)!}}\n\\Big(\\sin\\frac{\\theta}{2}\\Big)^{m-n} \\Big(\\cos\\frac{\\theta}{2}\\Big)^{m+n}P^{(m-n,m+n)}_{j-m}(\\cos\\theta) \\ .\n\\end{equation}\nFor $\\theta=0$, the only non-zero matrix elements are those with $m=n$. 
Furthermore\n\\begin{equation}\nt^j_{n n}(\\pm\\alpha,0,0) = e^{\\mp in\\alpha} P^{(0,2n)}_{j-n}(1) =\ne^{\\mp in\\alpha} = \\left(\\frac{\\cosh\\frac{u_1}{2}}{\\cosh\\frac{u_2}{2}}\\right)^{\\mp\\frac{n}{2}} \\ .\n\\end{equation}\nThe stabilizer group $B = SO(d-2)$ for a bosonic conformal field theory in $d=3$\ndimensions is trivial, and so is the projector $P$. Putting all this together we conclude\nthat the crossing factor reads\n\\begin{equation}\n (\\mathcal{M}_{st})^{ijkl}_{pqrs} = \\left(\\frac{u}{v}\\right)^{-\\frac14\\sum\\Delta_i}\n \\left(\\frac{\\cosh\\frac{u_1}{2}}{\\cosh\\frac{u_2}{2}}\\right)^{\\frac12(i+k-j-l)}\n \\delta^i_p \\delta^j_q \\delta^k_r \\delta^l_s\\ ,\n\\end{equation}\nwhere $u,v$ are the usual $s$-channel cross ratios and $u_i = u_i(u,v)$ are functions\nthereof, see eq.\\ \\eqref{eq:uz}. The first factor in this result for the spinning crossing\nfactor is well known from scalar correlators. The corrections it receives for spinning\ncorrelators are diagonal in the space of polarizations but depend on the eigenvalues\nof the generator $J_z$ for rotations around one particular direction $e_z$.\n\n\\subsection{Blocks and crossing symmetry equation}\n\nIn the case of bosonic conformal theories, the crossing factor we have just computed\nalong with spinning conformal blocks is all it takes to write down crossing symmetry\nconstraints. For superconformal symmetries of type I, some more work is needed in order\nto spell out these equations. We describe the additional\nelements in this subsection before we illustrate the entire formalism at the\nexample of $\\mathcal{N}=2$ superconformal theories in $d=1$ dimension in the\nnext one. Along the way we also review the construction of conformal blocks from\n\\cite{Buric:2019rms}. In order not to clutter the presentation too much, the first\npart of our discussion focuses on the $s$-channel. Other channels can be dealt with\nsimilarly.\n\nIn Subsection 4.1 we have shown that the four-point function of primary fields in\narbitrary representations of a conformal superalgebra of type I can be written as\n\\begin{equation} \\label{eq:GThetaPsi}\nG_4(x_i) = \\Theta_{s}(x_i) \\Psi_s(u_i;\\theta,\\bar \\theta)\\ ,\n\\end{equation}\nwhere the supertensor factor $\\Theta_{s}(x_i)$ depends on the insertion points of\nthe fields through\n\\begin{equation} \\label{eq:defOmegas}\n\\Theta_{s} (x_i) = \\omega^{-1\/2}(u_1,u_2)\n\\rho_{1} (k_{s,l}) \\rho_{2}(k(t_{21})^{-1}k^{w}_{s,l})\n\\rho_{3}(k_{s,r}^{-1}) \\rho_{4}(k(t_{43})^{-1} (k^{w}_{s,r})^{-1}) P_s\\ ,\n\\end{equation}\nand $\\Psi_s$ is a function of the cross ratios, including all nilpotent\/fermionic superconformal\ninvariants, that is given by\n\\begin{equation} \\label{eq:deffs}\n\\Psi_s(u_i;\\theta,\\bar \\theta) = \\omega^{1\/2}(u_1,u_2) F_s(\\eta_{s,l}a_s \\eta_{s,r})\\ .\n\\end{equation}\nIn splitting eq.\\ \\eqref{eq:G4schannel} into a product of a supertensor factor and a\nfunction $\\Psi_s$ of the cross ratios we have included a scalar factor\n\\begin{equation}\n\\omega(u_1,u_2) = 4(-1)^{2-d}(\\sinh\\frac{u_1}{2} \\sinh\\frac{u_2}{2})^{2d-2}\\coth\\frac{u_1}{2}\n\\coth\\frac{u_2}{2} |\\sinh^{-2}\\frac{u_1}{2}-\\sinh^{-2}\\frac{u_2}{2}|^{d-2}\\ ,\n\\end{equation}\nwhich depends on the bosonic cross ratios $u_1,u_2$ only and may in fact be interpreted as\nthe volume of \\(K\\times K\\) bosonic orbits on the conformal group, see \\cite{Schomerus:2017eny}\nfor details. 
The factor $\\omega$ is conventional, but has some advantages that will be pointed\nout below.\n\nLet us now further analyse the factor $F_s$ in formula \\eqref{eq:deffs} by expanding it in\nthe fermionic variables. The Grassmann variables $\\theta$ that multiply the odd generators\nof negative $U(1)$ R-charge in the exponent of $\\eta_l$ generate an algebra $\\Lambda_\\theta$\nwhile those variables $\\bar\\theta$ that multiply the positively charged odd generators in the\nexponent of $\\eta_r$ give rise to a Grassmann algebra $\\Lambda_{\\bar \\theta}$. Before the\nexpansion, the wave functions $\\Psi_s(u_1,u_2;\\theta,\\bar\\theta)$ are vector valued, with\ntwo copies of the bosonic subgroup $K$ acting in the image of $F$.\nThe first copy, which we refer to as $K_l$ acts on $V_{(12)} = V_1 \\otimes V'_2$. Except\nfor the conjugation with the Weyl inversion in the second tensor component, one may think\nof $V_{(12)}$ as the space of superpolarizations for the first two fields. Similarly, the\nsecond copy $K_r$ acts on $V_{(34)} = V_3 \\otimes V'_4$. When we perform the fermionic\nexpansion, the coefficients sit in the representation spaces\n\\begin{equation}\nV_l = V_{(12)} \\otimes \\Lambda_\\theta \\ , \\quad V_r = V_{(34)} \\otimes\n\\Lambda_{\\bar \\theta} \\\n\\end{equation}\nof $K_l$ and $K_r$. Note that the bosonic subgroup $K$ acts on the two Grassmann algebras\nso that indeed both spaces form a representation of $K$. We also refer to the spaces\n$V_l$ and $V_r$ as spaces of polarizations, as opposed to $V_{(12)}$ and $V_{(34)}$ which\nwe have called the spaces of superpolarizations.\n\nAs we explained before, the covariance properties of $F$ imply that $\\Psi$ takes\nvalues in the subspace of $B$-invariants, i.e. in the space\n$$ \\mathcal{T} = \\left(V_{l} \\otimes V_r \\right)^B \\ , $$\nwhich we also refer to as the space of tensor structures. One may think of its\nelements as $B$-invariant elements in the space of function of the Grassmann variables\n$\\theta$ and $\\bar \\theta$ that take values the space of superpolarizations. Let us fix\nsome basis of elements $\\omega^I$ in $\\mathcal{T}$ and denote the dual basis by $\\hat{\\omega}_I$.\nWe can collect these elements into two objects\n\\begin{equation} \\label{eq:vdef}\nv_s(x_i) = (\\omega^1(x_i), \\dots, \\omega^T(x_i)) \\, , \\quad \\ \\hat v_s(x_i) = (\\hat \\omega_1(x_i),\n\\dots, \\hat \\omega_T(x_i)) \\ ,\n\\end{equation}\nwhere $T = \\textit{dim} \\mathcal{T}$ is the number of tensor structures. One may think of\n$v_s$ as a rectangular matrix from the space $\\mathcal{T}$ of tensor structures to the space\n$V_{(12)} \\otimes V_{(34)}$ with matrix elements in the Grassmann algebra $\\Lambda_\\theta \\otimes\n\\Lambda_{\\bar\\theta} = \\Lambda \\mathfrak{g}_{\\bar 1}$. Through the Cartan decomposition\nof $g(x_i)$, the Grassmann variables $\\theta$ and $\\bar \\theta$ are concrete functions\non the superspace $\\mathcal{M}^\\otimes_4$. We have displayed this dependence on the\nsupercoordinates $x_i$ explicitly.\n\nThe coefficients in the fermionic expansion of $\\Psi$ are functions $\\psi_I$ of the two\nbosonic cross ratios $u_1,u_2$ that take values in the space of $V_{(12)} \\otimes V_{(34)}$\nof superpolarizations. 
We can write this in the form\n\\begin{equation} \\label{Psivpsi}\n\\Psi(u_1,u_2;\\theta,\\bar \\theta) = v_s(x_i) \\cdot \\psi(u_1,u_2) = v_s(x_i) P_s\n\\psi(u_1,u_2) \\ .\n\\end{equation}\nPutting eqs.\\ \\eqref{eq:GThetaPsi} and \\eqref{Psivpsi} together we can now write the\nfour-point function as\n\\begin{equation}\nG_4(x_i) = \\Theta_s(x_i) v_s(x_i) \\cdot \\psi(u_1,u_2) \\ .\n\\end{equation}\nWe want to expand $G_4$ into superblocks, i.e. eigenfunctions of the super Casimir\noperator. The latter turns out to take a particularly simple form when evaluated on\n$\\psi(u_1,u_2)$. As we have shown in \\cite{Buric:2019rms}, one finds that\n\\begin{equation} \\label{eq:sCasimir}\n\\textit{Cas}_s G_4(x_i) = \\Theta_s(x_i) v_s(x_i)\\cdot (H^{V_l,V_r}_0 + A) \\psi(u_1,u_2)\\ .\n\\end{equation}\nHere $H_0$ is the spinning Calogero-Sutherland Hamiltonian for bosonic blocks, i.e.\n$H_0$ takes the form\n\\begin{equation}\nH_0 = - \\frac{\\partial^2}{\\partial u_1^2} - \\frac{\\partial^2}{\\partial u_2^2} +\nV(u_1,u_2)\\ ,\n\\end{equation}\nwhere $V(u_1,u_2)$ is a potential that takes values in the space of $T\\times T$\nmatrices. The precise form of the potential depends on the pair $(V_l,V_r)$ of\nrepresentations of $K$, but it is the same one obtains for the spinning Casimir\noperator of the bosonic conformal algebra in Calogero-Sutherland gauge. In\n$d=3,4$ dimensions such matrix potentials were worked out explicitly in\n\\cite{Schomerus:2016epl,Schomerus:2017eny}. The second term $A$ is a matrix\nvalued potential that was shown to be nilpotent; its precise form is remarkably\nsimple, see \\cite{Buric:2019rms}.\n\nThe eigenfunctions of the Hamiltonian $H_0 = H_0^{V_l,V_r}$ we have just described\nwill be denoted by $\\psi_0(\\lambda_i;u_i) = \\psi^{V_l,V_r}_0(\\lambda_i,u_i)$. Here\n$\\lambda_i$ denote the eigenvalues of the (second and higher order) Hamiltonians\nwhich are directly related to the (spin and weight) quantum numbers of the intermediate\nfields in the conformal field theory. The functions $\\psi_0(u_i)$\nare well studied, and explicit expressions exist at least in dimension $d \\leq 4$, see in\nparticular \\cite{Echeverri:2016dun}. Eigenfunctions of the full Hamiltonian $\\mathcal{H}$\nwill be denoted by $\\psi(\\lambda_i;u_i)$. Nilpotency of $A$ guarantees that quantum mechanical\nperturbation theory truncates at some order $N-1\\leq\\text{dim}\\mathfrak{g}_+$, so that we\ncan obtain exact results by summing just a few orders of the perturbative expansion. It\nturns out that, at any order of the expansion, the perturbation may be evaluated explicitly\nwith some input from the\nrepresentation theory of $SO(d+2)$. This results in expressions for superconformal blocks as\nfinite linear combinations of spinning bosonic blocks. In this sense our results provide\na complete solution of the Casimir equations for type I superconformal symmetry and in\nparticular for 4-dimensional conformal field theories with any number $\\mathcal{N}$ of\nsupersymmetries.\n\nHere we have described the Casimir equation and its solution for the $s$-channel but it\nis clear that similar discussions apply to all channels. Reinstating the subscripts $s$\nand $t$ we end up with blocks $\\psi_s(\\lambda_i;u^s_i)$ and $\\psi_t (\\lambda_i;u^t_i)$.\nThe eigenvalues $\\lambda_i = \\lambda_i(\\mathcal{O})$ are related to the quantum numbers\n(weight, spin, $R$ charges) of the intermediate supermultiplets $\\mathcal{O}$. 
Let us\nalso stress that these blocks $\\psi$ are multi-component objects with $T$ components\n$\\psi^I, I=1, \\dots, T$ labeled by a basis of four-point tensor structures. For each\neigenvalue one can actually find $T$ independent solutions which are usually labeled\nby pairs $(a,b)=ab$ of three-point tensor structures for the relevant operators\nproducts. Consequently, the blocks $\\psi = (\\psi^{I,ab})$ carry two sets of labels,\nan index $I$ running over four point tensor structures and an index $ab$ that enumerates\npairs of three point structures. The arguments $u_i^{s\/t}$ of the blocks are functions\non superspace that are invariant under superconformal transformations. They are related\nby an exchange of the labels $2$ and $4$ and we can express one in terms of the other.\nEquating the $s$- and $t$-channel expansion of the four-point function $G_4$ one finds\nthat\n\\begin{equation}\\label{eq:crossing}\n\\sum_I\\sum_{\\mathcal{O}} \\lambda_{12\\mathcal{O},a}\n \\lambda_{34\\mathcal{O},b} M^{JI}_{st}(u^s_i)\n\\psi^{I,ab}_s(\\lambda_i(\\mathcal{O});u^s_i)\n=\n\\sum_{\\mathcal{O}} \\lambda_{14\\mathcal{O},a} \\lambda_{23\\mathcal{O},b}\n\\psi^{J,ab}_t(\\lambda_i(\\mathcal{O});u^t_i)\\ ,\n\\end{equation}\nwhere the indexes $a,b$ and $I,J$ numerate three and four-point tensor structures,\nrespectively, and the crossing factor $M_{st}= M_{st}(u_i)$ is given by\n\\begin{equation} \\label{eq:crossingmatrix}\nM_{st} = \\hat v_t(x_i)^t \\sqrt{\\frac{\\omega(u^t_i)}{\\omega(u^s_i)}}\n\\mathcal{M}_{st}(u_i,\\theta_s,\\bar \\theta_s)\nv_s(x_i)\\ .\n\\end{equation}\nNote that the matrix elements of $M_{st}$ depend on the bosonic cross ratios\nonly. The summation in eq.\\ \\eqref{eq:crossing} runs over all superprimary\nfields $\\mathcal{O}$ in the theory. Here we have expressed the crossing factor\n$\\mathcal{M}_{st}(x_i)$ which we defined in eq.\\ \\eqref{eq:crossingmatdef}\nin terms of the $s$-channel invariants and we think of the $t$-channel\ninvariants on the right hand side as functions of the $s$-channel ones.\nPractically, it is easier to relate the $t$- and $s$-channel invariants\n$u^t_i$ and $u^s_i$ to the usual bosonic cross ratios and expand on both \nsides in the nilpotent invariants. \n\nThe additional factors $v_s$ and $v_t$ express the supercrossing factor\n$\\mathcal{M}$ in terms of its action in the space $\\mathcal{T}$ of tensor\nstructures. Let us note that all three factors that appear in eq.\\\n\\eqref{eq:crossingmatrix} are well defined and straightforward to\ncompute explicitly, even though the computations can be a bit cumbersome.\nThe only additional information one then needs in order to evaluate the\ncrossing symmetry constraint \\eqref{eq:crossing} is the relation\nbetween the bosonic cross ratios $u^s_i$ and $u^t_i$ in the two\ndifferent channels. These are not difficult to determine from the\nCartan decomposition. Let us stress, however, that the relations\nbetween $u^s_i$ and $u^t_i$ involve fermionic invariants so that\nthere is an additional fermionic Taylor expansion to be performed\non the right hand side of eq.\\ \\eqref{eq:crossing} when we\nexpress $u^t_i$ in terms of $u^s_i$. We will illustrate all this\nnow in the case of $\\mathfrak{g} = \\mathfrak{sl}(2|1)$.\n\n\n\n\\subsection{Illustration for 1-dimensional superconformal algebra}\n\nWe can now put all the above together and compute the crossing factor between the $s-$ and the $t$-\nchannel for the $\\mathcal{N} = 2$ superconformal algebra in one dimension. 
To this end, the first step\nis to find the group elements $g_s(x_i)$ and $g_t(x_i)$ which appear in the argument of the covariant\nfunction $F$. In turn, this requires the supergroup elements $m(x_i)$, see eq.\\ \\eqref{eq:m-1d}, and\nthe Weyl element \\eqref{eq:wmatrix}. Then one computes the products $m(x_j) m(x_i)$ and $w\nm(x)$ to construct the variables $x_{ij}$ and the action of the Weyl inversion on superspace. For the\nexample at hand this was carried out in subsection 3.2.\n\nThese calculations provide all the input that is needed to determine $g_s(x_i)$. The supergroup elements\n$g_t(x_i)$ for the $t$-channel are obtained by exchanging the labels $2$ and $4$. At this point, $g_s$\nand $g_t$ depend on four sets of superspace variables, i.e. they are $3\\times3$ matrices whose elements\nare functions in all the $u_i,\\theta_i,\\bar\\theta_i$ for $i=1, \\dots, 4$. Since the crossing factor is\na superconformal invariant, we can apply superconformal transformations to gauge fix the coordinates of\nthe four insertion points. The following choice turns out to be convenient\n\\begin{equation} \\label{eq:xigauge}\n x_1 = (x,\\theta_1,\\bar\\theta_1),\\ x_2 = (0,0,0),\\ x_3 =\n (1,\\theta_3,\\bar\\theta_3),\\ x_4 = (\\infty,0,0)\\ .\n\\end{equation}\nWith this gauge choice, the entries of the matrices $g_s(x_i)$ and $g_t(x_i)$ depend on the bosonic\ncoordinate $x$ and the four Grassmann variables $\\theta_{1,3}$ and $\\bar \\theta_{1,3}$ only.\n\\medskip\n\nIn the second step we have to find the Cartan decomposition for both families $g_s$ and $g_t$. For our\n1-dimensional theory, the Cartan coordinates are introduced as\n\\begin{gather}\ng=e^{\\kappa R}e^{\\lambda_l D}e^{\\bar q Q_-+\\bar s S_-}e^{\\frac{u}{2}(P+K)}e^{qQ_++sS_+}e^{\\lambda_r D}\\ .\n\\end{gather}\n\nThis agrees with the general prescription \\eqref{eq:sCartan}, except that the torus of\nelements $a$ is parametrized by a single variable $u$ in this case. Through straightforward\nmanipulations of supermatrices one finds the following expressions for the Cartan coordinates\nof $g_s$ and $g_t$ in our gauge \\eqref{eq:xigauge}. 
For the bosonic Cartan coordinates in\n$s$-channel one has\n\\begin{align}\\label{eq:kls}\n & \\cosh^2 \\frac{u_s}{2} = \\frac1x \\Big( 1 - \\frac12\\theta_3\\bar\\theta_3 -\\frac{\\theta_1\\bar\\theta_1}{2x} +\n\\frac{\\theta_1\\bar\\theta_3}{x} +\\frac{\\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3}{4x} \\Big)\\ ,\\quad e^{-2\\kappa_s} = 1 +\n\\frac{\\theta_1}{x}(\\bar\\theta_1 - \\bar\\theta_3)\\ ,\\\\[2mm]\n& e^{\\lambda_{s,l}-\\lambda_{s,r}} = \\Big(1-x-\\frac12\\theta_1\\bar\\theta_1-\\frac12\\theta_3\\bar\\theta_3+\n\\theta_1\\bar\\theta_3\\Big) \\Big(x-\\frac12\\theta_1\\bar\\theta_1\\Big)\\ ,\\label{eq:lsm}\\\\[2mm]\n& e^{\\lambda_{s,l}+\\lambda_{s,r}} = \\Big(1+\\frac12\\theta_3\\bar\\theta_3\\Big)\\Big(x-\\frac12\\theta_1\\bar\\theta_1\\Big)\\ ,\n\\label{eq:lsp}\n\\end{align}\nwhile in the $t$-channel these coordinates read\n\\begin{align}\\label{eq:klt}\n& \\cosh^2 \\frac{u_t}{2} = x\\Big( 1 + \\frac12\\theta_3\\bar\\theta_3 + \\frac{\\theta_1\\bar\\theta_1}{2x}\n- \\theta_1\\bar\\theta_3 + \\frac{\\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3}{4x}\\Big)\\ ,\\quad\ne^{-2\\kappa_t} = 1 + \\bar\\theta_3 (\\theta_3 - \\theta_1)\\ , \\\\[2mm]\n& e^{\\lambda_{t,l}-\\lambda_{t,r}} = - \\Big(1-x-\\frac12\\theta_1\\bar\\theta_1-\\frac12\\theta_3\\bar\\theta_3+\n\\theta_1\\bar\\theta_3\\Big)\\Big(1+\\frac12\\theta_3\\bar\\theta_3\\Big)\\ ,\\label{eq:ltm} \\\\[2mm]\n& e^{\\lambda_{t,l}+\\lambda_{t,r}} = \\Big(1-\\frac12\\theta_3\\bar\\theta_3\\Big)\n\\Big(x+\\frac12\\theta_1\\bar\\theta_1\\Big)\\ .\\label{eq:ltp}\n\\end{align}\nIn order to extract $\\sinh(u_t\/2)$ from the first and $\\exp \\lambda_{t,l}\\ ,\\ \\exp\\lambda_{t,r}$ from the\nlast two lines one has to take some square roots. Here we use the following convention\n\\begin{equation}\ne^{\\lambda_{t,r}}=i (\\frac{x}{1-x})^\\frac{1}{2}- \\dots\\ , \\quad\ne^{\\lambda_{t,l}}=-i \\sqrt{x(1-x)}- \\dots \\ , \\quad\n\\sinh(u_t\/2)= i\\sqrt{1-x}-\\dots \\ .\n\\end{equation}\nThe fermionic Cartan coordinates, on the other hand, are given by the following expressions\n\\begin{align} \\label{eq:sqs}\n & q_s = e^{\\frac12\\lambda_{s,r}}\\Big(\\theta_3 -\\frac{\\theta_1}{x} \\Big( 1-\\frac12\\theta_3\\bar\\theta_3\\Big)\\Big)\\ ,\n \\quad s_s = e^{-\\frac12\\lambda_{s,r}}\\frac{\\theta_1}{x}\\ ,\\\\[2mm]\n & {\\bar q_s = e^{-\\frac12\\lambda_{s,l}}(\\bar\\theta_3 -\\bar\\theta_1)\\ ,\n \\quad \\bar s_s = -e^{\\frac12\\lambda_{s,l}}\\frac{\\bar\\theta_3}{x}}\\ ,\\\\[2mm]\n & q_t = e^{\\frac12\\lambda_{t,r}}(\\theta_3 - \\theta_1)\\ ,\n \\quad s_t = -e^{-\\frac12\\lambda_{t,r}}\\theta_1\\Big(1-\\frac12\\theta_3\\bar\\theta_3\\Big)\\ ,\\\\[2mm]\n & {\\bar q_t = -e^{-\\frac12\\lambda_{t,l}}\\Big(\\bar\\theta_1 - \\bar\\theta_3\\Big(x+\\frac12\\theta_3\\bar\\theta_1\\Big)\\Big)}\\ ,\n \\quad \\bar s_t = e^{\\frac12\\lambda_{t,l}}\\bar\\theta_3\\ . \\label{eq:sqt}\n\\end{align}\nThis concludes the second step of the construction, namely the determination of the Cartan coordinates\nin the two channels.\n\\medskip\n\nAs a third step we want to compute the supercrossing factor $\\mathcal{M}_{st}$ between the two channels\nthat was defined in eq.\\ \\eqref{eq:crossingmatdef}. Note that for our superconformal algebra the group\n$K$ is generated by dilations $D$ and R-symmetry transformations $R$ only. It is abelian and hence\nall its irreducible representations are 1-dimensional. Therefore, the supercrossing factor $\\mathcal{M}_{st}$\nconsists just of a single function in the variables $x, \\theta_{1,3}$ and $\\bar \\theta_{1,3}$. 
It depends,\nof course, on the choice of representations for the external superfields. We shall pick four such\nrepresentations $(\\Delta_i,r_i)$, corresponding to the conformal weight and the R-charges of the\nsuperprimaries, as before. The associated representations $\\rho$ of $K = \\textit{SO}(1,1) \\times U(1)$\nwere introduced in eq.\\ \\eqref{eq:rho-1d}. Note that in our gauge \\eqref{eq:xigauge} the factors\n$k(t_{41})$ and $k(t_{43})$ are trivial. Therefore, we have\n\\begin{align}\n & \\kappa_1 = e^{(\\lambda_{s,l}-\\lambda_{t,l})D + (\\kappa_s - \\kappa_t) R}\\ ,\\quad\n \\kappa_4 = e^{(\\lambda_{t,l}+\\lambda_{s,r})D-\\kappa_t R},\\\\[2mm]\n & \\kappa_3 = e^{(\\lambda_{t,r}-\\lambda_{s,r})D}\\ ,\\quad \\kappa_2 =\n e^{-(\\lambda_{t,r}+\\lambda_{s,l}-\\log x^2)D + (\\kappa_s - \\frac12\\theta_3\\bar\\theta_3 +\n \\frac{\\theta_1\\bar\\theta_1}{2x})R}.\n\\end{align}\nThe matrix ${\\mathcal{M}}$ can be written in terms of superspace coordinates by inserting our\nexplicit formulas \\eqref{eq:kls}-\\eqref{eq:ltp} for the Cartan coordinates in the $s$- and\n$t$-channel. This gives\n\\begin{align}\n \\mathcal{M}_{st} & = e^{\\frac{i\\pi}{2}(\\Delta_2+\\Delta_4-\\Delta_1-\\Delta_3)} x^{-2\\Delta_1}\n \\alpha^{\\frac32\\Delta_1-\\frac12\\Delta_2-\\frac12\\Delta_3-\\frac12\\Delta_4} \\times \\nonumber \\\\[2mm]\n & \\hspace*{2cm} \\times \\beta^{\\frac12\\Delta_1+\\frac12\\Delta_2-\\frac32\\Delta_3+\\frac12\\Delta_4}\n e^{r_1(\\kappa_s-\\kappa_t) +r_2(\\kappa_s-\\frac12\\theta_3\\bar\\theta_3+\n \\frac{\\theta_1\\bar\\theta_1}{2x})-r_4\\kappa_t},\n\\end{align}\nwhere $\\alpha$ and $\\beta$ denote the following superspace elements\n\\begin{equation}\\label{alpha-beta}\n \\alpha = x + \\frac12\\theta_1\\bar\\theta_1,\\ \\beta = 1 - \\frac12\\theta_3 \\bar\\theta_3\\ .\n\\end{equation}\nIn order to compute the crossing factor $M_{st}$ we are now instructed to find the map \\eqref{eq:vdef}\nin both $s-$ and $t$-channel. The general construction of $v$ is easy to implement since all representations\nare 1-dimensional. One finds\n\\begin{align}\n v = (1,q\\bar q,q\\bar s,s\\bar q,s\\bar s,qs\\bar q\\bar s)\\ .\n\\end{align}\nOnce we insert the expressions \\eqref{eq:kls}-\\eqref{eq:sqt} for Cartan coordinates in the two channels\nwe obtain\n\\begin{align} \\label{eq:vs-1d}\n & v_s = \\Big(1,-\\frac{(\\bar\\theta_1 - \\bar\\theta_3)(\\theta_1-x\\theta_3)}{x^{3\/2}\\sqrt{1-x}},\n \\frac{(\\theta_1 - x\\theta_3)\\bar\\theta_3+\\frac14\\Omega}{x^{3\/2}},\\frac{(\\bar\\theta_1-\\bar\\theta_3)\n \\theta_1+\\frac14\\Omega}{x^{3\/2}},\\frac{- \\theta_1\\bar\\theta_3\\sqrt{1-x}}{x^{3\/2}}, \\frac{\\Omega}{x^2} \\Big)\\ ,\\\\[2mm]\n & v_t = \\Big(1, i\\frac{(\\theta_1-\\theta_3)(\\bar\\theta_1-x\\bar\\theta_3)}{\\sqrt{1-x}},\n \\frac{x\\bar\\theta_3(\\theta_1 - \\theta_3)+\\frac14\\Omega}{\\sqrt{x}}, \\frac{\\theta_1 (\\bar\\theta_1 - x\\bar\\theta_3)+\n \\frac14\\Omega}{\\sqrt{x}}, i\\theta_1\\bar\\theta_3\\sqrt{1-x}\\ , \\Omega\\Big),\n \\label{eq:vt-1d}\n\\end{align}\nwhere $\\Omega = \\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3$. 
Now we have all the elements that are\nneeded to compute the crossing factor $M_{st}$ which we defined in eq.\\ \\eqref{eq:crossingmatrix} as\n\\begin{equation} \\label{eq:crossingmatrix-1d}\nM_{st} = \\hat v_t^T \\sqrt{\\frac{\\sinh u_t}{\\sinh u_s}}\n\\mathcal{M}_{st} v_s \\ ,\n\\end{equation}\nwhere $\\sinh u$ is a special instance of the function \\(\\omega(u)\\) that we introduced in eq.\\ \\eqref{eq:deffs}.\nAll factors that enter our expression for $\\mathcal{M}_{st}$ belong to the algebra $\\mathbb{C}[x,x^{-1}]\n\\otimes\\mathcal{A}$ where $\\mathcal{A}$ is the 6-dimensional algebra that is spanned by the elements\n\\begin{equation}\n e_1 = 1\\ ,\\ e_2 = \\theta_1 \\bar\\theta_1\\ ,\\ e_3 = \\theta_1 \\bar\\theta_3\\ ,\\ e_4 = \\theta_3 \\bar\\theta_1\\ ,\\\n e_5 = \\theta_3 \\bar\\theta_3\\ ,\\ e_6=\\Omega \\ .\n\\end{equation}\nIf we represent the $e_i$ by the canonical (column) vectors, the row vectors $v_{s\/t}$ become $6\\times 6$\nmatrices whose entries are functions of $x$. Similarly we can also turn the factor $\\sqrt{\\frac{\\sinh u_t}\n{\\sinh u_s}}\\mathcal{M}_{st}$ into a $6\\times 6$ matrix if we replace the elements $e_i$ by their matrix\nrepresentation in the left regular representation of $\\mathcal{A}$. Multiplying all these matrices, the\nfinal result is a $6 \\times 6$ matrix of functions in $x$ which is given by\n\\begin{equation} \\label{eq:crossingmatrix-1dvs2}\nM_{st} = v_t^{-1} \\sqrt{\\frac{\\sinh u_t}{\\sinh u_s}}\n\\mathcal{M}_{st} v_s \\ .\n\\end{equation}\nHaving computed the crossing factor between the $s$- and $t$-channel, there is only one final step left,\nnamely to relate the $s$- and $t$-channel cross ratios. Since the arguments of the functions $f$ in\nthe two channels are related by a change of variables that involves Grassmann coordinates, we need\nto perform a fermionic Taylor expansion in order to write the crossing equation in terms of functions\nof the bosonic cross ratio $x$ only, e.g. in the $t$-channel this expansion of \\(f_t(\\cosh^2\\frac{u_t}{2})\\)\ntakes the following form\n\\begin{equation}\n f_t = \\left(1 + x\\Big(\\frac12\\theta_3\\bar\\theta_3 + \\frac{\\theta_1\\bar\\theta_1}{2x} - \\theta_1\\bar\\theta_3\n + \\frac{\\theta_1\\bar\\theta_1\\theta_3\\bar\\theta_3}{4x}\\Big)\\partial + \\frac14 x\\Omega\\partial^2\\right) f_t(x)\\ .\n\\end{equation}\nUpon substitution, the crossing factor is a $6\\times6$ matrix of second order differential operators\nin $x$. This concludes our construction of the crossing symmetry equations for long multiplets of\n$\\mathcal{N}=2$ superconformal field theories in $d=1$ dimension.\n\n\\section{Conclusions and Outlook}\n\n In this work we have laid out a systematic theory that allows us to decompose four-point functions\nof local operators into superconformal blocks. It applies to all superconformal field theories\nin which the superconformal symmetry is of type I, i.e.\\ for which the R-symmetry contains a\n$U(1)$ subgroup. This is the case for all superconformal field theories in $d=4$ dimensions and\na few other cases, in particular in $d=1,2$ and also 3-dimensional $\\mathcal{N}=2$ theories.\nIn a first step we lifted the four point correlation function of arbitrary (long) operators to\na function on the conformal supergroup, see eq.\\ \\eqref{magic-formula}. A crucial ingredient in\nthis auxiliary step was to assign a special family of supergroup elements $g(x_i)$ to the\nsuperspace insertion points $x_i$ of the four fields. 
Let us stress that this first step is\nstill entirely general in that it applies to all superconformal algebras and not just those of type\nI. The specialization became necessary for the second step in which we introduced a special\nsupersymmetric version of the Cartan or KAK coordinates on the supergroup. As we had shown in\nour previous work \\cite{Buric:2019rms}, these coordinates are chosen to bring the Casimir\nequations into a remarkably simple form that allows us to construct all superblocks as finite\nlinear combinations of spinning bosonic blocks. The main purpose of the present work was to\ndetermine the associated tensor factors that map functions of two cross ratios back to\nthe original correlation function $G_4(x_i)$ on superspace. These tensor factors consist\nof two pieces, a map $\\Theta(x_i)$ on the space of superpolarizations, see eq.\\\n\\eqref{eq:defOmegas}, and a map $v(x_i)$ from the space of superpolarizations to the\nspace of tensor structures that was defined in eq.\\ \\eqref{eq:vdef}. The full evaluation of\nthese two factors required performing the Cartan decomposition of $g(x_i)$ explicitly. This\nis in principle straightforward, but can be a bit cumbersome, in particular for higher\ndimensions $d>2$. We have illustrated the explicit computation at the example of the\n$\\mathcal{N}=2$ superconformal algebra in $d=1$ dimension in the last subsection. Higher\ndimensional examples will be treated in forthcoming work.\n\nFor some applications and in particular in order to write down crossing symmetry equations, the\ntensor factors are actually not that important. What is needed, in addition to the conformal\nblocks, of course, is only the ratio of tensor factors between different channels. This\nquantity, which we dubbed the \\textit{crossing factor}, is a superconformal invariant and hence it\ncan be computed in any (super)conformal frame. Here we computed it for the $\\mathcal{N}=2$ superconformal\nalgebra in $d=1$. Along with our previous results on conformal blocks for this symmetry algebra, see\n\\cite{Buric:2019rms}, this allows us to write down crossing symmetry constraints for long multiplets,\nrecovering results from \\cite{Cornagliotto:2017dup}. In the latter paper it was shown that the\nnumerical (super-)conformal bootstrap involving long multiplets is significantly more constraining\nthan the bootstrap with the short or the superprimary components of long multiplets. Our new\nderivation of these constraints, however, is now entirely algorithmic and it can be extended\nwithout any significant additional difficulty to higher dimensional superconformal algebras of\ntype I. Let us also stress once again that the computation of the crossing factor in higher\ndimensional theories is significantly simpler than the computation of tensor factors. We\nhave illustrated this at the example of bosonic conformal algebras where the computation of the\ncrossing factor was reduced to computations in the subgroup $\\textit{SO}(1,3)$ of the $d$-dimensional\nconformal group $\\textit{SO}(1,d+1)$.\n\nOur focus here was on developing the general theory. Concrete applications in particular to\n4-dimensional superconformal theories will be addressed in forthcoming work. In particular\nwe will spell out the crossing symmetry constraint between two channels of a four-point\nfunction involving two half-BPS and two long operators in a 4-dimensional $\\mathcal{N}=1$\nsuperconformal theory. This requires combining all the elements of our approach. 
On the\none hand we apply the constructions and results of \\cite{Buric:2019rms} to spell out the\nCasimir equations in Calogero-Sutherland gauge and we use them to construct analytic expressions\nfor the conformal blocks as finite sums of spinning bosonic blocks. When restricted to the\nsuperprimary fields at the bottom of the long multiplets, our new blocks coincide with those\nconstructed in \\cite{Li:2017ddj}. On the other hand, we evaluate our formula\n\\eqref{eq:crossingmatrix} for the crossing factor in the example of $\\mathfrak{sl}(1|4)$.\nCombining these two types of input we obtain crossing equations that can be exploited with\nexisting numerical techniques. Since the superblocks are finite linear combinations of\nspinning bosonic blocks with coefficients whose analytical form is known, the evaluation\nof the superblocks only requires the numerical evaluation of 4-dimensional spinning\nbosonic blocks which has been developed in the past, see in particular \\cite{Karateev:2019pvw}.\nGiven the experience with the long multiplet bootstrap in $d=2$ dimensions, see\n\\cite{Cornagliotto:2017dup}, we expect that numerical studies of the extended crossing\nequations can improve on the constraints obtained from the restricted equations in\n\\cite{Li:2017ddj}. This may also provide new clues on the elusive minimal\n$\\mathcal{N}=1$ superconformal field theory.\n\nOf course it would also be interesting to spell out crossing symmetry constraints for\nother correlators in superconformal theories such as the multiplets of R-currents or\nthe stress tensor multiplets. In principle our approach applies to such quantities as\nwell, as long as the superconformal algebra is of type I. Of course, applications to\nvarious types of shorter multiplets containing conserved operators should provide\nsome simplifications which we did not address in this work. It would be interesting to \nstudy these in more detail with a view on possible extensions of recent results in \n\\cite{Manenti:2019jds}. Similarly, when summing over all operators in the crossing \nequations, one has to take shortening conditions for the blocks of short intermediate \nexchanges into account. Usually this is done on a case-by-case basis \n\\cite{Arutyunov:2002fh,Doobary:2015gia,Aprile:2017bgs,Sen:2018del}. It would be \ntempting to adopt the Calogero-Sutherland approach for a systematic analysis, at \nleast in \\(d=4\\). Let us also mention that the situation becomes even more \ncomplicated in the case of non-unitary theories \\cite{Yamazaki:2019yfd}. \n\nAnother interesting direction concerns correlation functions involving non-local\noperators such as boundaries, interfaces and (line, surface, $\\dots$) defects.\nBlock expansions for a large class of defect two-point functions are known, see e.g.\n\\cite{Liendo:2012hy,Billo:2016cpy,Lauria:2017wav,Lauria:2018klo}. A Calogero-Sutherland\ntheory of such blocks was developed in \\cite{Isachenkov:2018pef}. It would be\ninteresting to supplement this by a theory of tensor structures and to extend\nboth ingredients, blocks and tensor structures, to the superconformal algebras.\nWe will return to this issue in future work. An example of physically relevant\n1-dimensional defects is given by superconformal light-ray operators which model\nhigh-energy scattering in supersymmetric gauge theories. Their two- and\nthree-point correlation functions in the BFKL limit were already calculated in\n\\cite{Balitsky:2013npa,Balitsky:2015tca,Balitsky:2015oux}. 
These results may\nbe considered as a first step in the realisation of the bootstrap programme\nfor super lightray operators. Block expansions and bootstrap equations for\nsupersymmetric defects have also been studied and applied e.g.\\ in\n\\cite{Liendo:2016ymz,Liendo:2018ukf,Bianchi:2018zpb,Gimenez-Grau:2019hez,\nBianchi:2019sxz}.\n\nThe restriction to type I superalgebras is certainly a limiting one that we would\nlike to overcome in view of possible applications of the bootstrap e.g. to the\n6-dimensional $(2,0)$ theory \\cite{Beem:2015aoa} or to many relevant examples of\nsuperconformal field theories in $d=3$, see \\cite{Abl:2019jhh,Alday:2020tgi,Rong:2018okz,\nAtanasov:2018kqw,Agmon:2019imm} for some recent work and further references. With the\nexception of the $\\mathcal{N}=2$ superconformal algebra in $d=3$, none of the\nsuperalgebras in these examples is of type I. While it is possible to treat some\nspecial cases with methods similar to those described here, in particular in low\ndimensions, it is not clear to us whether our approach admits a systematic\nextension. This remains an interesting challenge for future research.\n\\bigskip\n\n\\noindent\n{\\bf Acknowledgements:} We thank James Drummond, Aleix Gimenez-Grau, Paul Heslop, Mikhail Isachenkov,\nMadalena Lemos, Pedro Liendo, Junchen Rong, and Philine van Vliet for comments and fruitful discussions. The work of ES was supported\nby ERC grant 648630 IQFT. VS and IB acknowledge support by\nthe Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's\nExcellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306.\n\n\\section{Introduction}\n\nDeep convolutional neural networks (CNNs) are now arguably the most popular computer vision algorithms. Models such as VGG \\cite{vgg} and ResNet \\cite{he2016deep} are widely used. However, these models contain up to hundreds of millions of parameters, resulting in a high memory footprint, long inference time and even longer training time. \n\nThe memory footprint and inference time of deep CNNs directly translate to application size and latency in production. Popular techniques based on model sparsification are able to deliver orders of magnitude reduction in the number of parameters in the network~\\cite{han2015deep}. Together with emerging efficient sparse convolution kernel implementations, deep CNNs can now be realistically used in production after training \\cite{gray2017gpu, park2016faster,chen2018escort}. \n\nHowever, the training of these deep CNNs is still a lengthy and expensive process. The hundreds of millions of parameters in the model must all be iteratively updated hundreds of thousands of times in a typical training process based on back-propagation. Recent research has attempted to address the training time issue by demonstrating effective training on large scale computing clusters consisting of thousands of GPUs or high-end CPUs \\cite{you2018imagenet,akiba2017extremely,jia2018highly}. However, these computing clusters are still extremely expensive and labor-intensive to set up or maintain, even if the actual training process is reduced to minutes. \n\nAn alternative to using large computing clusters is to accelerate the computations of the gradients themselves. One option is to introduce highly optimized software \\cite{chetlur2014cudnn} or new hardware \\cite{markidis2018nvidia, jouppi2017datacenter}. 
The training can also be performed in lower precision, which can lead to massive speedups with appropriate hardware support \\cite{micikevicius2017mixed}. Another less pursued option, complementary to the previous two, is to approximate the actual gradient computation themselves \\cite{sun2017meprop, sun2018training, wei2017minimal,adelman2018faster}. Other recent works have also suggested that the exact gradient might not be necessary for efficient training of deep neural networks. Studies have shown that only the sign of the gradient is necessary for efficient back propagation \\cite{xiao2018biologically,wen2017terngrad}. Surprisingly, even random gradients can be used to efficiently train neural networks \\cite{lillicrap2016random,nokland2016direct}. However, these findings are mostly limited to small fully connected networks on smaller datasets. The approximation algorithms proposed also cannot directly translate into real wall-clock speedups in training time due to lack of efficient GPU implementation.\n\nIn this work, we hypothesize that we can extend gradient approximation methods to deep neural networks to speed up gradient computations in the training process. We hypothesize that we can apply these approximations to only a subset of the layers and maintain the validation accuracy of the trained network. We validate our hypotheses on three deep CNNs (2-layer CNN \\cite{krizhevsky2009learning}, ResNet-20 \\cite{he2016deep} VGG-19 \\cite{vgg}) on CIFAR-10. Our methods are fully compatible with classic deep CNN architectures and do not rely on explicit sparsity information that must be input to the network, like approaches such as SBnet and Sub-manifold networks \\cite{ren2018sbnet,graham2017submanifold}. \n\nWe summarize our contributions as follows: \n\\begin{itemize}\n \\item We present three gradient approximation methods for training deep CNNs, along with an efficient GPU implementations for one of them. \n \\item We explore the application of these methods to deep CNNs and show that they allow for training convergence with minimal validation accuracy loss.\n \\item We describe the concept of approximation schedules, a way to reason about applying different approximation methods across different layers and training batches.\n\\end{itemize}\n\n\\section{Approximation Methods}\n\nIn a forward-backward pass of a deep CNN during training, a convolutional layer requires three convolution operations: one for forward propagation and two for backward propagation, as demonstrated in Figure \\ref{fig:back}. We approximate the convolution operation which calculates the gradients of the filter values, which constitutes roughly a third of the computational time. We aim to apply the approximation a quarter of the time across layers\/batches. This leads to a theoretical maximum speedup of around 8 percent. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.6\\linewidth]{forwardprop.png}\n\\includegraphics[width=0.8\\linewidth]{backprop.png}\n\\end{center}\n \\caption{Forward and backward propagation through a convolutional layer during training. Asterisks indicate convolution operations and the operation in the red box is the one we approximate.}\n\\label{fig:back}\n\\label{fig:onecol}\n\\end{figure}\n\n\\subsection{Zero Gradient}\nThe first method passes back zero as the weight gradient of a chosen layer for a chosen batch. If done for every training batch, it effectively freezes the filter weights. 
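As an illustration, such a layer could be realized as a convolution whose backward pass returns zeros for the filter gradient while keeping the exact gradient for the input activations. The following minimal sketch is only illustrative and is not the code used in our experiments; the helper name and the use of \\texttt{tf.custom\\_gradient} are our own choices here.\n\\begin{verbatim}\nimport tensorflow as tf\n\ndef conv2d_zero_filter_grad(x, w, strides=(1, 1, 1, 1), padding='SAME'):\n    # Forward pass: regular convolution. Backward pass: the filter gradient\n    # is replaced by zeros (zero gradient method), while the gradient with\n    # respect to the input activations is still computed exactly.\n    @tf.custom_gradient\n    def _conv(x, w):\n        y = tf.nn.conv2d(x, w, strides=list(strides), padding=padding)\n        def grad(dy):\n            dx = tf.nn.conv2d_backprop_input(tf.shape(x), w, dy,\n                                             strides=list(strides),\n                                             padding=padding)\n            dw = tf.zeros_like(w)  # zero-gradient approximation of df\n            return dx, dw\n        return y, grad\n    return _conv(x, w)\n\\end{verbatim}\nSwitching such a layer between the approximation and the exact gradient on a per-batch basis can then be controlled by the approximation schedules introduced below.\n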
\n\n\\subsection{Random Gradient}\nThe second method passes back random numbers sampled from a normal distribution with mean 0 and standard deviation $\\frac{1}{128}$ (inverse of batch size) as the weight gradient of a chosen layer for a chosen batch. Different values in the weight gradient are chosen independently. Importantly, this is different from the random feedback alignment method discussed in \\cite{lillicrap2016random} and \\cite{nokland2016direct} as we regenerate the random numbers every training batch. We implement this using tf.py\\_func, where np.random.normal is used to generate the random values. This approach is extremely inefficient, though surprisingly faster than a naive cuRAND implementation in a custom tensorflow operation for most input cases. We are working on a more efficient implementation. \n\n\\subsection{Approximated Gradient}\nThe third method we employ is based on the top-k selection algorithms popular in literature. \\cite{wei2017minimal} In the gradient computation for a filter in a convolutional layer, only the largest-magnitude gradient value is retained for each output channel and each batch element. They are scaled according to the sum of the gradients in their respective output channels so that the gradient estimate is unbiased, similar to the approach employed in \\cite{wangni2018gradient}. All other gradients are set to zero. This results in a sparsity ratio of $1-\\frac{1}{HW}$, where $H$ and $W$ are the height and width of the output hidden layer. The filter gradient is then calculated from this sparse version of the output gradient tensor with the saved input activations from the forward pass. The algorithm can be trivially modified to admit the top-k magnitude gradient values with an adjustment of the scaling parameter, a direction of future research. Similar to the random gradient method, we find that we need to scale our approximated gradient by a factor proportional to the batch size for effective training. In the experiments here, we scale them by $\\frac{1}{128}$.\n\n\\subsection{Efficient GPU Implementation}\n\nA major contribution of this work is an implementation of the approximated gradient method in CUDA. This is critical to achieve actual wall-clock training speedups. A naive Tensorflow implementation using tf.image.extract\\_glimpse does not use the GPU and results in significantly slower training time. \n\nEfficient GPU implementations for dense convolutions frequently use matrix lowering or transforms such as FFT or Winograd \\cite{chetlur2014cudnn,liu2018efficient}. However, the overheads of these transformations might not be worth the benefit in a sparse setting. Recent approaches have sought to perform the sparse convolution directly on CPU or GPU \\cite{park2016faster,chen2018escort}. Here we also opt for the latter approach. We interpret the sparse convolution in the calculation of the filter gradient as a patch extraction procedure, as demonstrated in Figure \\ref{fig:algo}.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{algo.png}\n\\end{center}\n \\caption{The approximation algorithm illustrated for an example with two filters and three input elements. 
For each filter, we extract a patch from each batch element's input activations and accumulate the patches.}\n\\label{fig:algo}\n\\end{figure}\n\nFormally, let's assume we have an input tensor $I$ and output gradient tensor $dO$ in $NCHW$ format, where $N$ is the batch dimension, $C$ the channel dimension and $H$, $W$ the height and width of the hidden layer. The filter tensor, $f$, has dimension $KKC_iC_o$, where $K$ is the filter size, $C_i$ is the number of channels in $I$ and $C_o$ is the number of channels in $dO$. We will use the symbol $\\ast$ to denote the convolution operation. In order to compute $df$, we have to convolve $I$ with $dO$. If we zero out all elements in $dO$ except one for each output channel dimension, then convolution becomes a collection of $C_o$ patches of shape $KKC_i$ from $I$, as specified below in Algorithm 1. \n\n\\begin{algorithm}\n\\caption{Max Gradient Approximation}\\label{euclid}\n\\begin{algorithmic}[1]\n\n\\State $df[:,:,:,:] = 0$\n\\For {$c = 1:C_o$}\n\\For {$n = 1:N$}\n\\State $row, col\\gets \\arg\\max abs(dO[n,c,:,:])$ \n\\State $sum \\gets \\sum dO[n,c,:,:]$ (sum is a scalar)\n\n\\State $df[:,c,:,:] += I[n,:,row:row+K,col:col+K] * sum$\n\\EndFor\n\\EndFor\n\n\\end{algorithmic}\n\\end{algorithm}\n\nOur kernel implementation expects input activations in $NHWC_i$ format and output gradients in $NC_oHW$ format. It produces the output gradient in $C_oKKC_i$ format. In $NHWC$ format, GPU global memory accesses from the patch extractions can be efficiently coalesced across the channel dimension, which is typically a multiple of 8. Each thread block is assigned to process several batch elements for a fixed output channel. Each thread block first computes the indices and values of the nonzero weight values from the output gradients. Then, they extract the corresponding patches from the input activations and accumulate them to the result. \n\nWe benchmark the performance of our code against NVIDIA cuDNN v7.4.2 library apis. Approaches such as cuSPARSE have been demonstrated to be less effective in a sparse convolution setting and are not pursued here \\cite{chen2018escort}. All timing metrics are obtained on a workstation with a Titan-Xp GPU and 8 Intel Xeon CPUs at 3.60GHz.\n\nAll training experiments are conducted in $NCHW$ format, the preferred data layout of cuDNN. As a result, we incur a data transpose overhead of the input activations from $NCHW$ to $NHWC$. In addition, we also incur a slight data transpose overhead of the filter gradient from $C_oKKC_i$ to $KKC_iC_o$. \n\n\\subsection{Approximation Schedules}\n\nHere, we introduce the concept of approximation schedules. This concept allows us to specify when particular approximations are applied in the training process and how to combine different approximations. Existing approximation methods such as DropBack \\cite{golub2018dropback} and PruneTrain \\cite{lym2019prunetrain} can be applied to a specific layer, but are applied over all training batches since their application at a particular training batch changes the structure of the network, thus affecting all subsequent batches. The three approximation methods we have mentioned approximate the gradient computation of a weight filter for a single training batch. They can be thus applied to a specific layer for a specific training batch. We refer to the term \"approximation schedule\" as a specification of what approximation method to apply for each layer and each training batch that is consistent with the above rules. 
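In code, a schedule can be as simple as a lookup from the layer index and the global training step to the method used for that layer's filter gradient. The following Python sketch is purely illustrative; the layer indices and method names are placeholders rather than the configuration used in our experiments.\n\\begin{verbatim}\n# Map: layer index -> rule deciding the method for a given training step.\n# Layers not listed always use the exact (full) gradient.\nSCHEDULE = {\n    1: lambda step: 'approx' if step % 2 == 0 else 'full',  # every other batch\n    5: lambda step: 'zero',                                  # every batch\n}\n\ndef gradient_method(layer_idx, step):\n    # Returns 'full', 'zero', 'random' or 'approx' for this layer and batch.\n    rule = SCHEDULE.get(layer_idx, lambda s: 'full')\n    return rule(step)\n\\end{verbatim}\nA more aggressive schedule simply lists more layers or returns an approximation for a larger fraction of the training steps.\n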
An example of an approximation schedule is shown in Figure \\ref{fig:schedule}. More aggressive approximation schedules might lead to a higher loss in accuracy, but would also result in higher speedups. Here, we demonstrate that simple heuristics to pick approximation schedules can lead to good results on common networks such as ResNet-20 and VGG-19. While the efficacy of simple heuristics is crucial for the applicability of the proposed approximation methods in practice, determining the optimal approximation schedule for different neural network architectures is an interesting direction of future research.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{schedule.png}\n\\end{center}\n \\caption{Example approximation schedule for a 5-layer network over 4 training batches. Full Grad denotes regular gradient computation without approximation.}\n\\label{fig:schedule}\n\\end{figure}\n\n\\section{Evaluation}\nWe test our approach on three common neural network architectures (2-layer CNN \\cite{krizhevsky2009learning}, VGG-19 \\cite{vgg} and ResNet-20 \\cite{he2016deep}) on the CIFAR-10 dataset. The local response normalization in the 2-layer CNN is replaced by the more modern batch normalization method \\cite{ioffe2015batch}. For all three networks, we aim to use the approximation methods 25 percent of the time. In this work, we test all three approximation methods separately and do not combine. On the 2-layer CNN, we apply the selected approximation method to the second convolutional layer every other training batch. On VGG-19 and ResNet-20, we apply the selected approximation method to every fourth convolutional layer every training batch, starting from the second convolutional layer. For example, the three approximation schedules for the 2-layer CNN are shown in Figure \\ref{fig:2-layer-schedule}. We start from the second layer because recent work has shown that approximating the first convolutional layer is difficult \\cite{adelman2018faster}. This results in four approximated layers for VGG-19 and five approximated layers for ResNet-20. For the ResNet-20 model, we train a baseline ResNet-14 model as well. Training a smaller model is typically done in practice when training time is of concern. Ideally, our approximation methods to train the larger ResNet-20 model should result in higher validation accuracy than the ResNet-14 model. For ResNet-20, we also experiment with other approximation schedules to show that our approximation methods are robust to schedule choice. \n\nWe train the networks until the validation accuracy stabilizes. It took around 500 epochs for the 2-layer CNN, 250 epochs for the ResNet-20 model, and 200 epochs for the VGG-19 model. We use an exponentially decaying learning rate with the Adam optimizer for all three models. We apply typical data augmentation techniques such as random cropping and flipping to each minibatch. All training was performed within Tensorflow 1.13 \\cite{abadi2016tensorflow}. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.7\\linewidth]{2-layer-schedules.png}\n\\end{center}\n \\caption{The three approximation schedules studied for the 2-layer network using a) zero gradient method b) random gradient method c) approximated gradient method.}\n\\label{fig:2-layer-schedule}\n\\end{figure}\n\n\\subsection{Performance Comparisons}\n\nWe compare the performance of our GPU kernel for the approximated gradient method with the full gradient computation for the weight filter as implemented in cuDNN v7.4.2. 
cuDNN offers state-of-the-art performance in dense gradient computation and is used in almost every deep learning library. For each input case, cuDNN tests several hand-assembled kernels and picks the fastest one. These kernels fully utilize the high floating-point throughput of the GPU to perform the dense gradient computations. In contrast, sparse approximations of the gradient usually have a lower arithmetic-to-memory ratio and do not admit equally efficient kernel implementations on GPUs. It is often necessary to impose structure or a high sparsity ratio to achieve an actual performance gain \\cite{zhu2018structurally}. Here we demonstrate that our gradient approximation method does yield an efficient GPU implementation that can lead to actual speedups compared to cuDNN. \n\nIn Table \\ref{table:perf}, we present timing comparisons for a few select input cases encountered in the network architectures used in this work. We aggregate the two data transpose overheads of the input activations and the filter gradients. (In almost every case, the data transpose overhead of the input activations dominates.) We make three observations. \n\nFirst, in most cases, the gradient approximation, including data transposition, is at least three times as fast as the cuDNN baseline. Second, we observe that the cuDNN timing scales with the number of input channels times the height and width of the hidden layer, whereas our approximation kernel timing scales with the number of input channels alone. This is expected from the nature of the computations involved: the performance bottleneck of our kernel is the memory-intensive patch extractions, the sizes of which scale with the number of input channels times the filter size. Third, we observe that in many cases, the data transposition overhead is over fifty percent of the kernel time, suggesting that our implementation can be further improved by fusing the data transpose into the kernel, as in SBNet \\cite{ren2018sbnet}. This is left for future work. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{performance.png}\n\\end{center}\n \\caption{Performance comparisons. All timing statistics in microseconds. The approx. total column is the sum of the CUDA kernel time and the transpose overhead.}\n\\label{table:perf}\n\\end{table}\n\n\\subsection{Training Convergence}\n\nWe present convergence results for the training of our three neural networks under the chosen approximation schedules using two metrics: training loss and validation accuracy. In Figure \\ref{fig:2acc}, we see that for the 2-layer CNN, all approximation methods result in training loss and validation accuracy curves similar to the ones obtained by full gradient computation. We can even see that the random gradient method surpasses full gradient computation in terms of validation accuracy. The zero gradient method is very similar to full gradient computation, while the approximated gradient method does slightly worse. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{2-layer-results2.png}\n\\end{center}\n \\caption{a) Training loss of the 2-layer CNN with different approximation methods. b) Validation accuracy of the 2-layer CNN with different approximation methods.}\n\\label{fig:2acc}\n\\end{figure}\n\nThe approximation methods remain robust on larger networks, such as ResNet-20, as shown in Figure \\ref{fig:racc}. 
In this case, we can see from both the loss curves and the validation accuracy that our approximated gradient method does slightly worse than full gradient computation, but better than both the random gradient and zero gradient methods. Curiously, the random gradient method maintains a high training loss throughout training, but is still able to achieve good validation accuracy. \n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{resnet_results_2.png}\n\\end{center}\n \\caption{a) Training loss of ResNet-20 with different approximation methods. b) Validation accuracy of ResNet-20 with different approximation methods. The loss curve of the random gradient method stagnates but the validation accuracy is competitive.}\n\\label{fig:racc}\n\\end{figure}\n\nFor VGG-19, shown in Figure \\ref{fig:vacc}, we see that full gradient computation actually lags behind the approximated methods in reaching the target validation accuracy. In this case, all three approximation methods perform very well. However, full gradient computation eventually overtakes all approximation methods in terms of validation accuracy. This suggests that a fruitful approach to explore, at least for networks similar to VGG, would be to use approximations early on in training and switch to full gradient computation once a validation accuracy plateau has been reached.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{vgg_results_2.png}\n\\end{center}\n \\caption{a) Training loss of the VGG-19 model with different approximation methods. b) Validation accuracy of the VGG-19 model with different approximation methods.}\n\\label{fig:vacc}\n\\end{figure}\n\n\\subsection{Speedup-Accuracy Tradeoffs}\n\nHere, we present the wall-clock speedups achieved for each network and approximation method. We compare the speedups against the validation accuracy loss, measured from the best validation accuracy achieved during training. Validation accuracy was calculated every ten epochs. As mentioned above, the random gradient implementation is quite inefficient, and improving it is left for future work. The speedup takes into account the overhead of defining a custom operation in TensorFlow, as well as the significant overhead of switching the gradient computation based on the global training step. For the 2-layer CNN, we are unable to achieve a wall-clock speedup for any of the approximation methods, even the zero gradient one, because of this overhead (Table \\ref{tab:2acc}). However, all approximation methods incur little validation accuracy loss. The random gradient method even outperforms full gradient computation by 0.8\\%. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{tradeoff_2layer.png}\n\\end{center}\n \\caption{Training speedup and validation accuracy loss for the approximation methods on the 2-layer CNN. Negative speedup indicates a slowdown.}\n\\label{tab:2acc}\n\\end{table}\n\nFor ResNet-20, the approximation schedule we choose does not involve switching gradient computations. We thus avoid the switching overhead and can achieve speedups for both the zero gradient method and the approximated gradient method. As shown in Table \\ref{tab:racc}, the zero gradient method achieves roughly a third of the speedup obtained by training the smaller baseline ResNet-14 model. The approximated gradient method also achieves a 3.5\\% wall-clock speedup, and is the only method to suffer less accuracy loss than simply using the smaller ResNet-14. 
In the following subsection, we demonstrate that with other approximation schedules, the approximated gradient method can achieve as little as 0.1\\% accuracy loss. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{tradeoff_resnet.png}\n\\end{center}\n \\caption{Training speedup and validation accuracy loss for the approximation methods on ResNet-20. Negative speedup indicates a slowdown.}\n\\label{tab:racc}\n\\end{table}\n\nFor VGG-19, despite converging more quickly, the approximation methods all reach worse validation accuracy than the baseline method (Table \\ref{tab:vacc}). The best approximation method appears to be the random gradient method, though it is extremely slow due to our inefficient implementation in TensorFlow. The other two methods also achieve high validation accuracies, with the approximated gradient method doing slightly better than the zero gradient method. Both methods are able to achieve speedups in training. \n\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{tradeoff_vgg.png}\n\\end{center}\n \\caption{Training speedup and validation accuracy loss for the approximation methods on VGG-19. Negative speedup indicates a slowdown.}\n\\label{tab:vacc}\n\\end{table}\n\n\n\\subsection{Robustness to Approximation Schedule}\n\nHere, we explore two new approximation schedules for ResNet-20, keeping the total proportion of time we apply the approximation at 25 percent. We refer to the approximation schedule presented above as schedule 1. Schedule 2 applies the selected approximation method to every other layer for every other batch. Schedule 3 applies the selected approximation method to every layer for every fourth batch. We also present the baseline result of the ResNet-14 model. \n\nAs we can see from Figure \\ref{fig:robust} and Table \\ref{tab:robust-2}, under schedules 2 and 3, both the zero gradient and the approximated gradient methods perform well. In fact, for the approximated gradient and the zero gradient methods, the validation accuracy loss is smaller than under schedule 1. Indeed, under schedule 3, the approximated gradient method's best validation accuracy is within 0.1\\% of that of full gradient computation. The random gradient method's validation accuracy is now in line with its poor loss curve for these two approximation schedules. This suggests that the random gradient method does not work well for the ResNet-20 architecture.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{robust.png}\n\\end{center}\n \\caption{a) Training loss of ResNet-20 with different approximation methods for approximation schedule 2. b) Validation accuracy of ResNet-20 with different approximation methods for approximation schedule 2. c) Training loss of ResNet-20 with different approximation methods for approximation schedule 3. d) Validation accuracy of ResNet-20 with different approximation methods for approximation schedule 3.}\n\\label{fig:robust}\n\\end{figure}\n\\begin{table}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{robust_table.png}\n\\end{center}\n \\caption{Validation accuracy for different approximation schedules on ResNet-20. Schedule 1 is the same as presented above.}\n\\label{tab:robust-2}\n\\end{table}\n\n\\section{Discussion and Conclusion}\n\nWhile research on accelerating deep learning inference abounds, there is relatively limited work focused on accelerating the training process. 
Recent works such as PruneTrain prune the neural network during training, but suffer a rather serious loss in validation accuracy \\cite{lym2019prunetrain}. Approaches such as DropBack \\cite{golub2018dropback} and meProp \\cite{wei2017minimal,sun2017meprop} show that approximated gradients are sufficient to successfully train neural networks, but do not yet offer real wall-clock speedups. In this work, we show that we can train deep neural networks to good validation accuracy with very minimal gradient information on a subset of the layers, leading to wall-clock speedups for training. \n\nWe are surprised by the consistently strong performance of the zero gradient method. For ResNet-20, for two of the three approximation schedules tested, the validation accuracy loss is smaller than that incurred by the smaller baseline network. Its performance is also satisfactory on VGG-19 as well as the 2-layer CNN. It admits an extremely fast implementation that delivers consistent speedups. This points to a simple way to potentially boost training speed in deep neural networks, while maintaining their performance advantage over shallower alternatives. \n\nWe also demonstrate that random gradient methods can train deep neural networks to convergence, provided they are only applied to a subset of the layers. For the 2-layer CNN and VGG-19, this method leads to the least validation accuracy loss of all three approximation methods. However, its performance seriously lags behind the other methods on ResNet-20, suggesting that its performance is network-architecture-specific. Naive feedback alignment, where the random gradient signal is fixed before training starts, has been shown to be difficult to extend to deep convolutional architectures \\cite{han2019efficient,bartunov2018assessing}. We show here that if the random gradients are newly generated every batch and applied to a subset of layers, they can be used to train deep neural networks to convergence. Interestingly, generating new random gradients every batch effectively abolishes any kind of possible ``alignment'' in the network, calling for a new explanation of why the network converges. Evidently, this method holds the potential for an extremely efficient implementation, something we are currently working on. \n\nFinally, we present a gradient approximation method with an efficient GPU implementation. Our approximation method is consistent in terms of validation accuracy across different network architectures and approximation schedules. Although the wall-clock training speedup is not large, the validation accuracy loss is also small. We wish to re-emphasize here the small validation accuracy difference observed between the baseline ResNet-14 and ResNet-20, leading us to believe that novel training speed-up methods must incur minimal validation accuracy loss to be more practical than simply training a smaller network.\n\nIn conclusion, we show that we can ``fool'' deep neural networks into training properly while supplying them with only very minimal gradient information on select layers. The approximation methods are simple and robust, holding the promise of accelerating the lengthy training process for state-of-the-art deep CNNs. \n\n\n\\section{Future Work}\nBesides those already mentioned, there are several more interesting directions for future work. One direction is predicting the validation accuracy loss that a neural network would suffer under a particular approximation schedule. 
With such a predictor, we can optimize for the fastest approximation schedule while constraining the final validation accuracy loss before the training run. We can also examine the effects of combining different approximation methods and of integrating existing methods such as PruneTrain and DropBack \\cite{lym2019prunetrain, golub2018dropback}. Another direction is approximating the gradient of the hidden activations, as is done in meProp \\cite{sun2017meprop}. However, if we approximate the gradient of the hidden activations at a deeper layer of the network, the approximation error will propagate to the shallower layers. Due to this concern, we start with approximating the filter weight gradients, where the effect of errors is local. Finally, we are working on integrating this approach into a distributed training setting, where the approximation schedule becomes 3-dimensional (machine, layer, batch). This would be crucial for the approximation methods to work with larger-scale datasets such as ImageNet, thus potentially allowing for wall-clock speed-ups in large-scale training. \n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n