diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzegjm" "b/data_all_eng_slimpj/shuffled/split2/finalzzegjm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzegjm" @@ -0,0 +1,5 @@ +{"text":"\\section{Motivation}\n\nThe associated production of jets and vector bosons ($V+$\\hspace{.5mm}jets) in hadron\ncollisions represents an important test of QCD. In addition, $V+$\\hspace{.5mm}jets\\ is\na significant source of background events in many measurements and\nsearches both at the Tevatron and the LHC. The development of\nsimulation codes which produce accurate predictions for $V+$\\hspace{.5mm}jets\\\nproduction has been a very active field of research over the last few\nyears. The developments have followed two main paths: parton-level\nfixed-order predictions with NLO accuracy;\nand particle-level predictions from combining tree-level \\ensuremath{ 2 \\rightarrow N}\\\nmatrix elements with a parton shower algorithm.\nThese new models require validation against experimental measurements\nof the properties of $V+$\\hspace{.5mm}jets\\ production. The leptonic decay modes\noffer distinct experimental signals with low backgrounds, and during\nthe last two years a long list of $V+$\\hspace{.5mm}jets\\ measurements from the CDF\nand D\\O\\ experiments have been made public. All the measurements\npresented here are fully corrected for detector effects, thus offering\na reference against which existing and future simulation models can be\nvalidated and tuned. The measurements can be divided into those which\ntag heavy-flavour (HF) jets and those which are inclusive in jet\nflavour.\n\n\\section{$Z+$\\hspace{.5mm}jets measurements}\n\nCDF has presented measurements of the jet multiplicity in\n$Z+$\\hspace{.5mm}jets as well as the inclusive, differential \\ensuremath{ p_{T}^{\\mathrm{jet}} }\\\nspectra in event with at least $N=1,2$ jets~\\cite{cdfZjets}. The boson\nis selected via its decay into an pair of high-$E_T$ electrons whose\ninvariant mass is compatible with $M_Z$. Jets are defined using the\nRun~II mid-point algorithm and are required to satisfy $\\ensuremath{ p_{T}} > 30$~GeV\nand $|y|<2.1$. The correction for detector effects is deduced from a\nsimulated event sample passed through a simulation of the detector. In\nFig.~\\ref{fig:cdf_zjets} (left) the measured \\ensuremath{ p_{T}^{\\mathrm{jet}} }\\ spectra are\ncompared with parton-level NLO pQCD predictions from {\\sc mcfm}~\\cite{mcfm}\nwhich have been corrected for hadronization and the underlying\nevent. The NLO predictions are seen to agree with data within\nexperimental and systematic uncertainties over one order of magnitude\nin \\ensuremath{ p_{T}^{\\mathrm{jet}} }\\ and four orders of magnitude in cross section.\n\\begin{figure}\n\\begin{center}\n\\psfig{figure=Pt_jet12.eps,height=10cm}\n\\psfig{figure=fig1a.eps,height=9.7cm}\n\\end{center}\n\\caption{Inclusive \\ensuremath{ p_{T}^{\\mathrm{jet}} }\\ spectra in $Z+N$-jet events, $N=1,2$, with\n data compared to NLO pQCD (right). Data compared with NLO pQCD and\n various event generators predictions for $\\Delta\\phi(Z,$jet$)$ in\n $Z+1$-jet events (left).}\n\\label{fig:cdf_zjets}\n\\end{figure}\n\nD\\O\\ has presented measurements of the \\ensuremath{ p_{T}^{\\mathrm{jet}} }\\ spectra of the three\nleading jets in the $Z(\\rightarrow\\ensuremath{e^+ e^-})+$\\hspace{.5mm}jets channel,\nnormalized to the inclusive $Z(\\rightarrow\\ensuremath{e^+ e^-})$ cross\nsection~\\cite{d0ZeeJets}. 
The event selection is similar to the CDF analysis, with jets being reconstructed down to $20$~GeV. The measurements are compared both with fixed-order pQCD parton-level predictions from {\sc mcfm}\ and with the particle-level predictions of various commonly used event generators. The comparisons for the second jet are given in Fig.~\ref{fig:d0_zjets}. Both the LO and NLO pQCD predictions are consistent with data within experimental and theoretical uncertainties. As expected, the NLO prediction has significantly lower scale uncertainties than the LO prediction, corresponding to a higher predictive power. {\sc pythia}~\cite{pythia} using Tune A (``old'' $Q^2$-ordered parton shower) predicts less jet activity than seen in data, and the discrepancies increase with \ensuremath{ p_{T}^{\mathrm{jet}} }\ and jet multiplicity. The same tendency is seen for {\sc herwig}~\cite{herwig}. {\sc pythia}\ using Tune S0 (``new'' \ensuremath{ p_{T}}-ordered parton shower) gives good agreement for the leading \ensuremath{ p_{T}^{\mathrm{jet}} }\ spectrum, but no improvement over the old model for sub-leading jets. In contrast, both {\sc sherpa}~\cite{sherpa} and {\sc alpgen}+{\sc pythia}~\cite{alpgen} are found to predict the shapes of the \ensuremath{ p_{T}^{\mathrm{jet}} }\ spectra reasonably well for all three leading jets, with the latter generator giving somewhat better agreement for the leading jet. The normalizations are affected by significant scale uncertainties which increase with jet multiplicity. {\sc sherpa}\ ({\sc alpgen}+{\sc pythia}) predicts more (fewer) jets than observed in data, but for both codes the normalizations can be made to agree with data by adjusting the choices of factorization and renormalization scales.
\begin{figure}
\begin{center}
\psfig{figure=v2.8_jpt1.eps,height=8cm}
\end{center}
\caption{Data compared with NLO pQCD and various event-generator predictions for \ensuremath{ p_{T}}($2^{nd}$ jet) in $Z+2$-jet events.}
\label{fig:d0_zjets}
\end{figure}

Two D\O\ studies~\cite{d0ZmumuJetI,d0ZmumuJetII} present measurements of the \ensuremath{ p_{T}}\ and rapidity of the $Z$ and the leading jet, as well as various angular correlations between the two objects. The data are compared with NLO pQCD from {\sc mcfm}, {\sc pythia}\ using Tune A, {\sc sherpa}, {\sc alpgen}+{\sc pythia}\ using Tune A, and, for the angular correlation observables, {\sc alpgen}+{\sc herwig}. While fixed-order NLO calculations are found to give accurate predictions for \ensuremath{ p_{T}}\ and jet multiplicity observables (see above), they do not describe the spectrum of $\Delta\phi(Z,$jet$)$ (Fig.~\ref{fig:cdf_zjets} (right)) for values close to $\pi$, where multiple soft emissions are important, or below $\sim 2$, where the underlying event gives sizable contributions. Of the particle-level event generators, {\sc sherpa}\ is found to give the most accurate description of the angular correlations.

\section{$V+$\hspace{.5mm}HF-jet measurements}

Many searches for new particles, e.g.\ low-mass Higgs searches at the Tevatron, tag $b$-jets in order to enhance the signal-to-background ratio.
In such searches, accurate predictions for the associated production of a vector boson and heavy-flavour jets are of major importance for the sensitivity of the analysis to new physics.

Both CDF and D\O\ have presented measurements of a $W$ boson in association with a single $c$ quark using similar strategies~\cite{cdfWc,d0Wc}. This channel is sensitive to the $s$-quark content of the proton at large $Q^2$, and it is a background to top-quark measurements and searches for a low-mass Higgs particle at the Tevatron. The $W$ is selected via a high-\ensuremath{ p_{T}}\ lepton ($e$ or $\mu$) and large missing $E_T$. A soft muon from a semi-leptonic $c$-quark decay is used to tag $c$-jets. For signal events the two leptons tend to have opposite charge, whereas the backgrounds show no such charge correlation. CDF measures $(\sigma \times {\rm BR}) = 9.8 \pm 2.8$(stat)$^{+1.4}_{-1.6}$(sys) pb, which is in good agreement with the NLO pQCD prediction of $11^{+1.4}_{-3.0}$ pb. D\O\ presents the differential \ensuremath{ p_{T}^{\mathrm{jet}} }\ cross section for $W+c$ relative to $W+$jet and sees agreement with {\sc alpgen}+{\sc pythia}\ within uncertainties.

Based on a similar event selection, CDF measures the $W+b$-jet cross section~\cite{cdfWb}. The $W$ is selected via its decay into $e\nu$ or $\mu\nu$, and a secondary-vertex algorithm is used to define a $b$-quark enhanced sample. The $b$-quark content is extracted from the secondary-vertex mass distribution by fitting with mass templates for light-flavour, $c$ and $b$ quark samples. The cross section for $\ensuremath{ p_{T}^{b\mathrm{-jet}} } > 20$~GeV is measured to be $(\sigma \times {\rm BR}) = 2.78 \pm 0.27$(stat)$\pm 0.42$(sys) pb. The {\sc alpgen}\ prediction of the cross section is $0.78$ pb, which is a factor of $3$--$4$ below data, and work is ongoing to understand this discrepancy.

A very similar $b$-tagging and $b$-content extraction technique is used by CDF in an analysis~\cite{cdfZb} of $Z+b$-jet events in the $ee$ and $\mu\mu$ channels. Cross sections are measured relative to the inclusive $Z$ cross section and are presented differentially in $E_T^{b{\rm -jet}}$, $\eta^{b{\rm -jet}}$, \ensuremath{ p_{T}^{Z} }\ and jet multiplicity, both for $b$-jets and flavour-inclusive jets. The total relative cross section is measured to be $\sigma(Z+b$-jet$)/\sigma(Z) = (3.32 \pm 0.53$(stat)$\pm 0.42$(sys)$) \times 10^{-3}$. The NLO pQCD prediction is $2.3 \times 10^{-3}$ for $\mu_F^2 = \mu_R^2 = m_Z^2 + p_{T,Z}^2$ and $2.8 \times 10^{-3}$ for $\mu_F^2 = \mu_R^2 = \langle\ensuremath{ p_{T}^{\mathrm{jet}} }\rangle^2$, in good agreement with data within uncertainties. The prediction of {\sc alpgen}\ is $2.1 \times 10^{-3}$ and {\sc pythia}\ predicts $3.5 \times 10^{-3}$.
The large difference between {\sc alpgen}\ and {\sc pythia}\ has been traced back to the higher choice of scales used in {\sc alpgen}\ compared with {\sc pythia}.
\begin{figure}
\begin{center}
\psfig{figure=zb_ptjet_mcfm.eps,height=5cm,width=7.5cm}
\psfig{figure=zb_ptjet_gen.eps,height=5cm,width=7.5cm}
\caption{The \ensuremath{ p_{T}^{b\mathrm{-jet}} }\ spectrum measured in $Z+b$-jet production compared with {\sc mcfm}, {\sc pythia}\ and {\sc alpgen}.}
\label{fig:cdf_zbjet}
\end{center}
\end{figure}

\vspace{-.1cm}

\section{Conclusions}

In addition to offering an important test of QCD, $V+$\hspace{.5mm}jets\ production is a major source of background to many measurements and searches at hadron colliders. Several new codes for simulating the associated production of $Z/W$ and jets have become available over the last few years, and the validation and tuning of these tools are of great importance. A long list of $V+$\hspace{.5mm}jets\ measurements has become available from the CDF and D\O\ experiments during the last two years. Parton-level predictions from NLO pQCD are found to offer the highest predictive power for \ensuremath{ p_{T}^{\mathrm{jet}} }\ spectra, showing good agreement with data, both for flavour-inclusive and HF measurements. Generators matching tree-level matrix elements with parton showers are found to offer the most accurate particle-level predictions but have significant scale uncertainties. Angular correlations show sensitivity to multiple soft emissions and the underlying event and are therefore partially outside the scope of fixed-order pQCD calculations; event-generator predictions show varying agreement with data. In the heavy-flavour channels, both pQCD and event-generator predictions are found to be in agreement with data within uncertainties, with a possible exception being $W+b$-jet production. Since all presented measurements are fully corrected for detector effects, they can be directly used for testing and improving existing and future theory models.

\vspace{-.2cm}
\section*{References}

\section{Introduction}
\label{sec:Intro}

Sensors measuring analog responses of a general multidimensional process at discrete spatial locations are becoming both more capable and more affordable \citet{tsang1985theory, zhu20203d, akyildiz2002wireless, badon2016smart, bhowmick2020measurement, bhowmick2022spatiotemporal, adrian1991particle, sun2015carbon, chu1985applications, yang2017full, yang2016dynamic}. Concurrently, the evolving big-data storage facilities and computational capabilities can harness such high-dimensional data \citet{marx2013big, demchenko2013addressing, sun2020review} to learn more about the underlying physical laws. Such physical laws have been extensively studied in the past to put forward scientific theories having mathematical formulations. Most often such well-studied scientific theories are represented in the form of ordinary or partial differential equations. In the last century, research was directed towards forward modeling, which consists of obtaining analytical and numerical solutions of differential equations.
With the advent of high-dimensional sensing systems and the acquired big data, recently the research is more focused on learning about the parameters of the continuous spatiotemporal process by addressing the inverse problem \\citet{tarantola2006popper, tarantola2005inverse, lieberman2010parameter, nagarajaiah2017modeling}. As scientists and engineers, we are cognizant of the governing theory of the multidimensional analog processes that are measured digitally. The domain knowledge allows for the description of the physical process in the form of a mathematical model. But we need to estimate the unknown parameters that identify the final connection between the observations we are measuring and the inherent physical processes which characterize them. Several studies have been conducted previously to estimate the parameters of the ordinary differential equation (ODE) models from its observations \\citep{ramsay2007parameter, peifer2007parameter, brunton2016discovering, lai2019sparse, lai2021structural}. The problem becomes harder in the case of models represented by partial differential equations (PDEs) compared to ODEs as the former includes differentials with respect to multiple variables depending on the dimensions of the model (e.g. spatiotemporal PDE models in fluid mechanics, wave optics, or geophysics).\n\nOne of the prevalent approaches of estimating PDE parameters involves optimizing the parameter space of the PDE by minimizing the difference between the numerically simulated response to the observed measurements \\citep{muller2002fitting}. But the optimization problem suffers from the presence of local minima different from global minima (non-convex) \\citep{muller2004parameter}. Also, the method requires knowledge of the boundary conditions and involves large computational cost. The other approach is based on regression analysis to estimate parameters of the temporal and spatial derivative terms in the PDE model \\citep{bar1999fitting,voss1999amplitude,liang2008parameter}. The spatial and temporal derivatives are obtained from the measured process data by performing numerical differentiation. This two-stage approach of numerical differentiation and regression has been preferred over the first approach because of its computational simplicity. \\citet{rudy2017data} and \\citet{schaeffer2017learning} extend the two-stage method to discover the structure of the PDE model from an overcomplete dictionary of feasible mathematical terms by implementing sparse linear regression. \\citet{xun2013parameter} extends the generalized smoothing approach of \\citet{ramsay2007parameter} for ODE models to estimate the parameters of PDE models. In recent times, with the emergence of big data and high-performance computational frameworks, deep learning algorithms have been implemented to address inverse problems in diverse scientific fields such as biomedical imaging \\citep{lucas2018using,ongie2020deep,jin2017deep}, geophysics \\citep{seydoux2020clustering, zhang2019regularized}, cosmology \\citep{ribli2019improved} to name a few. Similar attempts have been made to solve the inverse problem of PDE model identification by using deep neural networks \\citep{raissi2019physics, long2018pde, long2019pde, both2021deepmod}. The general approach involves fitting the measured response variable using a deep regression neural network. 
A separate neural network enables the implementation of the PDE model using automatic\/numerical differentiation of the fitted response model with respect to the independent variables.\n\nThe previously presented methods can be broadly categorized into two classes: regression-based and deep learning-based methods. Both classes of methods identify the latent PDE model from the measured full-field data devoid of the iterative numerical solution of the PDE model, thereby achieving higher computational efficiency. Nonetheless, both classes of methods suffer from significant drawbacks that (a) the regression-based method suffers from the inaccurate estimation of numerical derivatives in the presence of noise, especially the higher-order derivatives \\citet{rudy2017data}, and (b) the deep learning methods lack any formal rule regarding the choice of network architecture, initialization, activation functions, or optimization schemes \\citet{raissi2019physics}. The first limitation has been explicitly mentioned by the authors in \\citep{rudy2017data} where they report a substantial error in the estimation of the parameter of a fourth-order Kuramoto-Sivashinsky PDE model in the presence of a small amount of noise. The second limitation is discussed in greater detail by \\citep{raissi2019physics}. Not only the scientific interpretability of the deep learning models is absent \\citet{gilpin2018explaining, ribeiro2016model}, but also there is growing skepticism over the stability of its solution to the inverse problems \\citet{antun2020instabilities, gottschling2020troublesome}. The repeatability of its outcomes \\citet{hutson2018artificial, vamathevan2019applications} on account of randomness in the data or initialization and its robustness against adversarial perturbations \\citet{belthangady2019applications} have been increasingly questioned. Such concerns of repeatability can be found in the deep learning model of \\citep{both2021deepmod} where the method identifies the PDE models only for some of the randomized trials.\n\nThis paper addresses the above-mentioned shortcomings by proposing the method of \\textbf{SNAPE} (\\textbf{S}imulta\\textbf{N}eous Basis Function \\textbf{A}pproximation and \\textbf{P}arameter \\textbf{E}stimation) which stands on the ideals of theory-guided learning \\citep{karpatne2017theory, roscher2020explainable}, a progressive practice of data science in the scientific community. \\textbf{SNAPE} infers the parameters of the linear and nonlinear differential equation (both ODEs and PDEs) models from the measured observations of the responses with the use of domain knowledge of the physical process or any general multidimensional processes. The proposed method in this paper incorporates the concept of a generalized smoothing approach by fitting basis functions to the measured response; unlike studies by \\citep{ramsay2007parameter} and \\citep{xun2013parameter} wherein discrete sampling of penalized splines is adopted. Such approximate numerical treatments are not amenable in noisy conditions and have to be replaced by exact differentiation. In this paper we propose the use of exact differentiation of spline basis functions. The coefficients of the basis functions are constrained to satisfy the differential equation for all the observed measurements of the multidimensional general process response. 
The parameters of the differential equations, as well as the coefficients of the basis functions, are simultaneously evaluated using the alternating direction method of multipliers (ADMM) optimization algorithm \\citet{gabay1976dual, yang2011alternating, boyd2011distributed}. The proposed method does not require knowledge of the initial or boundary conditions of the model. \\textbf{SNAPE} demonstrates its robustness by successfully estimating parameters of differential equation models from data perturbed with a large amount of noise (nearly 100\\% Gaussian noise). The repeatability of the proposed method is guaranteed by inferring the model parameters from parametric bootstrap samples \\citet{efron1994introduction}, thereby obtaining the mean and the confidence bounds of the estimates.\n\n\n\n\\section{Results}\n\\label{sec:Results}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\textwidth]{Figure01.eps}\n\t\\caption{\\textbf{SimultaNeous basis function Approximation and Parameter Estimation (SNAPE) of Partial Differential Equation (PDE) models from data.} \\textbf{(A) }Measured data with noise $\\mathrm{\\epsilon}$ of a general two-dimensional dynamic process $g\\left(\\mathbf{x}\\right)$ at $n$ discrete points \\textbf{ (B) }basis function approximation with unknown coefficients $\\boldsymbol{\\beta }$ of both the process response and its partial derivatives with respect to the independent variables $\\mathbf{x}$ \\textbf{ (C) }the PDE model $\\mathcal{F}\\left(\\right)=\\mathbf{0}$ as a function of the independent variables, the response, and its partial derivatives with unknown parameters $\\boldsymbol{\\theta }$, which governs the process \\textbf{(D)} the response and the partial derivative terms of the PDE model are constrained to simultaneously obtain the optimum basis coefficients ${\\boldsymbol{\\beta }}^*$ that approximates the measured response and the optimum parameters ${\\boldsymbol{\\theta }}^*$ that satisfy the underlying PDE model by the measured data. Likewise, the application of \\textbf{SNAPE} algorithm proposed herein can be adopted for PDE models by generalizing to any multidimensional processes (i.e., $\\mathbf{x}\\mathrm{=}\\left(x_{\\mathrm{1}},x_{\\mathrm{2}}\\mathrm{,\\dots }t\\right)\\mathrm{\\in }{\\mathbb{R}}^p$).}\\label{fig:1}\n\\end{figure}\n\n\nA multidimensional dynamic process is represented by its response $g\\left(\\mathbf{x}\\right)$, with $\\mathbf{x}\\mathrm{=}\\left(x_{\\mathrm{1}},x_{\\mathrm{2}}\\mathrm{,\\dots }t\\right)\\mathrm{\\in }{\\mathbb{R}}^p$ being the multidimensional domain of the process. In the case of solid and fluid mechanics, the domain may consist of three spatial and one temporal coordinate. In the subsequent part, the application of the proposed method is described using PDEs that provide a more generalized form of a differential equation. 
Such initial-boundary value problems are represented by a PDE model which is satisfied within the domain $\\mathbf{x}\\mathrm{\\in }\\mathbf{\\boldsymbol{\\varOmega}}$ given by\n\n\\begin{equation} \\label{EQ01} \n\t\\mathcal{F}\\left(\\mathbf{x},g,\\dots ,g^q,\\frac{\\partial g}{\\partial x_1},\\dots ,\\frac{\\partial g}{\\partial t},\\frac{{\\partial }^2g}{\\partial x_1\\partial x_1},\\dots ,\\frac{{\\partial }^2g}{\\partial x_1\\partial t},g\\frac{\\partial g}{\\partial x_1},\\dots ,\\frac{\\partial g}{\\partial x_1}\\frac{\\partial g}{\\partial t},\\dots ;\\boldsymbol{\\theta }\\right)=0,\\qquad \\mathbf{x}\\in \\boldsymbol{\\varOmega} \n\\end{equation} \n\nwhere the parameter vector $\\boldsymbol{\\theta }=\\left({\\theta }_1,\\dots ,{\\theta }_m\\right)$ are the coefficients of the PDE model having parametric form in $g\\left(\\mathbf{x}\\right)$ and its partial derivatives. The uniqueness of the solution is established by defining the initial and boundary conditions of the aforementioned process which is satisfied at the boundary of the domain $\\mathbf{x}\\mathrm{\\in }\\boldsymbol{\\varGamma}$ given by \n\n\\begin{equation} \\label{EQ02} \n\t\\mathcal{H}\\left(g,\\dots ,g^q,\\frac{\\mathrm{\\partial }g}{\\mathrm{\\partial }x_1},\\dots ,\\frac{\\mathrm{\\partial }g}{\\mathrm{\\partial }t},\\dots ,\\frac{{\\mathrm{\\partial }}^qg}{\\mathrm{\\partial }x^q_1},\\dots ,\\frac{{\\mathrm{\\partial }}^qg}{\\mathrm{\\partial }t^q},\\dots \\right)=h\\left(\\mathbf{x}\\right),\\qquad \\mathbf{x}\\mathrm{\\in }\\boldsymbol{\\varGamma} \n\\end{equation} \n\nThe initial or the boundary conditions are referred to as homogeneous if $h\\left(\\mathbf{x}\\right)=0$. The PDE model in equations \\ref{EQ01} and \\ref{EQ02} represents the most general form of constant-coefficient nonlinear PDE model of arbitrary order. Even if the solution of the PDE model represents continuous multivariate function and its domain $\\boldsymbol{\\varOmega}$ and boundary $\\boldsymbol{\\varGamma}$ represents continuous functional space, in a practical scenario we acquire data in discrete points of the multidimensional domain which are contaminated with measurement noise. Assuming $g\\left(\\mathbf{x}\\right)$ is measured as its surrogate $\\mathbf{y}\\left(\\mathbf{x}\\right)$ at discrete points within the multidimensional domain $\\boldsymbol{\\varOmega}$, $\\mathbf{x}\\mathrm{=}\\left(x_{\\mathrm{1}},x_{\\mathrm{2}}\\mathrm{,\\dots }t\\right)\\ \\mathrm{\\in }{\\mathbb{R}}^p$ having the measurements $\\left(y_i,{\\mathbf{x}}_i\\right)$, where $i\\ =\\ 1,\\dots ,n$ satisfying $y_i=g\\left({\\mathbf{x}}_i\\right)+{\\mathrm{\\epsilon}}_i$. The independent and identically distributed homoscedastic measurement noise ${\\mathrm{\\epsilon}}_i$, $i\\ =\\ 1,\\dots ,n$ are assumed to follow a Gaussian distribution with zero mean and ${\\mathrm{\\sigma}}^{\\mathrm{2}}_{\\mathrm{\\epsilon}}$ variance.\n\nThe objective of the present study is to estimate the unknown $\\boldsymbol{\\theta }$ in the PDE model of equation \\ref{EQ01} from the noisy measurement data. 
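For concreteness, the two-dimensional wave equation analyzed later in this section is one instance of the general form of equation \ref{EQ01}: with $g\equiv u$ and $\mathbf{x}=(x,y,t)$ it reads
\[
\mathcal{F}=\frac{\partial^2 u}{\partial t^2}-\theta_1\frac{\partial^2 u}{\partial x^2}-\theta_2\frac{\partial^2 u}{\partial y^2}=0,\qquad \boldsymbol{\theta}=(\theta_1,\theta_2),
\]
so that estimating $\boldsymbol{\theta}$ amounts to recovering the coefficients that play the role of the squared propagation speeds along the two spatial directions.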
The proposed method of \\textbf{SNAPE} takes into account the PDE model and the associated unknown parameter vector $\\boldsymbol{\\theta }=\\left({\\theta }_1,\\dots ,{\\theta }_m\\right)$ by expressing the process response $g\\left(\\mathbf{x}\\right)$ as an approximation to the linear combination of basis functions given by\n\n\\begin{equation} \\label{EQ03} \n\tg\\left(\\mathbf{x}\\right)\\approx \\overline{g}\\left(\\mathbf{x}\\right)=\\sum^K_{k=1}{b_k\\left(\\mathbf{x}\\right){\\beta }_k}={\\mathbf{b}}^T\\left(\\mathbf{x}\\right)\\boldsymbol{\\beta } \n\\end{equation}\n\nwhere $\\mathbf{b}\\left(\\mathbf{x}\\right)=\\{b_1\\left(\\mathbf{x}\\right),\\dots ,b_K\\left(\\mathbf{x}\\right){\\}}^T$ is the vector of basis functions and $\\boldsymbol{\\beta }={\\left({\\beta }_1,\\dots ,{\\beta }_K\\right)}^T$ is the vector of basis coefficients. In this study, the B-splines are chosen as basis functions for all the applications. It is conjectured that B-splines bring about nearly orthogonal basis functions \\citep{berry2002bayesian} and exhibits compact support property \\citep{de1978practical}, i.e., non-zero only in short subinterval. The multidimensional B-splines are generated from the tensor product of the individual one-dimensional B-splines \\citep{de1978practical}.\n\nThe PDE model in equation \\ref{EQ01} is represented by the same linear combination of basis functions as\n\n\\begin{equation} \\label{EQ04} \n\t\\mathcal{F}\\left(\\mathbf{x},{\\mathbf{b}}^T\\left(\\mathbf{x}\\right)\\boldsymbol{\\beta },\\dots ,\\{{\\mathbf{b}}^T\\left(\\mathbf{x}\\right)\\boldsymbol{\\beta }{\\}}^q,\\{\\partial \\mathbf{b}\\left(\\mathbf{x}\\right)\/\\partial x_1{\\}}^T\\boldsymbol{\\beta },\\dots ;\\boldsymbol{\\theta }\\right)=0,\\qquad \\mathbf{x}\\in \\boldsymbol{\\varOmega} \n\\end{equation} \n\nInstead of directly estimating the PDE parameters $\\boldsymbol{\\theta }=\\left({\\theta }_1,\\dots ,{\\theta }_m\\right)$, the local parameters of the basis functions, $\\boldsymbol{\\beta }={\\left({\\beta }_1,\\dots ,{\\beta }_K\\right)}^T$, are estimated from the noisy data by imposing the constraint that the data satisfies the underlying governing PDE $\\mathcal{F}=0$ given in equation \\ref{EQ01} for each of the observations.\n\nThus, the method of \\textbf{SNAPE} solves the following constrained optimization problem:\n\n\\begin{equation} \\label{EQ05} \n\t\\begin{array}{c}\n\t\t{\\mathop{\\mathrm{min}}_{\\boldsymbol{\\beta },\\boldsymbol{\\theta }}\\ \\ \\ \\ \\sum^n_{i=1}{{\\left\\{y_i-{\\mathbf{b}}^T\\left({\\mathbf{x}}_i\\right)\\boldsymbol{\\beta }\\right\\}}^2}\\ } \\\\ \n\t\tsubject\\ to\\ \\ \\ \\ \\mathcal{F}\\left(\\mathbf{x},{\\mathbf{b}}^{\\mathbf{T}}\\left(\\mathbf{x}\\right)\\boldsymbol{\\beta },\\dots ,\\{{\\mathbf{b}}^{\\mathbf{T}}\\left(\\mathbf{x}\\right)\\boldsymbol{\\beta }{\\}}^q,\\{\\partial \\mathbf{b}\\left(\\mathbf{x}\\right)\/\\partial x_1{\\}}^T\\boldsymbol{\\beta },\\dots ;\\boldsymbol{\\theta }\\right)=\\mathbf{0},\\qquad \\mathbf{x}\\mathrm{\\in }\\boldsymbol{\\varOmega} \\end{array}\n\\end{equation}\t\n\t\nFigure \\ref{fig:1} illustrates the details of the proposed method of \\textbf{SNAPE }for estimating the parameters of the PDE model using simultaneous basis function approximation. 
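To convey the structure of this joint estimation, the following minimal sketch recasts equation \ref{EQ05} as a quadratic-penalty problem that is tightened gradually. It is only a simplified stand-in for the ADMM updates used in \textbf{SNAPE}; the design matrix \texttt{B}, the residual callable \texttt{pde\_residual}, and the starting values are generic placeholders rather than quantities taken from the paper.
\begin{verbatim}
# Simplified quadratic-penalty stand-in for the constrained problem of Eq. (5).
# B is the n x K matrix of basis functions evaluated at the measurement points and
# pde_residual(beta, theta) returns the PDE constraint evaluated at those points.
import numpy as np
from scipy.optimize import least_squares

def fit_penalty(y, B, pde_residual, beta0, theta0, rhos=(1e0, 1e2, 1e4)):
    """Minimize ||y - B beta||^2 + rho * ||F(beta, theta)||^2 for increasing rho."""
    nb = len(beta0)
    z = np.concatenate([beta0, theta0])
    for rho in rhos:
        def stacked(z):
            beta, theta = z[:nb], z[nb:]
            return np.concatenate([y - B @ beta,
                                   np.sqrt(rho) * pde_residual(beta, theta)])
        z = least_squares(stacked, z).x       # trust-region least squares
    return z[:nb], z[nb:]                     # (basis coefficients, PDE parameters)
\end{verbatim}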
Even though the domain of the illustrated process in figure \\ref{fig:1} is restricted to two dimensions for the purpose of visualization, the applicability of \\textbf{SNAPE} can be generalized for any multidimensional PDE model.\n \n\\subsection{Wave equation in two space dimensions}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\textwidth]{Figure02.eps}\n\t\\caption{\\textbf{Spatiotemporal response of the 2D wave equation.} The full-field measured response of the 2D wave PDE model at time instants of \\textbf{(A) }$t=3$\\textbf{ (C)}$t=5.5$ for an instance of 10\\% Gaussian noise. The corresponding snapshots at \\textbf{(B)}$t=3$ and \\textbf{(D)}$t=5.5$ displays the smooth analytical approximation to the PDE solution estimated using \\textbf{SNAPE}. The black plus markers denote the positions whose time histories are shown in \\textbf{(E)} for $(x=-0.5,\\ y=-0.5)$ and \\textbf{(F)} for $(x=0,\\ y=0)$. Even in the presence of moderate noise, \\textbf{SNAPE} successfully approximates the true solution.}\\label{fig:2}\n\\end{figure}\n\nThe wave equation represents PDE of the scalar function $u\\left(\\mathbf{x}\\right)$ where the domain $\\mathbf{x}\\mathrm{\\in }\\left(x_1,x_2,\\dots ,x_m;t\\right)$ consists of a time variable and $m$ spatial variables. The PDE is expressed as ${\\mathrm{u}}_{\\mathrm{tt}}=c^2{\\mathrm{\\nabla }}^2u$ where $c$ is a real coefficient and ${\\mathrm{\\nabla }}^2$ is the Laplacian operator. This second-order linear PDE forms the basis of various fields of physics such as classical mechanics, quantum mechanics, geophysics, general relativity to name a few. The parameters of the PDE model bear information regarding the physical property of the medium through which the wave is propagating along the corresponding spatial direction. It is assumed that dense measurements of the dependent scalar quantity, which may be the pressure in a fluid medium or the displacement along a specific direction, are acquired using sensors. The goal of the present study is to infer the physics from the measured data. As the physics of the dynamic process is known to us which is expressed in the mathematical form of the PDE, we need to estimate its parameters to infer the properties of the media.\n\nAs an example, the numerical solution to the following PDE with parameters $\\boldsymbol{\\theta }=\\left(1.0,1.0\\right)$ is obtained which represents 2D wave propagation.\n\n\\begin{equation} \\label{EQ06} \n\t\\frac{{\\partial }^2u}{\\partial t^2}={\\theta }_1\\frac{{\\partial }^2u}{\\partial x^2}+{\\theta }_2\\frac{{\\partial }^2u}{\\partial y^2} \n\\end{equation} \n\nA square spatial dimension is selected with geometry $(x,y)\\mathrm{\\in }\\left[-1.0,1.0\\right]$ and time span of $t\\mathrm{\\in }\\left[0,10\\right]$. Both the Dirichlet $\\left(u=0\\right)$ and the Neumann $\\left(\\partial u\/\\partial y=0\\right)$ boundary conditions are applied at the opposite edges of$\\ x=-1.0$, $x=1.0,$ and $y=-1.0$, $y=1.0$ respectively. The initial condition of the dynamic process is set to $u\\left(x,y,0\\right)=3sin\\left(\\mathrm{\\pi }x\\right)exp\\left(sin\\left(\\frac{\\mathrm{\\pi }}{2}y\\right)\\right)$ and $u_t\\left(x,y,0\\right)=tan^{-1}\\left(cos\\left(\\frac{\\pi }{2}x\\right)\\right)$. The generated response $u\\left(x,y,t\\right)\\mathrm{\\in }{\\mathbb{R}}^{50\\times 50\\times 100}$ is corrupted with 10\\% Gaussian noise to simulate measurement noise from the sensors. The proposed method of \\textbf{SNAPE} is adopted to infer the PDE parameters. 
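The precise meaning of ``10\% Gaussian noise'' is not spelled out above; a common convention, assumed in the following sketch, is zero-mean Gaussian noise whose standard deviation equals the stated percentage of the standard deviation of the clean field.
\begin{verbatim}
import numpy as np

def add_percent_gaussian_noise(u, percent, rng=None):
    """Return u plus zero-mean Gaussian noise with std = (percent/100) * std(u).
    The scaling convention is an assumption; the text does not state it explicitly."""
    rng = np.random.default_rng() if rng is None else rng
    return u + rng.normal(0.0, (percent / 100.0) * np.std(u), size=u.shape)

# e.g. y = add_percent_gaussian_noise(u_clean, 10.0) for the 50 x 50 x 100 wave field
\end{verbatim}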
The mean of the estimated parameters $\\overline{\\boldsymbol{\\theta }}=\\left(1.002,1.022\\right)$ of the PDE model exhibits superior accuracy from the noise corrupted measured data. The robustness to noise is further demonstrated by computing the coefficient of variation (\\textit{cov}) of the estimates to be as low as $cov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.07,\\ 0.60\\right)\\%$. It also estimates the analytical approximation of the solution to the PDE model without the knowledge of the initial and boundary conditions which generated the acquired dynamic response. Figures \\ref{fig:2}(A) and \\ref{fig:2}(C) show the measured response of the system with one such random instance of Gaussian noise at the time instants of $t=3$ and $t=5.5$ respectively. The estimated approximate solution from the discrete measurements consists of a smooth continuous function as shown in Figures \\ref{fig:2}(B) and \\ref{fig:2}(D) for the same corresponding time instants. The time histories of two localized positions are shown in figures \\ref{fig:2}(E) and \\ref{fig:2}(F) that compares the measured response and the estimated function with the true response of the system. It is evident that the estimated function of the solution satisfactorily approximates the true response.\n\n\\subsection{Chaotic response of forced Duffing oscillator}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{Figure03.eps}\n\t\\caption{\\textbf{Chaotic solution of forced Duffing equation.} The solution of the nonlinear ODE of forced Duffing oscillator exhibits deterministic chaos for certain values of parameters as discussed in the text. \\textbf{(A)} One such instance of measured chaotic response with 10\\% Gaussian noise. \\textbf{(B) }The magnified time history demonstrates the ability of the proposed method to estimate the chaotic solution even from moderate noisy data.}\\label{fig:3}\n\\end{figure}\n\nThe Duffing equation represents the nonlinear dynamics of a system with cubic nonlinearity. The parameters $\\boldsymbol{\\theta }=\\left({\\theta }_1,\\ {\\theta }_2,\\ {\\theta }_3\\right)$ in the nonhomogeneous ODE $x_{tt}+{\\theta }_1x_t+{\\theta }_2x+{\\theta }_3x^3=\\gamma cos(\\omega t)$ provides the linear damping and stiffness as well as the nonlinear cubic stiffness of the system. At the forcing parameters of $\\gamma =0.42$ and $\\omega \\ =\\ 1$ and system parameters of $\\boldsymbol{\\theta }=\\left(0.5,\\ -1,\\ 1\\right)$ the solution of the nonlinear ODE exhibits deterministic chaos. For the provided values of the ODE parameters, the system is numerically solved for period $t\\mathrm{\\in }\\left[0,\\ 200\\right]$ and the response $x(t)\\mathrm{\\in }{\\mathbb{R}}^{4000}$ is perturbed with 10\\% Gaussian noise to mimic measurement noise. One such random instance of measured data is compared with the true response in Figure \\ref{fig:3}(A). Figure \\ref{fig:3}(B) shows the magnified section of the small part of the data. \\textbf{SNAPE} is applied to the noise corrupted chaotic response to infer the parameters of the system. The mean of the estimated parameters is $\\overline{\\boldsymbol{\\theta }}=\\left(0.49,\\ -1.0,\\ \\ 0.99\\right)$ and the corresponding uncertainty of estimation as$\\ cov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(1.06,\\ 0.98,\\ 0.63\\right)\\%$ signifies the superior accuracy and robustness of the proposed method. 
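For reference, a data set with the characteristics described above can be reproduced approximately with the sketch below; the integrator tolerances and the initial state are illustrative assumptions, since they are not reported in the text.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

theta1, theta2, theta3 = 0.5, -1.0, 1.0        # damping, linear and cubic stiffness
gamma, omega = 0.42, 1.0                       # forcing amplitude and frequency

def duffing(t, z):
    x, v = z
    return [v, gamma * np.cos(omega * t) - theta1 * v - theta2 * x - theta3 * x**3]

t_eval = np.linspace(0.0, 200.0, 4000)
sol = solve_ivp(duffing, (0.0, 200.0), [0.0, 0.0],   # initial state assumed, not reported
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
rng = np.random.default_rng(0)
y = sol.y[0] + 0.10 * np.std(sol.y[0]) * rng.standard_normal(4000)   # 10% Gaussian noise
\end{verbatim}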
Also, the analytical approximate solution of the Duffing equation compares well with the true solution as shown in Figures \\ref{fig:3}(A) and \\ref{fig:3}(B).\n\n\\subsection{Parameter estimation of Navier-Stokes equations}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\textwidth]{Figure04.eps}\n\t\\caption{\\textbf{Inferring Navier-Stokes equation from 10\\% Gaussian added noise.} The full-field measured response of the Navier-Stokes PDE model shows vortex-shedding at time instants of \\textbf{(A) }$t=3$\\textbf{ (C)}$t=5$ for one random instance of 10\\% Gaussian noise. \\textbf{SNAPE }estimates smooth approximation to the solution from the noisy data whose corresponding snapshots at \\textbf{(B)}$t=3$ and \\textbf{(D)}$t=5$ are shown for comparison. The nonlinearity of the response is evident looking at the measured time histories of the positions \\textbf{(E)} $(x=-0.5,\\ y=-0.5)$ and \\textbf{(F)} $(x=0,\\ y=0)$. Regardless of the added noise and discretization error, \\textbf{SNAPE} provides an estimated analytical solution of the PDE that satisfactorily approximates the hidden true solution.}\\label{fig:4}\n\\end{figure}\n\nThe Navier-Stokes equations are a set of coupled nonlinear PDEs which describe the dynamics of fluids. The study of these equations is ubiquitous in a wide variety of scientific applications including climate modeling, blood flow in the human body, ocean currents, pollution analysis, and many more. This example involves incompressible flow past a cylinder which exhibits an asymmetric vortex shedding pattern in the wake of the cylinder. The equation in terms of the vorticity and velocity fields is given by\n\n\\begin{equation} \\label{EQ07} \n\t\\frac{\\partial \\omega }{\\partial t}+{\\theta }_1\\frac{{\\partial }^2\\omega }{\\partial x^2}+{\\theta }_2\\frac{{\\partial }^2\\omega }{\\partial y^2}+{\\theta }_3u\\frac{\\partial \\omega }{\\partial x}+{\\theta }_4v\\frac{\\partial \\omega }{\\partial y}=0 \n\\end{equation}\n\nThe two components of the velocity field data $u\\left(x,y,t\\right)$ and $v\\left(x,y,t\\right)$ are obtained from \\citet{raissi2019physics} where the numerical solution of equation \\ref{EQ07} is performed for the parameter values$\\mathrm{\\ }\\boldsymbol{\\theta }=\\left(-0.01,-0.01,\\ 1.0,1.0\\right)$. The vorticity field data $\\omega \\left(x,y,t\\right)$ is evaluated numerically from the velocity field data. The vorticity as well as the two components of velocity field datasets $\\left(\\omega ,u,v\\right)\\in {\\mathbb{R}}^{100\\times 50\\times 100}$ are perturbed with 10\\% Gaussian noise to simulate the measured data. The discrete measurement data is acquired over a rectangular domain of $x\\mathrm{\\in }\\left[1.0,\\ 8.0\\right]$ and $y\\mathrm{\\in }\\left[-2.0,\\ 2.0\\right]$ with the period of $t\\mathrm{\\in }\\left[0,\\ 9.9\\right]$. The mean of the estimated parameters $\\overline{\\boldsymbol{\\theta }}=\\left(-0.01,-0.006,\\ 0.88,\\ 0.91\\right)\\ $with the uncertainty $cov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(5.46,\\ 5.28,\\ 5.33,\\ 4.44\\right)\\%$ using the method of \\textbf{SNAPE }compares satisfactorily well with the exact values considering the discretization error while evaluating vorticity from the velocity components. Figures \\ref{fig:4}(A) and \\ref{fig:4}(C) show one instance of measured noise-corrupted vorticity field at $t=3$ and $t=5$ respectively. 
The corresponding smooth analytical approximation of the solution is shown in figures \\ref{fig:4}(B) and \\ref{fig:4}(C). The comparison of time histories of the estimated solution with the true response, at two different locations as shown in figures \\ref{fig:4}(E) and \\ref{fig:4}(F), corroborate the efficacy of the present method.\n\n\\subsection{Application in classical and quantum mechanics}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\textwidth]{Figure05.eps}\n\t\\caption{\\textbf{Learning the nonlinear Schr\\\"{o}dinger equation from the complex field data.} \\textbf{(A)} The magnitude of the complex field data $|\\psi \\left(x,t\\right)|$ perturbed with a random realization of 10\\% Gaussian noise overlaid on the surface of the true solution. \\textbf{(B) }The real component of the measured complex field. \\textbf{(C) }The real component of the estimated approximate solution. \\textbf{(D)} The imaginary component of the measured complex field. \\textbf{(E) }The imaginary component of the estimated approximate solution. The comparison of the magnitude of the estimated solution from the noisy complex field data to the magnitude of the true solution \\textbf{(F) }at time instant $t=1$ and \\textbf{(G)} at position $x=1$ reveals the efficacy and robustness of \\textbf{SNAPE}.}\\label{fig:5}\n\\end{figure}\n\nThe nonlinear Schr\\\"{o}dinger equation (NLSE) finds its application in light propagation through nonlinear optical fibers, the study of Bose-Einstein condensates, and small amplitude surface gravity waves. This example extends the applicability of the proposed method for complex fields $\\psi \\left(x,t\\right)$ whose PDE is given as\n\n\\begin{equation} \\label{EQ08} \n\t\\frac{\\partial \\psi }{\\partial t}+{\\theta }_1\\frac{{\\partial }^2\\psi }{\\partial x^2}+{\\theta }_2{\\left|\\psi \\right|}^2\\psi =0 \n\\end{equation}\n\nThe data $\\psi \\left(x,t\\right)\\in {\\mathbb{C}}^{512\\times 501}$\\textit{ }is obtained from \\citet{rudy2017data} where the above PDE is numerically solved for the parameter values $\\boldsymbol{\\theta }=\\left(-0.5i,-1.0i\\right)$. The solution domain consists of $x\\mathrm{\\in }\\left[-5,5\\right]$ and $t\\mathrm{\\in }\\left[0,\\mathrm{\\pi }\\right]$. Like before, 10\\% Gaussian noise is added to mimic the measurement data acquired using sensors. \\textbf{SNAPE} is applied to the complex field measurement data, and with the domain knowledge of the structure of the governing PDE the mean of the estimated parameters is $\\overline{\\boldsymbol{\\theta }}=\\left(-0.44i,-0.96i\\right)$ with a low uncertainty bound of $cov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.76,\\ 0.31\\right)\\%$. Figure \\ref{fig:5}(A) shows the magnitude of an instance of noise corrupted measured complex field data superimposed on the true solution of the NLSE of equation \\ref{EQ08}. The real and the imaginary components of the measured complex field data are shown in figures \\ref{fig:5}(B) and \\ref{fig:5}(D) respectively. \\textbf{SNAPE} not only infers the parameters of the NLSE but also is successful in estimating the analytical approximate solution of NLSE. Figures \\ref{fig:5}(C) and \\ref{fig:5}(E) show the real and imaginary components of the estimated approximate solution. 
The efficacy of the proposed method is further exemplified in figures \\ref{fig:5}(F) and \\ref{fig:5}(G) where the magnitude of the analytical approximate solution estimated from noisy measured data is compared with the magnitude of the true solution at the time instant $t=1$ and the location $x=1$ respectively.\n\n\\subsection{Theory-guided learning of parametric ODEs and PDEs}\n\nTable \\ref{tab:01} exhibits the application of \\textbf{SNAPE} on the measured response of a broad range of differential equation models predominant in the scientific community. The response includes both periodic as well as chaotic oscillations from one-dimensional time histories (ODEs) to multidimensional spatiotemporal dynamics (PDEs). The measured responses of all the systems reveal strong nonlinearity apart from the linear wave equation. For each of the models, the constrained equation in the optimization of Eq. 5 is custom-built following the convention of \\textit{theory-guided learning}. The simulated real, as well as the complex field data, is corrupted with Gaussian noise to take into consideration the eminent noise from the sensors and acquisition devices. The robustness and repeatability of \\textbf{SNAPE} are demonstrated by performing repeated estimation on 10 bootstrap samples of noise corrupted data. Unlike deep learning-based methods \\citep{both2021deepmod}, \\textbf{SNAPE} successfully learns the differential equations for each random instance of noisy data. Moreover, it provides uncertainty bounds of the estimated parameters that arise from the inherent randomness of the measurement noise and discretization errors. As the data for the PDEs of Kuramoto-Sivashinsky, Burgers', Korteweg-de Vries, and Schr\\\"{o}dinger equation are obtained from \\citet{rudy2017data}, the results of the estimation provide a direct comparison of the regression-based method (\\textit{9}) with the proposed method of \\textbf{SNAPE}. For all four cases, the \\textbf{SNAPE }exhibits higher accuracy and robustness to noise. The superior performance is more prominent in the case of higher-order PDEs like the Kuramoto-Sivashinsky equation where the accuracy of estimation of \\textbf{SNAPE} on 5\\% noise is much higher than that of the method in \\citet{rudy2017data} on 1\\% noise. The velocity field data of the Navier-Stokes equation is obtained from \\citet{raissi2019physics} while the vorticity field data is computed from the velocity field data through numerical differentiation. Even though both the components of velocity and the vorticity data are corrupted with noise, the accuracy of the \\textbf{SNAPE} estimates is similar to that of the deep learning-based method in \\citet{raissi2019physics} for 1\\% noise. Besides, \\textbf{SNAPE }is successful in providing stable and robust estimates of the Navier-Stokes PDE parameters even for the higher amount of added noise. The results of the tabulated examples demonstrate the applicability and reliability of the proposed method for a wide variety of spatiotemporal processes where scientific theories are available.\n\\begin{landscape}\n\t\\begin{table}[htbp]\n\t\t\\centering\n\t\t\\caption{\\textbf{Parameter estimation of differential equation models prevalent in mathematical sciences. }For each of the examples, the standard form of the differential equations is provided along with the exact values of parameters used to simulate the responses. 
\\textbf{SNAPE} is applied on 10 bootstrap samples generated from 1\\% and 5\\% Gaussian noise corrupted response for each of the examples. The mean $\\overline{\\boldsymbol{\\theta }}$ and the coefficient of variation $cov\\left(\\boldsymbol{\\theta }\\right)$ of the estimated parameters demonstrates the accuracy and robustness of the proposed method.}\n\t\t\\resizebox{\\textwidth}{!}{%\n\t\t\t\\begin{tabular}{|P{2.0in}|P{2.0in}|P{2.0in}|P{2.5in}|P{2.5in}|} \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt] \n\t\t\t\t\\textbf{Differential Equations} & \\textbf{Form} & \\textbf{Exact} & \\textbf{1\\% Noise} & \\textbf{5\\% Noise} \\\\[10pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{Van der Pol oscillator} & $x_{tt}+{\\theta }_1x_t+{\\theta }_2x^2x_t+{\\theta }_3x=0$ & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(-8,\\ 8,\\ 1\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(-7.95,\\ 8.03,\\ 1.01\\ \\right) \\\\ \n\t\t\t\t\tcov\\left(\\boldsymbol{\\theta }\\right)\\boldsymbol{=}\\left(0.19,\\ 0.19,\\ 0.24\\right)\\% \\end{array}\n\t\t\t\t$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(-8.00,\\ 8.08,\\ 1.03\\ \\right) \\\\ \n\t\t\t\t\tcov\\left(\\boldsymbol{\\theta }\\right)\\boldsymbol{=}\\left(1.84,\\ 1.89,\\ 1.30\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{Forced Duffing oscillator} & $x_{tt}+{\\theta }_1x_t+{\\theta }_2x+{\\theta }_3x^3=0.42cos(t)$ & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(0.5,\\ -1,\\ 1\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(0.5,\\ -0.99,\\ 1.0\\ \\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.96,\\ 0.89,\\ 0.57\\right)\\% \\end{array}\n\t\t\t\t$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(0.49,\\ -0.99,\\ 1.0\\ \\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(1.0,\\ 0.93,\\ 0.60\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{2D Wave equation} & $u_{tt}={\\theta }_1u_{xx}+{\\theta }_2u_{yy}$\\textit{ } & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(1,\\ 1\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(1.00,\\ 1.00\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.02,\\ 0.18\\right)\\% \\end{array}\n\t\t\t\t$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(0.99,\\ 1.02\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.02,\\ 0.16\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{Kuramoto-Sivashinsky equation} & $u_t+{\\theta }_1uu_x+{\\theta }_2u_{xx}+{\\theta }_3u_{xxxx}=0$ & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(1,\\ 1,\\ 1\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(1.06,\\ 1.01,\\ 1.01\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.89,\\ 0.95,\\ 0.93\\right)\\% \\end{array}\n\t\t\t\t$\\textbf{\\textit{ \\newline }} & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(0.88,\\ 0.76,\\ 0.76\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta 
}\\boldsymbol{)=}\\left(21.8,\\ 17.3,\\ 17.9\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{Burgers' equation} & $u_t+{\\theta }_1uu_x+{\\theta }_2u_{xx}=0$\\textit{ } & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(1,\\ -0.1\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(1.01,\\ -0.10\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.05,\\ 0.11\\right)\\% \\end{array}\n\t\t\t\t$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(1.01,\\ -0.10\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.17,\\ 0.93\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{Korteweg-de Vries equation} & $u_t+{\\theta }_1uu_x+{\\theta }_2u_{xxx}=0$ & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(6,\\ 1\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(6.02,\\ 1.01\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.04,\\ 0.08\\right)\\% \\end{array}\n\t\t\t\t$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(6.03,\\ 1.03\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.18,\\ 0.38\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{Nonlinear Schr\\\"{o}dinger equation} & ${\\psi }_t+{\\theta }_1{\\psi }_{xx}+{\\theta }_2{\\left|\\psi \\right|}^2\\psi =0$\\textit{ } & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(-0.5i,\\ -1i\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(-0.49i,\\ -1.0i\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.05,\\ 0.2\\right)\\% \\end{array}\n\t\t\t\t$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(-0.45i,\\ -0.96i\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.44,\\ 0.17\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\t\t{} & {} & {} & {} & {} \\\\[5pt]\n\t\t\t\t\\textbf{Navier-Stokes equation} & ${\\omega }_t+{\\theta }_1{\\omega }_{xx}+{\\theta }_2{\\omega }_{yy}+{\\theta }_3u{\\omega }_x+{\\theta }_4v{\\omega }_y=0$ & $\\boldsymbol{\\theta }\\boldsymbol{=}\\left(-0.01,\\ -0.01,\\ 1,\\ 1\\right)$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(-0.01,\\ -0.01,\\ 1.01,\\ 1.02\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(0.02,\\ 0.14,\\ 0.06,\\ 0.05\\right)\\% \\end{array}\n\t\t\t\t$ & $ \\begin{array}{c}\n\t\t\t\t\t\\overline{\\boldsymbol{\\theta }}\\boldsymbol{=}\\left(-0.01,\\ -0.01,\\ 0.98,\\ 0.99\\right) \\\\ \n\t\t\t\t\tcov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(1.90,\\ 1.82,\\ 1.39,\\ 1.13\\right)\\% \\end{array}\n\t\t\t\t$ \\\\[20pt] \\hline \n\t\t\\end{tabular}}\n\t\t\\label{tab:01}%\n\t\\end{table}%\n\\end{landscape}\n\n\\subsection{Robustness to extreme noise}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\textwidth]{Figure06.eps}\n\t\\caption{\\textbf{Performance of SNAPE under extreme noise.} \\textbf{(A)} The estimated functional solution approximates the true solution of the Van der Pol equation exhibiting nonlinear relaxation oscillation using 
\\textbf{SNAPE} from measured time history perturbed with 50\\% Gaussian noise. \\textbf{(B) SNAPE} reveals the dominant phase portrait hidden within the cluster of noisy data. \\textbf{(C) }The cloud of 100\\% Gaussian noise corrupted response of Burgers' PDE model overlaid on its true response surface. \\textbf{(D)} The measured response due to the presence of extreme noise vaguely acquires the nonlinear traveling wave. \\textbf{(E) }Even in the presence of such extreme noise, \\textbf{SNAPE} not only estimates the parameters of the PDE model with reasonable accuracy, it also estimates the analytical solution that satisfactorily approximates the true solution as revealed from the cross-sections of the response corresponding to the dotted lines.}\\label{fig:6}\n\\end{figure}\n\nIn this part, an attempt is made to infer PDE model parameters and estimate its approximate solution using \\textbf{SNAPE} from measured data having extreme levels of noise. In practice, there are situations where an acquired signal contains elevated noise due to the specified limitations of the sensor or acquisition system. Often, we tend to discard those measurements as it is difficult to infer useful information regarding the physical properties of those processes that govern the acquired response. In such scenarios, we can apply the scientific domain knowledge we have about the process and try to infer as much physics from the extremely noisy data as possible. \\textbf{SNAPE} bridges the gap between well-established scientific theories and the latest data-driven learning algorithms.\n\nThe first example consists of the Van der Pol oscillator which exhibits non-conservative relaxation oscillations with nonlinear damping. Such relaxation oscillations are used in diverse physical and biological sciences, including but not limited to nonlinear electric circuits, geothermal geysers, networks of firing nerve cells, and the beating of the human heart. The evolution in time of the position $x$ is expressed by the differential equation $x_{tt}-\\mu \\left(1-x^2\\right)x_t+x=0$\\textit{ } where $\\mu $ is the nonlinear parameter that regulates the strength of damping and relaxation. In a more general form, the following ODE model is used to generate the data.\n\n\\begin{equation} \\label{EQ09} \n\t\\frac{d^2x}{dt^2}+{\\theta }_1\\frac{dx}{dt}+{\\theta }_2x^2\\frac{dx}{dt}+{\\theta }_3x=0 \n\\end{equation}\n\nThe generated time history $x\\left(t\\right)\\mathrm{\\in }{\\mathbb{R}}^{5000}$ for a period of $t\\mathrm{\\in }\\left[0,50\\right]$ with true parameter values of $\\boldsymbol{\\theta }=\\left(-8.0,\\ 8.0,\\ 1.0\\right)$ is corrupted with 50\\% Gaussian noise to simulate extreme measurement noise. Even in the presence of acute noise in the measured signal as shown in figure \\ref{fig:6}(A), the estimated solution function approximates well the true response of the system. Also, the mean of the parameters of the ODE $\\overline{\\boldsymbol{\\theta }}=\\left(-7.56,\\ 7.94,\\ 1.03\\right)$ are estimated with reasonable accuracy. Even with such high noise content, the parameters are estimated with reasonable uncertainty of $cov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(29.4,\\ 36.1,\\ 13.67\\right)\\%$. 
As shown in figure \\ref{fig:6}(B), the phase portrait of the measured response is too smudged to outline the hidden dynamics, whereas \\textbf{SNAPE} approximately brings out the true phase portrait.\n\nIn the next example, the parameters of the Burgers' equation are estimated from its response which is perturbed with 100\\% Gaussian noise. This nonlinear PDE occurs in many branches of applied mathematics such as fluid mechanics, gas dynamics, nonlinear acoustics, or traffic flows. The Burgers' equation is obtained from the Navier-Stokes equation by neglecting the term corresponding to the pressure gradient. Depending on the application, the parameters of the PDE model signify diffusion coefficient in gas dynamics or kinematic viscosity in fluid mechanics. The PDE model of the Burgers' equation is given as \n\n\\begin{equation} \\label{EQ10} \n\t\\frac{\\partial u}{\\partial t}+{\\theta }_1u\\frac{\\partial u}{\\partial x}+{\\theta }_2\\frac{{\\partial }^2u}{\\partial x^2}\\ =\\ 0 \n\\end{equation} \n\nThe data $u\\left(x,t\\right)\\mathrm{\\in }{\\mathbb{R}}^{256\\times 101}$ is obtained from \\citet{rudy2017data} for parameter values $\\overline{\\boldsymbol{\\theta }}=\\left(1.0,\\ -0.1\\right)$ with solution domain $x\\mathrm{\\in }\\left[-8,\\ 8\\right]$ and $t\\mathrm{\\in }\\left[0,\\ 10\\right]$. Figure \\ref{fig:6}(C) shows the cloud of measurement data which is indistinguishable from the superimposed true response. \\textbf{SNAPE} is applied to this extremely noisy data, with the knowledge of the mathematical form of the underlying process. The mean of the estimated parameters $\\overline{\\boldsymbol{\\theta }}=\\left(1.15,-0.19\\right)$ demonstrates compromised accuracy due to such extreme noise content, yet the inference of the proposed estimation method is successful with the estimated uncertainty about the mean as $cov\\boldsymbol{(}\\boldsymbol{\\theta }\\boldsymbol{)=}\\left(1.87,\\ 6.61\\right)\\%$. Figure \\ref{fig:6}(E) shows the approximate functional solution along with the cross-section of the responses at specific locations and instant of time.\n\n\\section{Discussion}\n\\label{sec:Discussion}\n\n\\textbf{SNAPE} explicitly satisfies the differential equation $\\mathcal{F}=0$ in the form of constraints in the optimization, however, it does not require the knowledge of the initial or the boundary conditions. As per the formulation of the optimization problem of \\textbf{SNAPE}, the initial, as well as the boundary conditions, are implicitly satisfied at $\\mathbf{x}\\boldsymbol{\\in }\\widehat{\\boldsymbol{\\mathit{\\Gamma}}}$, a sub-domain of $\\boldsymbol{\\varOmega}$ as shown in figure \\ref{fig:7}. The measurement points at the periphery of the domain $\\boldsymbol{\\varOmega}$ form a pseudo-boundary $\\widehat{\\boldsymbol{\\mathit{\\Gamma}}}$ represented by the dotted closed curve in figure 7. By minimizing the loss function of \\textbf{SNAPE} in Eq. 5, the Dirichlet boundary condition of $g\\left({\\mathbf{x}}_i\\right)\\approx y_i\\ $is approximately satisfied where $\\left[\\left(y_i,{\\mathbf{x}}_i\\right)\\mathrm{\\in }\\widehat{\\boldsymbol{\\varGamma}}\\right]$. This implies \\textbf{SNAPE }can learn the PDE models from the data acquired from inside the domain irrespective of the initial or the boundary conditions. The learned differential equation (ODE and PDE) models enable us to simulate responses for initial or boundary conditions other than that of the observed response. 
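As an illustration of this point, the sketch below integrates Burgers' equation (equation \ref{EQ10}) with the estimated parameter values under a new, arbitrary initial condition; the periodic method-of-lines discretization and the Gaussian initial pulse are illustrative choices and are not part of \textbf{SNAPE} itself.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

theta1, theta2 = 1.0, -0.1                     # parameters of Eq. (10)
x = np.linspace(-8.0, 8.0, 256, endpoint=False)
dx = x[1] - x[0]

def burgers_rhs(t, u):
    # u_t = -theta1 * u * u_x - theta2 * u_xx, periodic central differences
    u_x  = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return -theta1 * u * u_x - theta2 * u_xx

u0 = np.exp(-x**2)                             # a new initial condition, not the observed one
sol = solve_ivp(burgers_rhs, (0.0, 10.0), u0,
                t_eval=np.linspace(0.0, 10.0, 101), rtol=1e-6, atol=1e-8)
u_new = sol.y.T                                # response of the learned model under the new IC
\end{verbatim}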
Besides estimating the parameters of the model, \\textbf{SNAPE }provides an analytical approximation for the solution of the differential equation $g\\left(\\mathbf{x}\\right)\\approx \\overline{g}\\left(\\mathbf{x}\\right)={\\mathbf{b}}^T\\left(\\mathbf{x}\\right)\\boldsymbol{\\beta }$. It signifies that the approximate response of the governing process can be evaluated from the continuous function $\\overline{g}\\left(\\mathbf{x}\\right)$ for any real value of $\\mathbf{x}\\boldsymbol{\\in }\\boldsymbol{\\varOmega}$,even though the response is observed at discrete points. Furthermore, \\textbf{SNAPE} avoids the evaluation of numerical derivatives that sets it apart from other regression-based methods. As a result, it provides a stable estimation of the model parameters even from responses with high noise content. Compared to the deep learning-based methods, \\textbf{SNAPE} demonstrates higher robustness and repeatability in the learning of the model as the estimation is performed with 10 random bootstrap realizations of noise corrupted responses for all the applications.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.4\\textwidth]{Figure07.eps}\n\t\\caption{\\textbf{Domain and boundary of a hypothetical differential equation.} A representative two-dimensional domain $\\boldsymbol{\\mathit{\\Omega}}$ and the boundary $\\boldsymbol{\\mathit{\\Gamma}}$ of an arbitrary PDE model. The red dots indicate $n$ number of discrete measurements $\\left(y_i,{\\mathbf{x}}_i\\right)$. The dotted curve $\\widehat{\\boldsymbol{\\mathit{\\Gamma}}}$ represents pseudo-boundary of the PDE model defined by the peripheral data points located on it.}\\label{fig:7}\n\\end{figure}\n\nUnlike data-driven machine learning techniques, the indispensable component of \\textbf{SNAPE} is the known theory of the dynamic process that is derived from the first principle. It combines the domain knowledge that we have studied and discovered so far with the modern aspects of data science to infer the differential equation models from the observed data. This theory-specific subjectivity of the estimation framework is attributed to the formulation of the constrained equation in \\textbf{SNAPE} for each application. In situations where two or more theories are hypothesized for a set of observed data, \\textbf{SNAPE} can be extended to include the competing classes of differential equations in its optimization scheme to perform model selection. In the current version, \\textbf{SNAPE} enforces an ODE or a PDE as a constraint, a future extension will be the incorporation of coupled ODEs or PDEs into the optimization scheme so that it can simultaneously estimate parameters of the system of differential equations. Even though Table S2 in supplementary materials compares the performance of \\textbf{SNAPE} with that of the deep learning-based method for the Navier-Stokes equation, the future scope of work will include a more comprehensive comparison of their respective benefits and limitations for wider applications. \\textbf{SNAPE} can be used to address the much-unexplored theory of identifiability of nonlinear differential equation models from a set of observations. This in turn will not only enrich our understanding of nonlinear differential equations (ODEs and PDEs) but also promote smart strategies of nonlinear control and sensor placement for complex dynamic processes. 
\n\n\\section{Materials and Methods}\n\\label{sec:Methods}\n\nThe proposed \\textbf{SNAPE }algorithm performs the constrained optimization of equation \\ref{EQ05} by searching for the optimal $\\overline{\\boldsymbol{\\beta }}$ that minimizes the loss function and simultaneously satisfies the constrained equation parameterized by $\\overline{\\boldsymbol{\\theta }}$ that approximates the governing differential equations. \\textbf{SNAPE} is performing the task of inferring the parameters of the differential equations by avoiding the computation of the higher-order derivatives and subsequently avoids infusion of unnecessary numerical errors in the process of estimation.\n\n\\subsection{Formulation of the optimization problem}\n\nThe form of the constrain equation depends on the form of the underlying differential equation, so the exact algorithm of \\textbf{SNAPE} slightly varies with each model yet the framework of estimation remains the same. For example, the shorthand notation of the functional relation that approximates the Burgers' equation \\ref{EQ10} is given as.\n\n\\begin{equation} \\label{EQ11} \n\t\\mathcal{F}\\left(\\mathbf{x},{\\mathbf{b}}^T\\left(x,t\\right)\\boldsymbol{\\beta },{\\left(\\frac{\\mathrm{\\partial }\\mathbf{b}\\left(x,t\\right)}{\\mathrm{\\partial }t}\\right)}^T\\boldsymbol{\\beta },{\\left(\\frac{\\mathrm{\\partial }\\mathbf{b}\\left(x,t\\right)}{\\mathrm{\\partial }x}\\right)}^T\\boldsymbol{\\beta },{\\left(\\frac{{\\mathrm{\\partial }}^2\\mathbf{b}\\left(x,t\\right)}{\\mathrm{\\partial }x^2}\\right)}^T\\boldsymbol{\\beta };\\boldsymbol{\\theta }\\right)\\mathrm{\\approx }\\mathbf{0} \n\\end{equation}\n\nNow, the basis functions ${\\mathbf{b}}^T\\left(x,t\\right)$ are evaluated at $n$ observation points to obtain basis matrix $\\overline{\\mathbf{B}}\\mathrm{\\in }{\\mathbb{R}}^{n\\mathrm{\\times }m}$, where $m$ is the number of columns in the basis matrix which depends on the choice of the order and number of knots in the B-splines functions. The order of the B-spline basis functions ${\\mathbf{b}}^T\\left(x,t\\right)$ are chosen such that it can be differentiated up to the degree of the PDE. Likewise, the following matrices are evaluated as well.\n\n\\begin{equation} \\label{EQ12} \n\t\\begin{aligned}\n\t\\frac{\\mathrm{\\partial }u}{\\mathrm{\\partial }t}\\mathrm{\\approx }{\\left(\\frac{\\mathrm{\\partial }\\mathbf{b}\\left(x,t\\right)}{\\mathrm{\\partial }t}\\right)}^T\\boldsymbol{\\beta }=&{\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta } \\\\\n\t\\frac{\\partial u}{\\partial x}\\mathrm{\\approx }{\\left(\\frac{\\partial \\mathbf{b}\\left(x,t\\right)}{\\partial x}\\right)}^T\\boldsymbol{\\beta }=&{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta } \\\\\n\t\\frac{{\\partial }^2u}{\\partial x^2}\\mathrm{\\approx }{\\left(\\frac{{\\partial }^2\\mathbf{b}\\left(x,t\\right)}{\\partial x^2}\\right)}^T\\boldsymbol{\\beta }=&{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\n\t\\end{aligned}\n\\end{equation}\n\nwhere ${\\overline{\\mathbf{B}}}_{\\mathbf{0}},{\\overline{\\mathbf{B}}}_{\\mathbf{1}},{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\mathrm{\\in }{\\mathbb{R}}^{n\\mathrm{\\times }m}$ . The measured data is fitted with the B-spline functions such that at every point of measurement the PDE of equation \\ref{EQ10} is satisfied, or the condition in equation \\ref{EQ11} is satisfied. 
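In practice, the basis matrix and its derivative counterparts can be assembled with any standard B-spline library; the present study generates the univariate B-splines with a functional data analysis Matlab toolbox (see the Appendix), so the following Python (SciPy) sketch of the one-dimensional construction is only an illustrative alternative, with the knot placement and spline degree chosen arbitrarily:\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.interpolate import BSpline\n\ndef design_matrices(x, knots, k=5, max_deriv=2):\n    # Clamped knot vector: boundary knots repeated k extra times.\n    t = np.r_[[knots[0]]*k, knots, [knots[-1]]*k]\n    m = len(t) - k - 1                   # number of basis functions\n    mats = [np.empty((x.size, m)) for _ in range(max_deriv + 1)]\n    for j in range(m):\n        c = np.zeros(m); c[j] = 1.0      # pick out the j-th basis function\n        b = BSpline(t, c, k)\n        for d in range(max_deriv + 1):\n            mats[d][:, j] = b(x) if d == 0 else b.derivative(d)(x)\n    return mats   # [B, dB/dx, d2B/dx2], each of shape (len(x), m)\n\\end{verbatim}\n\nFor the two-dimensional field $u\\left(x,t\\right)$, the corresponding univariate matrices in $x$ and in $t$ are combined through the tensor product construction described in the Appendix, which yields $\\overline{\\mathbf{B}}$, ${\\overline{\\mathbf{B}}}_{\\mathbf{0}}$, ${\\overline{\\mathbf{B}}}_{\\mathbf{1}}$, and ${\\overline{\\mathbf{B}}}_{\\mathbf{2}}$ evaluated at all $n$ observation points.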
Hence, the optimization problem as presented in equation \\ref{EQ05} is recast into the following form.\n\n\\begin{equation} \\label{EQ13} \n\t\\begin{array}{c}\n\t\t{\\mathop{\\mathrm{min}}_{\\boldsymbol{\\beta },{\\theta }_1,{\\theta }_2}\\ \\ \\ \\ \\frac{1}{2}||\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }{||}^2_2\\ } \\\\[5pt]\n\t\tsubject\\ to\\ \\ \\ \\ {\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{\\le }\\boldsymbol{\\delta } \\end{array}\t\n\\end{equation}\n\nwhere $\\mathrm{\\odot }$ represents Hadamard (elementwise) product. Due to the presence of measurement noise $\\boldsymbol{\\epsilon }$ as well as discretization error, the residual of the approximate PDE model used in the constraint equation is not equated to zero but bounded by a small magnitude of modeling error $\\boldsymbol{\\delta }$.\n\n\\subsection{Alternating Direction Method of Multipliers (ADMM)}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.5\\textwidth]{Figure08.eps}\n\t\\caption{\\textbf{SNAPE algorithm for Burgers' equation.} As per the notion of \\textit{theory-guided learning}, the constraint equation in the optimization framework of \\textbf{SNAPE} is unique for a model. Although the provided \\textbf{SNAPE} algorithm is explicitly applicable for Burgers' PDE model, it demonstrates the key components of the algorithm which can be easily extended to any other linear or nonlinear models (both ODEs and PDEs).}\\label{fig:8}\n\\end{figure}\n\nThis section describes the ADMM algorithm to solve the constrained optimization of the \\textbf{SNAPE} method as stated in Eq. 13. The ADMM algorithm has originally been proposed by \\citet{gabay1976dual} to find the infimum of variational problems that appear in continuum mechanics. The equivalent representation \\citep{yang2011alternating} of the optimization problem in Eq. 13 is given as\n\n\\begin{equation} \\label{EQ14} \n\t\\begin{array}{c}\n\t\t{\\mathop{\\mathrm{min}}_{\\boldsymbol{\\beta },{\\theta }_1,{\\theta }_2}\\ \\ \\ \\ \\frac{1}{2}||\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }{||}^2_2\\ }+\\frac{1}{2\\mu }||\\mathbf{r}{||}^2_2 \\\\[5pt] \n\t\tsubject\\ to\\ \\ \\ \\ {\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}\\mathbf{r}\\boldsymbol{=}\\mathbf{0} \\end{array}\t\n\\end{equation} \n\nwhere $\\mathbf{r}\\mathrm{\\in }{\\mathbb{R}}^n$ is an auxiliary variable. 
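To make the role of the auxiliary variable $\\mathbf{r}$ explicit, note that eliminating it through the equality constraint of equation \\ref{EQ14} yields the equivalent penalized problem\n\n\\begin{equation*}\n\t{\\mathop{\\mathrm{min}}_{\\boldsymbol{\\beta },{\\theta }_1,{\\theta }_2}\\ \\ \\ \\ \\frac{1}{2}||\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }{||}^2_2+\\frac{1}{2\\mu }||{\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }{||}^2_2\\ } ,\n\\end{equation*}\n\nso that $\\mu $ controls the trade-off between fitting the measured data and satisfying the discretized PDE, in the same spirit as the tolerance $\\boldsymbol{\\delta }$ in equation \\ref{EQ13}.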
The scaled form of augmented Lagrangian of the above optimization problem is given as\n\n\\begin{equation} \\label{EQ15} \n\t\\mathcal{L}\\left(\\beta ,{\\theta }_1,{\\theta }_2,u,r\\right)=\\frac{1}{2}||\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }{||}^2_2+\\frac{1}{2\\mu }||\\mathbf{r}{||}^2_2+\\frac{\\rho }{2}||G\\left(\\beta ,{\\theta }_1,{\\theta }_2,r\\right)+\\mathbf{u}{||}^2_2-\\frac{\\rho }{2}||\\mathbf{u}{||}^2_2 \n\\end{equation} \n\nwhere the function $G\\left(\\boldsymbol{\\beta },{\\theta }_1,{\\theta }_2,\\mathbf{r}\\right)={\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}\\mathbf{r}$. The ADMM optimization \\citep{boyd2011distributed} scheme involves an iterative update of the optimization parameters till its convergence. In the case of linear differential equation models, the function $G\\left(\\right)$ will be linear in terms of the basis coefficients $\\boldsymbol{\\beta }$, rendering the problem in equation \\ref{EQ13} as biconvex optimization. It means in one of the iteration updates steps, the subproblem is convex with respect to one of the parameters by treating the other parameter as constant. In the case of nonlinear models such as here, the matrix$\\boldsymbol{\\mathrm{\\ }}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\mathrm{=}}\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}$ is assumed constant for each iteration so that the function $G\\left(\\right)$ becomes linear in terms of $\\boldsymbol{\\beta }$. It is a biconvex relaxation of the original nonconvex problem when nonlinear differential equations are considered. The updates of the parameters at $k$${}^{th}$ step are computed by the following ADMM form \\citep{yang2011alternating, boyd2011distributed}.\n\n\\begin{equation} \\label{EQ16} \n\t\\begin{aligned}\n\t\t{\\mathbf{r}}^{k+1}\\coloneqq& \\frac{\\mathrm{\\mu }\\mathrm{\\rho }}{1+\\mathrm{\\mu }\\mathrm{\\rho }}\\left({\\mathbf{u}}^k-G\\left({\\mathrm{\\beta }}^k,{\\theta }^k_1,{\\theta }^k_2,{\\mathbf{r}}^k\\right)\\right) \\\\\n\t\t{\\boldsymbol{\\beta }}^{k+1}\\coloneqq& \\ \\ \\mathop{\\mathrm{argmin}}_{\\boldsymbol{\\beta }}\\left(\\mathcal{L}\\left({\\boldsymbol{\\beta }}^k,{\\theta }^k_1,{\\theta }^k_2,\\mathbf{u},{\\mathbf{r}}^{k+1}\\right)\\right) \\\\\n\t\t{{\\theta }_1}^{k+1}\\coloneqq& \\ \\ \\mathop{\\mathrm{argmin}}_{{\\theta }_1}\\left(\\mathcal{L}\\left({\\boldsymbol{\\beta }}^{k+1},{\\theta }^k_1,{\\theta }^k_2,\\mathbf{u},{\\mathbf{r}}^{k+1}\\right)\\right) \\\\\n\t\t{{\\theta }_2}^{k+1}\\coloneqq& \\ \\ \\mathop{\\mathrm{argmin}}_{{\\theta }_2}\\left(\\mathcal{L}\\left({\\boldsymbol{\\beta }}^{k+1},{{\\theta }_1}^{k+1},{\\theta }^k_2,\\mathbf{u},{\\mathbf{r}}^{k+1}\\right)\\right) \\\\\n\t\t{\\mathbf{u}}^{k+1}\\coloneqq& {\\mathbf{u}}^k+\\gamma \\left(G\\left({\\boldsymbol{\\beta }}^{k+1},{{\\theta }_1}^{k+1},{{\\theta }_2}^{k+1},{\\mathbf{r}}^{k+1}\\right)\\right)\n\t\\end{aligned}\n\\end{equation}\n\nThe \\textbf{SNAPE }algorithm for the Burgers' equation is provided in figure \\ref{fig:8}. The updates of the parameters at each iteration step of the algorithm are computed by optimizing the corresponding objectives in Eq. 15. 
The closed-form expressions of the optimal parameters at each iteration step are obtained due to the aforementioned biconvex relaxation. For other ODEs or PDEs, a similar computational framework is followed by tweaking the provided algorithm with the corresponding form of the $G\\left(\\right)$ function.\n\n\\section*{Acknowledgment}\n\\label{S:ack}\nThe authors wish to acknowledge Dr. Anastasios Kyrillidis, assistant professor in the Department of Computer Science at Rice University for his valuable discussions on the ADMM optimization framework. This research was made possible by Science and Engineering Research Board of India (SERB)-Rice University Fellowship to Sutanu Bhowmick for pursuing his Ph.D. at Rice University. The financial support by SERB-India is gratefully acknowledged.\n\n\n\\renewcommand\\thefigure{A.\\arabic{figure}}\n\\setcounter{figure}{0}\n\\renewcommand{\\theequation}{A.\\arabic{equation}}\n\\setcounter{equation}{0}\n\\renewcommand\\thetable{A.\\arabic{table}}\n\\setcounter{table}{0}\n\\section*{Appendix}\n\nThis section provides detailed additional information regarding the proposed method of \\textbf{SNAPE}. At first, the univariate B-spline basis function which forms the building block of \\textbf{SNAPE} is discussed in brief along with its extension for multidimensional functions. Then the closed-form expression of the optimum parameters at each iterative ADMM update of the algorithm is derived. Further, the convergence of \\textbf{SNAPE} for responses corrupted with various amounts of noise and random initialization is extensively studied. The examples of the Korteweg-de Vries equation and the Kuramoto-Sivashinsky equation that are included in Table 1, are discussed in detail in this supplementary document. Finally, the performance of \\textbf{SNAPE} is compared with the previously proposed methods in the literature.\n\n\\subsection*{B-spline basis function}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.7\\textwidth]{S1.eps}\n\t\\caption{The sequence of B-spline basis functions of \\textbf{(A)} order 1, \\textbf{(B)} order 2, \\textbf{(C)} order 3, and \\textbf{(D)} order 4 with 11 knots evenly spaced between 0 and 1. Each B-spline basis function is non-zero on a few adjacent subintervals, hence they have local support.}\\label{fig:S1}\n\\end{figure}\n\nA univariate B-spline is a polynomial function of specific order defined over a domain with $k$ number of knots in equal or unequal intervals including the two boundaries. \\citet{de1978practical} provides a recursive algorithm to generate B-splines of any order from B-splines of lower order. Figure \\ref{fig:S1} shows a sequence of B-splines up to order four for the domain $[0,1]$ with 11 equidistant knots shown by the dashed vertical lines. The individual B-spline basis function is non-zero within a small interval, thereby demonstrating its property of compact (local) support. The number of basis functions with $k$ knots is computed as $p=k+o-2$ where $o$ is the order of the B-splines. The polynomial pieces join at $o$ inner knots where the derivatives up to orders $(o-1)$ are continuous. In the present study, the univariate B-spline basis functions are generated using the functional data analysis Matlab toolbox \\citep{ramsay2002applied}.\n\nThe univariate B-spline basis functions are extended to obtain the multidimensional tensor product B-spline basis functions \\citep{de1978practical, piegl1996nurbs, eilers2003multivariate}. 
For example, a two-dimensional domain $\\mathbf{x}\\mathrm{\\in }{\\mathbb{R}}^2\\ $consisting of one spatial dimension and another temporal dimension $\\mathbf{x}\\mathrm{\\in }\\left(x,t\\right)$ will have a set of basis functions ${\\mathbf{b}}_{1p}\\left(x\\right),p=1,\\dots ,m_1$ to represent functions in the $x$ domain, and similarly a set of $m_2$ basis functions ${\\mathbf{b}}_{2p}\\left(t\\right),p=1,\\dots ,m_2$ for the coordinate $t$. Then each of the $m_1\\times m_2$ tensor product basis functions are defined as\n\n\\begin{equation} \\label{SEQ01} \n\t{\\mathbf{b}}_{jk}\\left(\\mathbf{x}\\right)={\\mathbf{b}}_{1j}\\left(x\\right){\\mathbf{b}}_{2k}\\left(t\\right),j=1,\\dots ,m_1,k=1,\\dots ,m_2 \n\\end{equation} \n\nThe tensor product B-spline basis function existing in the $x\\times t$ plane is represented by the following two-dimensional function\n\n\\begin{equation} \\label{SEQ02} \n\t\\mathbf{b}\\left(\\mathbf{x}\\right)=\\sum^{m_1}_{j=1}{\\sum^{m_2}_{k=1}{{\\beta }_{jk}}}{\\mathbf{b}}_{jk}\\left(\\mathbf{x}\\right) \n\\end{equation}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{S2.eps}\n\t\\caption{A portion of B-splines tensor product basis from some selected pairs of cubic B-splines. Each two-dimensional basis function is the tensor product of the corresponding one-dimensional B-spline basis functions.}\\label{fig:S2}\n\\end{figure}\n\nwhere ${\\beta }_{jk}$ are the elements of $m_1\\times m_2$ matrix of unknown tensor product B-spline coefficients. Figure S2 demonstrates 16 tensor product basis functions corresponding to the univariate cubic B-splines shown in blue and red, which is only a portion of a full-basis. Each of the tensor product basis is positive corresponding to the nonzero support of the individual univariate ranges. The tensor product basis function of equation \\ref{SEQ02} represents a continuous function that can be evaluated for any real value of the domain $\\mathbf{x}$. The function $\\mathbf{b}\\left(\\mathbf{x}\\right)$ is evaluated at $n$ observation points within a grid of $n_x{\\times n}_t$ in the $\\mathbf{x}\\mathrm{\\in }\\left(x,t\\right)$ domain. The surface equation is re-expressed in matrix notation to incorporate computational efficiency as ${\\mathbf{b}\\left(\\mathbf{x}\\right)}_{n_x{\\times n}_t}\\boldsymbol{=}\\mathbf{B}\\ \\boldsymbol{\\beta }$ where $\\boldsymbol{\\beta }\\boldsymbol{=}\\boldsymbol{vec}\\left(\\left[{\\beta }_{jk}\\right]\\right)$ and\n\n\\begin{equation} \\label{SEQ03} \n\t\\mathbf{B}=\\left({\\mathbf{B}}_x\\otimes {\\mathbf{1}}^{\\mathbf{T}}_{n_x}\\right)\\odot \\left({\\mathbf{1}}^{\\mathbf{T}}_{n_t}\\otimes {\\mathbf{B}}_t\\right) \n\\end{equation} \n\nThe matrices ${\\mathbf{B}}_x\\mathrm{\\in }{\\mathbb{R}}^{n_x\\times m_1}$ and ${\\mathbf{B}}_t\\mathrm{\\in }{\\mathbb{R}}^{n_t\\times m_2}$ are the evaluated univariate B-splines at the grid points $n_x$ and $n_t$ of the corresponding axes. The symbol $\\otimes $ represents the Kronecker product of the matrix with the vector of ones having proper dimension and $\\odot $ denotes the Hadamard product. Each column of $\\mathbf{B}\\mathrm{\\in }{\\mathbb{R}}^{n\\mathrm{\\times }m}$ can be reshaped into the unit ranked matrix and graphically displayed as a two-dimensional surface as shown in figure \\ref{fig:S2}. The compact support of even multidimensional B-splines is evident from the figures as the values are nonzero within a small adjacent rectangular interval. 
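As an illustration of this construction (a Python sketch with hypothetical variable names; the column ordering must match the chosen vectorization of $\\left[{\\beta }_{jk}\\right]$), the full design matrix can also be assembled row by row from the evaluated univariate bases:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef tensor_design_matrix(Bx, Bt):\n    # Bx: (n_x, m_1) univariate basis in x evaluated on the x-grid.\n    # Bt: (n_t, m_2) univariate basis in t evaluated on the t-grid.\n    # The row for grid point (x_i, t_l) holds b_{1j}(x_i)*b_{2k}(t_l)\n    # for all j, k, i.e. the Kronecker product of the two rows.\n    nx, m1 = Bx.shape\n    nt, m2 = Bt.shape\n    B = np.empty((nx * nt, m1 * m2))\n    row = 0\n    for i in range(nx):\n        for l in range(nt):\n            B[row] = np.kron(Bx[i], Bt[l])\n            row += 1\n    return B\n\\end{verbatim}\n\nDerivative versions such as ${\\overline{\\mathbf{B}}}_{\\mathbf{1}}$ or ${\\overline{\\mathbf{B}}}_{\\mathbf{2}}$ follow by replacing the univariate basis in the differentiated coordinate with its derivative before forming the products.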
It is conjectured that B-splines form about a set of nearly orthogonal basis functions \\citep{berry2002bayesian} and the presence of many zeros in each of the evaluated functions are exploited to reduce the computational complexity and bring in numerical stability.\n\n\\subsection*{Closed-form expressions of optimum ADMM updates}\n\nThis section describes the derivation of the optimal solutions at each iterative update of \\textbf{SNAPE}. The mathematical expressions of the iterative updates of the parameters depend on the form of the differential equation. Here, as an example, the iterative updates for the Burgers' equation are derived in detail. For different ODEs or PDEs, the corresponding iterative updates can be computed following a similar approach. The Burgers' equation with field variable $u\\left(x,t\\right)$ has the following differential form,\n\n\\begin{equation} \\label{SEQ04} \n\t\\frac{\\partial u}{\\partial t}+{\\theta }_1u\\frac{\\partial u}{\\partial x}+{\\theta }_2\\frac{{\\partial }^2u}{\\partial x^2}\\ =\\ 0 \n\\end{equation} \n\nThe vector of noise corrupted measurement data $\\mathbf{y}=u\\left(x,t\\right)+\\boldsymbol{\\epsilon }\\boldsymbol{,\\ \\ }\\mathbf{y}\\boldsymbol{\\ }\\boldsymbol{\\mathrm{\\in }}{\\mathbb{R}}^{n\\times 1}$ where $n$\\textbf{ }is the number of observations and $\\boldsymbol{\\epsilon }$\\textbf{ }is i.i.d Gaussian noise with zero mean and unknown variance. \\textbf{SNAPE} represents the PDE model and the associated parameter vector $\\boldsymbol{\\theta }=\\left({\\theta }_1,\\ {\\theta }_2\\right)$ by expressing the process response $u\\left(x,t\\right)$ as an approximation to the linear combination of nonparametric basis functions given by\n\n\\begin{equation} \\label{SEQ05} \n\tu\\left(x,t\\right)\\approx \\overline{u}\\left(x,t\\right)=\\sum^K_{k=1}{b_k\\left(x,t\\right){\\beta }_k}={\\mathbf{b}}^T\\left(x,t\\right)\\boldsymbol{\\beta } \n\\end{equation} \n\nwhere $\\mathbf{b}\\left(x,t\\right)=\\{b_1\\left(x,t\\right),\\dots ,b_K\\left(x,t\\right){\\}}^T$ is the vector of basis functions and $\\boldsymbol{\\beta }={\\left({\\beta }_1,\\dots ,{\\beta }_K\\right)}^T$ is the vector of basis coefficients. The basis functions ${\\mathbf{b}}^T\\left(x,t\\right)$ are evaluated at $n$ observation points to obtain basis matrix $\\overline{\\mathbf{B}}\\mathrm{\\in }{\\mathbb{R}}^{n\\mathrm{\\times }m}$, where $m$ is the number of columns in the basis matrix. The matrices corresponding to the linear terms of the PDE are evaluated as well.\n\n\\begin{equation} \\label{SEQ06} \n\t\\begin{aligned}\n\t\t\\frac{\\mathrm{\\partial }u}{\\mathrm{\\partial }t}\\mathrm{\\approx }{\\left(\\frac{\\mathrm{\\partial }\\mathbf{b}\\left(x,t\\right)}{\\mathrm{\\partial }t}\\right)}^T\\boldsymbol{\\beta }=&{\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta } \\\\\n\t\t\\frac{\\partial u}{\\partial x}\\mathrm{\\approx }{\\left(\\frac{\\partial \\mathbf{b}\\left(x,t\\right)}{\\partial x}\\right)}^T\\boldsymbol{\\beta }=&{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta } \\\\\n\t\t\\frac{{\\partial }^2u}{\\partial x^2}\\mathrm{\\approx }{\\left(\\frac{{\\partial }^2\\mathbf{b}\\left(x,t\\right)}{\\partial x^2}\\right)}^T\\boldsymbol{\\beta }=&{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\n\t\\end{aligned}\n\\end{equation}\n\nwhere ${\\overline{\\mathbf{B}}}_{\\mathbf{0}},{\\overline{\\mathbf{B}}}_{\\mathbf{1}},{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\mathrm{\\in }{\\mathbb{R}}^{n\\mathrm{\\times }m}$ . 
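For the Burgers' data of the main text, for instance, the response is observed on a $256\\times 101$ grid, so that $n=256\\times 101=25{,}856$ rows enter each of these matrices, while the number of columns $m=m_1m_2$ is set by the chosen numbers of knots of the two univariate bases.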
The equivalent ADMM representation \\citep{yang2011alternating} of the \\textbf{SNAPE}'s optimization problem is given as\n\n\\begin{equation} \\label{SEQ07} \n\t\\begin{array}{c}\n\t\t{\\mathop{\\mathrm{min}}_{\\boldsymbol{\\beta },{\\theta }_1,{\\theta }_2}\\ \\ \\ \\ \\frac{1}{2}||\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }{||}^2_2\\ }+\\frac{1}{2\\mu }||\\mathbf{r}{||}^2_2 \\\\[5pt] \n\t\tsubject\\ to\\ \\ \\ \\ {\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}\\mathbf{r}\\boldsymbol{=}\\mathbf{0} \\end{array}\t\n\\end{equation}\n\nwhere $\\mathbf{r}\\mathrm{\\in }{\\mathbb{R}}^n$ is an auxiliary variable. The scaled form of augmented Lagrangian of the above optimization problem is given as\n\n\\begin{equation} \\label{SEQ08}\n\t\\mathcal{L}\\left(\\beta ,{\\theta }_1,{\\theta }_2,u,r\\right)=\\frac{1}{2}||\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }{||}^2_2+\\frac{1}{2\\mu }||\\mathbf{r}{||}^2_2+\\frac{\\rho }{2}||G\\left(\\beta ,{\\theta }_1,{\\theta }_2,r\\right)+\\mathbf{u}{||}^2_2-\\frac{\\rho }{2}||\\mathbf{u}{||}^2_2\n\\end{equation}\n\n where the function $G\\left(\\boldsymbol{\\beta },{\\theta }_1,{\\theta }_2,\\mathbf{r}\\right)={\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}\\mathbf{r}$. The matrix$\\boldsymbol{\\mathrm{\\ }}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\mathrm{=}}\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\boldsymbol{\\bigodot }{\\overline{\\mathbf{B}}}_{\\mathbf{1}}$ is assumed constant for each iteration so that the function $G\\left(\\right)$ becomes linear in terms of $\\boldsymbol{\\beta }$. It is a biconvex relaxation of the original nonconvex problem when nonlinear differential equations are considered. 
The updates of the parameters at $k$${}^{th}$ step are computed by the following ADMM form \\citep{yang2011alternating, boyd2011distributed}.\n \n \\begin{equation} \\label{SEQ09} \n \t\\begin{aligned}\n \t\t{\\mathbf{r}}^{k+1}\\coloneqq& \\frac{\\mathrm{\\mu }\\mathrm{\\rho }}{1+\\mathrm{\\mu }\\mathrm{\\rho }}\\left({\\mathbf{u}}^k-G\\left({\\mathrm{\\beta }}^k,{\\theta }^k_1,{\\theta }^k_2,{\\mathbf{r}}^k\\right)\\right) \\\\\n \t\t{\\boldsymbol{\\beta }}^{k+1}\\coloneqq& \\ \\ \\mathop{\\mathrm{argmin}}_{\\boldsymbol{\\beta }}\\left(\\mathcal{L}\\left({\\boldsymbol{\\beta }}^k,{\\theta }^k_1,{\\theta }^k_2,\\mathbf{u},{\\mathbf{r}}^{k+1}\\right)\\right) \\\\\n \t\t{{\\theta }_1}^{k+1}\\coloneqq& \\ \\ \\mathop{\\mathrm{argmin}}_{{\\theta }_1}\\left(\\mathcal{L}\\left({\\boldsymbol{\\beta }}^{k+1},{\\theta }^k_1,{\\theta }^k_2,\\mathbf{u},{\\mathbf{r}}^{k+1}\\right)\\right) \\\\\n \t\t{{\\theta }_2}^{k+1}\\coloneqq& \\ \\ \\mathop{\\mathrm{argmin}}_{{\\theta }_2}\\left(\\mathcal{L}\\left({\\boldsymbol{\\beta }}^{k+1},{{\\theta }_1}^{k+1},{\\theta }^k_2,\\mathbf{u},{\\mathbf{r}}^{k+1}\\right)\\right) \\\\\n \t\t{\\mathbf{u}}^{k+1}\\coloneqq& {\\mathbf{u}}^k+\\gamma \\left(G\\left({\\boldsymbol{\\beta }}^{k+1},{{\\theta }_1}^{k+1},{{\\theta }_2}^{k+1},{\\mathbf{r}}^{k+1}\\right)\\right)\n \t\\end{aligned}\n \\end{equation}\n\nEach iterative update of the parameters involves optimization of the Lagrangian for the corresponding parameter. The optimum values ${\\boldsymbol{\\beta }}^*$, ${\\theta }^*_1$, and ${\\theta }^*_2$ for each ADMM iteration step is computed by optimizing the following loss function\n\n\\begin{equation} \\label{SEQ10} \n\t\\begin{aligned}\n\t\tJ=&\\frac{1}{2}||\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }{||}^2_2+\\frac{1}{2\\mu }||\\mathbf{r}{||}^2_2+\\frac{\\mathrm{\\rho }}{2}||{\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }+(\\mathbf{r}+\\mathbf{u}){||}^2_2-\\frac{\\rho }{2}||\\mathbf{u}{||}^2_2 \\\\\n\t\tJ=&\\frac{1}{2}{\\left(\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\right)}^T\\left(\\mathbf{y}-\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\right)+\\frac{1}{2\\mu }{\\mathbf{r}}^T\\mathbf{r}\\boldsymbol{+}\\frac{\\rho }{2}{\\left({\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }+\\left(\\mathbf{r}+\\mathbf{u}\\right)\\right)}^T\\left({\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }+\\left(\\mathbf{r}+\\mathbf{u}\\right)\\right) \\\\\n\t\t&-\\frac{\\rho }{2}{\\mathbf{u}}^T\\mathbf{u} \\\\\n\t\tJ=&\\frac{1}{2}\\left({\\mathbf{y}}^{\\mathbf{T}}\\mathbf{y}-2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}\\mathbf{y}\\boldsymbol{+}{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}\\overline{\\mathbf{B}}\\boldsymbol{\\beta }\\right)+\\frac{1}{2\\mu }{\\mathbf{r}}^T\\mathbf{r} \\\\\n\t\t&\\boldsymbol{+}\\frac{\\rho }{2}\\left({{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{\\beta 
}\\boldsymbol{+}{2\\theta }_1{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{2\\theta }_2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }^2_1{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }^2_2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^T_{\\mathbf{2}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}{2\\theta }_1{\\theta }_2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\right) \\\\\n\t\t&\\boldsymbol{+}\\frac{\\rho }{2}\\left(2({\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}\\boldsymbol{+}{\\theta }_1{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}+{\\theta }_2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^T_{\\mathbf{2}})\\left(\\mathbf{r}+\\mathbf{u}\\right)\\right)-\\frac{\\rho }{2}{\\mathbf{u}}^T\\mathbf{u}\n\t\\end{aligned}\n\\end{equation}\n\nThe gradient of this loss function with respect to $\\boldsymbol{\\beta }$ is given as:\n\n\\begin{equation} \\label{SEQ11} \n\t\\begin{aligned}\n\t\t\\frac{\\mathrm{\\partial }J}{\\mathrm{\\partial }\\boldsymbol{\\beta }}=&\\frac{1}{2}\\left(\\boldsymbol{2}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}\\overline{\\mathbf{B}}\\boldsymbol{\\beta }-2{\\overline{\\mathbf{B}}}^{\\mathbf{T}}\\mathbf{y}\\right) \\\\\n\t\t&\\boldsymbol{+}\\frac{\\rho }{2}\\left({\\mathbf{2}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\mathbf{\\beta }\\boldsymbol{+}{4\\theta }_1{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{4\\theta }_2{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}{2\\theta }^2_1{\\mathbf{B}}^T_{\\mathbf{1}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{2\\theta }^2_2{\\overline{\\mathbf{B}}}^T_{\\mathbf{2}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}{4\\theta }_1{\\theta }_2{\\mathbf{B}}^T_{\\mathbf{1}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\right) \\\\\n\t\t&\\boldsymbol{+}\\frac{\\rho }{2}\\left(2\\left({\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}\\boldsymbol{+}{\\theta }_1{\\mathbf{B}}^T_{\\mathbf{1}}+{\\theta }_2{\\overline{\\mathbf{B}}}^T_{\\mathbf{2}}\\right)\\left(\\mathbf{r}+\\mathbf{u}\\right)\\right)\n\t\\end{aligned}\n\\end{equation}\n\nThe closed-form expression for the optimum parameter ${\\boldsymbol{\\beta }}^*$ is obtained by equating $\\frac{\\mathrm{\\partial }J}{\\mathrm{\\partial }\\boldsymbol{\\beta }}=0$.\n\n\\begin{equation} \\label{SEQ12}\n\t\\begin{aligned} \n\t{\\boldsymbol{\\beta }}^*=&{[{\\overline{\\mathbf{B}}}^{\\mathbf{T}}\\overline{\\mathbf{B}}+\\rho ({{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}\\overline{\\mathbf{B}}}_{\\mathbf{0}}\\boldsymbol{+}{2\\theta }_1{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{+}{2\\theta }_2{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{+}{\\theta 
}^2_1{\\mathbf{B}}^T_{\\mathbf{1}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{+}{\\theta }^2_2{\\overline{\\mathbf{B}}}^T_{\\mathbf{2}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{+}{2\\theta }_1{\\theta }_2{\\mathbf{B}}^T_{\\mathbf{1}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}})]}^{-1} \\\\\n\t&[{\\overline{\\mathbf{B}}}^{\\mathbf{T}}\\mathbf{y}-\\rho \\left({\\overline{\\boldsymbol{B}}}^{\\boldsymbol{T}}_{\\boldsymbol{0}}\\boldsymbol{+}{\\theta }_1{\\mathbf{B}}^T_{\\mathbf{1}}+{\\theta }_2{\\overline{\\mathbf{B}}}^T_{\\mathbf{2}}\\right)\\left(\\mathbf{r}+\\mathbf{u}\\right)] \n\\end{aligned}\n\\end{equation}\n\nSimilarly, the gradient of the loss function with respect to ${\\theta }_1$ is given as:\n\n\\begin{equation} \\label{SEQ13}\n\t\\frac{\\mathrm{\\partial }J}{\\mathrm{\\partial }{\\theta }_1}=\\frac{\\rho }{2}\\left(2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}2{\\theta }_1{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}2{\\theta }_2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}\\left(\\mathbf{r}+\\mathbf{u}\\right)\\right)\n\\end{equation}\n\nThe closed-form expression for the optimum parameter ${\\theta }^*_1$ is obtained by equating $\\frac{\\mathrm{\\partial }J}{\\mathrm{\\partial }{\\theta }_1}=0$.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{S3.eps}\n\t\\caption{Convergence plot of the parameters of the Burgers' equation. The black dashed line represents the exact value of the parameter used to simulate the response. The colored lines represent the updated parameter value at each of the iteration steps corresponding to the percentage of Gaussian noise corrupted measured data.}\\label{fig:S3}\n\\end{figure}\n\n\\begin{equation} \\label{SEQ14} \n\t{\\theta }^*_1=-{[{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }]}^{-1}[{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\mathbf{B}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_2{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}{\\overline{\\mathbf{B}}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{1}}\\left(\\mathbf{r}+\\mathbf{u}\\right)] \n\\end{equation}\n\nSimilarly, the closed-form expression for the optimum parameter ${\\theta }^*_2$ is obtained by equating $\\frac{\\mathrm{\\partial }J}{\\mathrm{\\partial }{\\theta }_2}=0$.\n\n\\begin{equation} \\label{SEQ15} \n\t{\\theta }^*_2=-{[{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{2}}{\\mathbf{B}}_{\\mathbf{2}}\\boldsymbol{\\beta }]}^{-1}[{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\overline{\\mathbf{B}}}^{\\mathbf{T}}_{\\mathbf{0}}{\\mathbf{B}}_{\\mathbf{2}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\theta }_1{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{2}}{\\overline{\\mathbf{B}}}_{\\mathbf{1}}\\boldsymbol{\\beta }\\boldsymbol{+}{\\boldsymbol{\\beta }}^{\\mathbf{T}}{\\mathbf{B}}^T_{\\mathbf{2}}\\left(\\mathbf{r}+\\mathbf{u}\\right)] \n\\end{equation}\n\nThe following algorithm demonstrates the parameter estimation of the Burgers' equation model using \\textbf{SNAPE}. 
\\vspace{5mm}\n\n\\begin{center}\n\\begin{tabular}{p{3.5in}} \\hline \n\t\\\\[2pt]\n\t\\hspace{15mm} Algorithm: \\textbf{SNAPE} \\textbf{(\\textit{Burgers' Equation})} \\\\[7pt] \\hline \n\t\\\\[3pt]\n\t\\hspace{5mm} Initialize $\\rho \\mathrm{>0}$, $\\mu \\mathrm{>0}$, $\\gamma \\mathrm{>0}$, ${\\theta }^0_1$, \\textbf{and} ${\\theta }^0_2$ \\\\\n\t \\hspace{5mm} $\\mathbf{u}^0\\mathrm{\\leftarrow }0$ \\\\\n\t \\hspace{5mm} $\\mathbf{r}^0\\mathrm{\\leftarrow }0$ \\\\\n\t \\hspace{5mm} ${\\boldsymbol{\\beta } }^0\\mathrm{\\leftarrow }{[{\\overline{\\mathbf{B}}}^T\\overline{\\mathbf{B}}]}^{-1}[{\\overline{\\mathbf{B}}}^T\\mathbf{y}]$ \\\\\n\t \\hspace{5mm} $k\\mathrm{\\leftarrow }0$ \\\\\n\t \\hspace{5mm} while \\textbf{\\textit{till convergence}} do \\\\\n\t \\hspace{10mm} $\\mathrm{\\mathbf{B}}\\mathrm{\\leftarrow }\\overline{\\mathbf{B}}\\beta \\bigodot {\\overline{\\mathbf{B}}}_1$ \\\\ \n\t \\hspace{10mm} $\\mathbf{r}^{k+1}\\mathrm{\\leftarrow }\\frac{\\mu \\rho }{1+\\mu \\rho }\\left(\\mathbf{u}^k-G\\left({\\boldsymbol{\\beta } }^k,{\\theta }^k_1,{\\theta }^k_2,\\mathbf{r}^k\\right)\\right)$ \\\\ \n\t \\hspace{10mm} ${\\boldsymbol{\\beta } }^{k+1}\\mathrm{\\leftarrow }{\\boldsymbol{\\beta } }^*$ \\\\\n\t \\hspace{10mm} ${{\\theta }_1}^{k+1}\\mathrm{\\leftarrow }{\\theta }^*_1$ \\\\\n\t \\hspace{10mm} ${{\\theta }_2}^{k+1}\\mathrm{\\leftarrow }{\\theta }^*_2$ \\\\\n\t \\hspace{10mm} $\\mathbf{u}^{k+1}\\mathrm{\\leftarrow }\\mathbf{u}^k+\\gamma \\left(G\\left({\\boldsymbol{\\beta } }^{k+1},{{\\theta }_1}^{k+1},{{\\theta }_2}^{k+1},\\mathbf{r}^{k+1}\\right)\\right)$ \\\\\n\t \\hspace{10mm} $k\\mathrm{\\leftarrow }k+1$ \\\\[5pt] \\hline \n\\end{tabular} \\\\[20pt]\n\\end{center}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.8\\textwidth]{S4.eps}\n\t\\caption{ Convergence plot of the parameters of the Burgers' equation where each colored path is corresponding to a random initialization from the uniform distributions ${\\theta }^0_1\\sim U\\left(-10,10\\right)$ and ${\\theta }^0_2\\sim U\\left(-10,10\\right)$. The black dashed line represents the exact value of the parameter used to simulate the response. For all the initializations, the measured response is corrupted with an extreme level (100\\%) of Gaussian noise.}\\label{fig:S4}\n\\end{figure}\n\nThe figure \\ref{fig:S3} shows the plots of each of the optimum parameters of Burgers' equation for each iteration step of \\textbf{SNAPE}. The figure demonstrates the convergence of \\textbf{SNAPE} for measured data corrupted with low (1\\%) to extreme (100\\%) levels of Gaussian noise. As expected, with increasing noise content, \\textbf{SNAPE} requires more iterations to reach convergence. In this example, the initial value of the parameter is set to ${\\theta }^0_1=3.0$ and ${\\theta }^0_2=3.0$.\n\nThe convergence to the optimum parameter values of the model does not depend on the initialization of the model's parameters. \\textbf{SNAPE} exhibits insensitivity towards the choice of ${\\theta }^0_1$ and ${\\theta }^0_2$. Figure \\ref{fig:S4} shows the convergence plots of the parameters of Burgers' equation using \\textbf{SNAPE} for 10 different initializations randomly sampled from the uniform distributions ${\\theta }^0_1\\sim U\\left(-10,10\\right)$ and ${\\theta }^0_2\\sim U\\left(-10,10\\right)$. The original data is corrupted with 100\\% noise for all the random instances of initialization to inspect the algorithm's convergence stability under extreme perturbation. 
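For concreteness, the iteration of figure \\ref{fig:8} can be written compactly as follows (a NumPy sketch with hypothetical variable names and arbitrary hyperparameters; the $\\boldsymbol{\\beta }$-update is expressed through the normal equations with $\\mathbf{A}={\\overline{\\mathbf{B}}}_{\\mathbf{0}}+{\\theta }_1{\\mathbf{B}}_{\\mathbf{1}}+{\\theta }_2{\\overline{\\mathbf{B}}}_{\\mathbf{2}}$, which coincides with equation \\ref{SEQ12} once the cross terms are symmetrized, and no stopping criterion from the study is implied):\n\n\\begin{verbatim}\nimport numpy as np\n\ndef snape_burgers(y, B, B0, B1bar, B2, rho=1.0, mu=1e-2, gamma=1.0,\n                  theta1=3.0, theta2=3.0, iters=200):\n    # ADMM sketch for the constraint B0@b + th1*(B@b)*(B1bar@b) + th2*B2@b = 0\n    n, m = B.shape\n    beta = np.linalg.solve(B.T @ B, B.T @ y)     # initial least-squares fit\n    r = np.zeros(n); u = np.zeros(n)\n    for _ in range(iters):\n        B1 = (B @ beta)[:, None] * B1bar         # frozen nonlinear factor\n        G = B0 @ beta + theta1*(B1 @ beta) + theta2*(B2 @ beta) + r\n        r = (mu*rho/(1.0 + mu*rho)) * (u - G)\n        A = B0 + theta1*B1 + theta2*B2\n        beta = np.linalg.solve(B.T @ B + rho*(A.T @ A),\n                               B.T @ y - rho*A.T @ (r + u))\n        v0, v1, v2 = B0 @ beta, B1 @ beta, B2 @ beta\n        theta1 = -(v1 @ (v0 + theta2*v2 + r + u)) / (v1 @ v1)\n        theta2 = -(v2 @ (v0 + theta1*v1 + r + u)) / (v2 @ v2)\n        u = u + gamma*(B0 @ beta + theta1*(B1 @ beta)\n                       + theta2*(B2 @ beta) + r)\n    return beta, theta1, theta2\n\\end{verbatim}\n\nThe linear solves can exploit the many zeros of the B-spline matrices noted above, and the default initial values ${\\theta }^0_1={\\theta }^0_2=3.0$ mirror those used in the convergence study of figure \\ref{fig:S3}.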
\n\n\\subsection*{Examples}\n\nThis section describes the theory-guided learning of the Korteweg-de Vries equation and the Kuramoto-Sivashinsky equation from their noise-corrupted measured responses using \\textbf{SNAPE}. The performance of the parameter estimation is already demonstrated in Table 1. The simulated data for both models is obtained from \\citet{rudy2017data}.\n\n\\subsubsection*{Korteweg-de Vries (KdV) equation}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\textwidth]{S5.eps}\n\t\\caption{\\textbf{(A)} 5\\% Gaussian noise corrupted measured data points overlaid on the surface of the true solution of the Korteweg-de Vries equation. \\textbf{(B)} The measured response shows two traveling waves with different amplitudes. \\textbf{(C)} The analytical approximate solution of the underlying PDE model. The black dashed lines indicate the position and time instant of the response shown as 1D plots in the figures below. The comparison of the estimated solution from the noisy measured data to the true solution \\textbf{(D)} at position $x=0$ and \\textbf{(E)} at time instant $t=10$ reveals the efficacy and robustness of SNAPE.}\\label{fig:S5}\n\\end{figure}\n\nThe KdV equation is related to many physical problems, including but not limited to waves in shallow water with a weakly nonlinear restoring force and acoustic waves in a plasma or on a crystal lattice. The corresponding PDE model is given as\n\n\\begin{equation} \\label{SEQ16} \n\t\\frac{\\mathrm{\\partial }u}{\\mathrm{\\partial }t}+{\\theta }_1u\\frac{\\mathrm{\\partial }u}{\\mathrm{\\partial }x}+{\\theta }_2\\frac{{\\mathrm{\\partial }}^3u}{\\mathrm{\\partial }x^3}=0 \n\\end{equation} \n\nThe numerical simulation of the response $u\\left(x,t\\right)\\mathrm{\\in }{\\mathbb{R}}^{512\\times 201}$ is performed in the domain $x\\mathrm{\\in }\\left[-30,\\ 30\\right]$ and $t\\mathrm{\\in }\\left[0,\\ 20\\right]$ for the parameter values $\\boldsymbol{\\theta }=\\left(6.0,\\ 1.0\\right)$. It models the 1D propagation of two non-interacting traveling waves of different amplitudes. As shown in Table \\ref{tab:01}, \\textbf{SNAPE} robustly estimates the parameters of the KdV equation with high accuracy for cases where the simulated response is corrupted with 1\\% and 5\\% Gaussian noise. Figure \\ref{fig:S5} (A) shows one such instance of measured data corrupted with 5\\% noise overlaid on the true response of the KdV equation. The estimated functional solution approximates well the true response of the model as shown in the time history plot in figure \\ref{fig:S5} (D) and an instantaneous snapshot of the response in figure \\ref{fig:S5} (E).\n\n\\subsubsection*{Kuramoto-Sivashinsky (KS) equation}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\textwidth]{S6.eps}\n\t\\caption{\\textbf{(A)} The measured data points with 5\\% Gaussian noise overlaid on the surface of the true solution of the Kuramoto-Sivashinsky equation. \\textbf{(B)} The measured response demonstrates a complex spatiotemporal pattern. \\textbf{(C)} The analytical approximate solution of the underlying PDE model. The black dashed lines indicate the position and time instant of the response shown as 1D plots in the figures below. 
The comparison of the estimated solution from the noisy measured data to the true solution \\textbf{(D)} at position $x=65$ and \\textbf{(E)} at time instant $t=70$ reveals the efficacy and robustness of SNAPE.}\\label{fig:S6}\n\\end{figure}\n\nThe fourth-order nonlinear KS equation has attracted a great deal of attention as a model of the complex spatiotemporal dynamics of spatially extended systems driven far from equilibrium by intrinsic instabilities, such as instabilities in laminar flame fronts, phase dynamics in reaction-diffusion systems, and instabilities of dissipative trapped-ion modes in plasmas. The PDE model of the KS equation in one space dimension is given as\n\n\\begin{equation} \\label{SEQ17} \n\t\\frac{\\mathrm{\\partial }u}{\\mathrm{\\partial }t}+{\\theta }_1u\\frac{\\mathrm{\\partial }u}{\\mathrm{\\partial }x}+{\\theta }_2\\frac{{\\mathrm{\\partial }}^2u}{\\mathrm{\\partial }x^2}+{\\theta }_3\\frac{{\\mathrm{\\partial }}^4u}{\\mathrm{\\partial }x^4}=0 \n\\end{equation} \n\nThe original data consist of the solution domain $x\\mathrm{\\in }\\left[0,\\ 100.5\\right]$ and $t\\mathrm{\\in }\\left[0,\\ 100\\right]$ for the parameter values $\\boldsymbol{\\theta }=\\left(1.0,\\ 1.0,\\ 1.0\\right)$. In the present study, however, a part of the response $u\\left(x,t\\right)\\mathrm{\\in }{\\mathbb{R}}^{524\\times 151}$ in the domain $x\\mathrm{\\in }\\left[49.2,\\ 100.5\\right]$ and $t\\mathrm{\\in }\\left[40,\\ 100\\right]$ is used to infer the parameters of the model. Even though the model contains a fourth-order derivative and the measured response is corrupted with Gaussian noise (1\\% and 5\\%), \\textbf{SNAPE} is successful in estimating the parameters with reasonable accuracy and uncertainty as tabulated in Table \\ref{tab:01}. Figure \\ref{fig:S6} (A) shows one such instance of measured data corrupted with 5\\% noise overlaid on the true response of the KS equation. The estimated analytical solution approximates well the true response of the model as shown in the time history plot in figure \\ref{fig:S6} (D) and an instantaneous snapshot of the response in figure \\ref{fig:S6} (E).\n\n\\subsubsection*{Comparative study}\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{\\textbf{Comparative performance of SNAPE. }The relative error of parameter estimation along with its variance in percentage for \\textbf{SNAPE }is compared with that of \\citet{rudy2017data} for the following PDE models with the same dataset. 
In general, the accuracy and robustness of \\textbf{SNAPE}'s estimation are better from 5\\% Gaussian noise corrupted data than that of \\citet{rudy2017data} from data with 1\\% Gaussian added noise.}\n\t\\begin{tabular}{|p{1.0in}|p{1.8in}|p{0.8in}|p{0.7in}|p{0.7in}|} \\hline \n\t\t{} & {} & {} & {} & {} \\\\[2pt]\n\t\t\\textbf{Differential Equations} & \\hspace{15mm} \\textbf{Form} & \\textbf{\\citet{rudy2017data}\\newline (1\\% Noise)} & \\textbf{SNAPE \\newline (1\\% Noise)} & \\textbf{SNAPE \\newline (5\\% Noise)} \\\\ \\hline \n\t\t{} & {} & {} & {} & {} \\\\[2pt]\n\t\t\\textbf{Kuramoto-Sivashinsky equation} & $u_t+{\\theta }_1uu_x+{\\theta }_2u_{xx}+{\\theta }_3u_{xxxx}=0$ & $52\\pm 1.4\\%$ & $3.6\\pm 0.92\\%$\\textbf{\\textit{\\newline }} & $20.8\\pm 19\\%$\\newline \\\\ \\hline \n\t\t{} & {} & {} & {} & {} \\\\[2pt]\n\t\t\\textbf{Burgers' equation} & $u_t+{\\theta }_1uu_x+{\\theta }_2u_{xx}=0$\\textit{ } & $0.8\\pm 0.6\\%$ & $1.0\\pm 0.08\\%$ & $1.0\\pm 0.55\\%$\\newline \\\\ \\hline \n\t\t{} & {} & {} & {} & {} \\\\[2pt]\n\t\t\\textbf{Korteweg-de Vries equation} & $u_t+{\\theta }_1uu_x+{\\theta }_2u_{xxx}=0$ & $7\\pm 5\\%$ & $0.4\\pm 0.06\\%$ & $0.7\\pm 0.28\\%$\\newline \\\\ \\hline \n\t\t{} & {} & {} & {} & {} \\\\[2pt]\n\t\t\\textbf{Nonlinear Schr\\\"{o}dinger equation} & ${\\psi }_t+{\\theta }_1{\\psi }_{xx}+{\\theta }_2{\\left|\\psi \\right|}^2\\psi =0$\\textit{ } & $3\\pm 1\\%$ & $0.9\\pm 0.13\\%$ & $5.7\\pm 0.31\\%$ \\\\ \\hline \n\t\\end{tabular}\n\t\\label{tab:S01}%\n\\end{table}%\n\n\\begin{table}[htbp]\n\t\\centering\n\t\\caption{\\textbf{Comparative performance of SNAPE for Navier-Stokes equation. }The relative error of parameter estimation along with its variance in percentage for \\textbf{SNAPE }is compared with that of \\citep{raissi2019physics} with the same dataset. The accuracy of \\textbf{SNAPE}'s estimation from 5\\% Gaussian noise corrupted data is comparable to that of \\citep{raissi2019physics} from data with 1\\% Gaussian added noise.}\n\t\\begin{tabular}{|p{1.0in}|p{1.8in}|p{0.8in}|p{0.7in}|p{0.7in}|} \\hline \n\t\t{} & {} & {} & {} & {} \\\\[2pt]\n\t\t\\textbf{Differential Equations} & \\hspace{15mm} \\textbf{Form} & \\textbf{\\citet{raissi2019physics} \\newline (1\\% Noise)} & \\textbf{SNAPE (1\\% Noise)} & \\textbf{SNAPE (5\\% Noise)} \\\\ \\hline \n\t\t{} & {} & {} & {} & {} \\\\[2pt]\n\t\t\\textbf{Navier-Stokes equation} & ${\\omega }_t+{\\theta }_1{\\omega }_{xx}+{\\theta }_2{\\omega }_{yy}+{\\theta }_3u{\\omega }_x+{\\theta }_4v{\\omega }_y=0$\\textit{ } & $8.9\\%$ & $9.1\\pm 0.07\\%$\\textbf{\\textit{\\newline }} & $9.2\\pm 1.6\\%$\\newline \\\\ \\hline \n\t\\end{tabular}\n\t\\label{tab:S02}%\n\\end{table}%\n\nThis section compares the efficacy of the proposed method of \\textbf{SNAPE} with that of the prevalent methods in the literature of estimating parameters of PDE models. The data for the PDE models of KS equation, Burgers' equation, KdV equation, and NLSE are obtained from the same source of \\citet{rudy2017data} whose results are compared with \\textbf{SNAPE} in Table \\ref{tab:S01}. The same data for the estimation provides a common basis for the comparison. The regression-based method in \\citet{rudy2017data} is demonstrated for measurement noise up to 1\\%. 
However, the accuracy and robustness of \\textbf{SNAPE} not only outperforms that of \\citet{rudy2017data} for all the PDE models corrupted with 1\\% Gaussian noise, but also performs better with 5\\% added noise for almost all the cases.\n\t\nThe velocity field data for the Navier-Stokes equation is obtained from \\citet{raissi2019physics}. The vorticity field data is numerically obtained from it and subsequently, the two velocity components and vorticity field data are corrupted with Gaussian noise to replicate the measurement noise. The following table compares the performance of \\textbf{SNAPE} with the deep learning-based method in \\citet{raissi2019physics} for the same dataset. In \\citet{raissi2019physics} the authors estimate the parameters from one random instance of added noise, but here the robustness and repeatability of \\textbf{SNAPE} are demonstrated by performing parameter estimation from 10 bootstrap samples of noise-induced data. The accuracy of estimation using \\textbf{SNAPE }for 5\\% noise shown in Table \\ref{tab:S02} is comparable to that in \\citet{raissi2019physics} for 1\\% noise.\n\n\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection{Supplemental Item 1: Detailed parameters for the Monte Carlo code}\n\n\\subsubsection{Choice of lattice parameters}\nFor probing solid skyrmion states with locally triangular structure \nand their melting into a liquid, we choose a periodic rectangular spin lattice \nwith $L_x = k \\times 97$ and $L_y = k \\times 84$ with integer $k$.\nThis realizes a ratio $L_y\/L_x = 0.865979...$ very close \nto $\\sqrt{3}\/2 = .866025...$, and the distortion is minimal for a triangular \nskyrmion crystal \n(in ``base'' configuration)\nwith $N_\\mathrm{sk} = n \\times n$ skyrmions with even $n$ ($n$ rows in the $y$ \ndirection of $n$ skyrmions in the $x$ direction). To probe helical skyrmion \nstates with $\\pm 45^\\circ$ tilting as well as crystal skyrmion states with \nsquare-lattice structure, we rather use square-shaped spin lattices with linear \nlength $L$.\n\n\\subsubsection{Energy minimization and simulated annealing in magnetic field at $T \/ J \n=0$}\nAt zero temperature, energy minimization and simulated annealing\nin magnetic field \nare performed by the zero-temperature heat-bath\nalgorithm (where spins are aligned with their molecular \nfield produced by neighboring spins and by the external magnetic field $h$).\nThis produces the zero-temperature phase diagram (see\n\\Fig{fig:fig2} of the main text). \nWe typically use $4 \\times 10^3$ sweeps to minimize the energy of the system at \neach magnetic field.\nTriangular-crystal and square-shaped skyrmion initial configurations \nat various densities are pieced together from individual skyrmions placed\nvery close to the corresponding lattice sites (each individual skyrmion is \nprepared in\na small system with $7 \\times 7$ spins at $h \/ J = 0.6$). The helical \ninitial configurations for \\Fig{fig:fig2}(b)\nare prepared at $h \/ J = 0$ from a single wave\nvector $2m\\pi \\left(1, 1 \\right) \/ L $ with integer $m$ and then \nrelaxed \nthrough the zero-temperature heat-bath algorithm. 
\nSimulated annealing in $h$ with steps of size $|\\Delta h \/ J| =\n0.001$ then leads from $h \/ J = 0.6$ to lower $h$, and from \n$h \/ J = 0$ to higher $h$ for triangular\nskyrmion crystal states, and from $h \/ J = 0$ to higher magnetic field for\nhelical states (see \\Fig{fig:fig2} of the main text).\nThis yields the energies for helical, triangular-crystal and square-crystal \nstates, as well as for the paramagnetic state.\nSystem sizes are $\\left( L_x, L_y \\right) = (291, 252)$\nfor triangular skyrmion crystals, and $L = 256$ for helical states and for \nsquare skyrmion crystals.\n\n\n\\subsubsection{Monte Carlo parameters at finite temperature}\nIn our computations of the correlation functions at \nfinite temperature,\none unit of Monte Carlo time consists of one heat-bath sweep and of \n$10^4$ \nover-relaxation sweeps, where one sweep denotes one update per spin. We use \nGPUs to simulate the system with the \ncheckerboard decomposition \\cite{Weigel2011,Weigel2012}. \nEach block of the checkerboard consists of $16 \\times 16$ spins. As the \nlattice dimensions $L_x$ and $L_y$ are not necessarily multiples of the \nblock size, the\ncheckerboard is randomly shifted after every $10^2$ over-relaxation sweeps\nin order to ensure the ergodicity of the algorithm. \n\nBecause of the anisotropic shape of the underlying rectangular lattice, the\nnumber of skyrmions is incommensurate with the ``tip'' crystalline configuration\nwith $\\mathrm{Re}\\Psi_6 < 0$, and the lattice structure of $g \\left(x, y\n\\right)$ would be strongly distorted, resulting in a higher energy. We thus \nfocus on ``base'' configurations which minimize the distortion with respect to the \nunderlying spin lattice.\n\n\\subsection{Supplemental Item 2: Cooling-rate dependence of the number of \nskyrmions}\n\\label{supp3}\n\n\\begin{figure}[htbp]\n\\includegraphics[width=0.4\\linewidth]{figS1.pdf}\n\\caption{\\small\nCooling-rate dependence of the number of skyrmions in\nconfigurations at $T \/ J = 0.155$ obtained by simulated annealing simulations\nfrom a high temperature. The magnetic field is $h \/ J = 0.5$. \nThe leftmost point $\\theta = 0$ corresponds to\nthe equilibrium limit.\n}\n\\label{fig:figS1}\n\\end{figure}\n\nIn our production runs, the skyrmion number is kept rigorously fixed at what \nwe believe to be the thermodynamically relevant value. \nTo determine this dominant density of skyrmions at finite temperature, we run \nsimulated annealing (SA) simulations from high temperature. In our SA at $h \/ J = 0.5$\nstarting from $T \/ J = 0.955$ to $0.155$ with $\\Delta T \/ J = 0.01$, the number of\nMonte Carlo steps at each temperature, $M$, is controlled, where one Monte\nCarlo time step consists of one heat-bath sweep followed by ten over-relaxation\nsweeps. The resultant skyrmion number\n$N_\\mathrm s$ decreases with decreasing \ncooling rate $\\theta = (\\Delta T \/ J) \/ M$ (see \n\\Fig{fig:figS1}). The equilibrium density of\nskyrmions is obtained in the limit of vanishing cooling rate. $N_\\mathrm s$ approaches $128 \\times 128$ in this limit for the system with $L_x =\n1164$ and $L_y = 1008$. For smaller systems with $(L_x, L_y) = (582, 504)$ and\n$(291, 252)$, the same limiting density is obtained. 
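In other words, the limiting areal density of skyrmions is the same for all three lattices,\n$N_\\mathrm s \/ (L_x L_y) = 128^2\/(1164\\times 1008)\\approx 1.4\\times 10^{-2}$ skyrmions per lattice site.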
\n\n\\subsection{Supplemental Item 3: Determining the position of a single skyrmion}\n\nAlthough each skyrmion is composed of Heisenberg spins on a \ndiscrete square lattice, \nwe may assign it a real-valued position ($(x,y)$).\nThis allows us to effectively map the Heisenberg-spin model to \na model of interacting particles \nand to compute positional and the bond-orientational order, in analogy to what \nis done for two-dimensional particle systems. At zero temperature, an isolated \nskyrmion has a symmetric structure around its core, and spins near a core of a \nskyrmion are antiparallel to the magnetic\nfield (they thus point into the $-z$ direction)). We thus consider a connected \ncluster of spins with $S_i^{\\left( z\n\\right)} < 0$ as a skyrmion, and define the position of the skyrmion \n$\\vec R_\\mathrm{sk}$ in the two-dimensional plane as\n\\begin{equation}\nR_\\mathrm{sk}^{\\left( \\alpha \\right)} = \n\\frac{1}{A_\\mathrm{sk}}\\sum_{i \\in \\mathrm{skyrmion}}\\left( r_i^{\\left( \\alpha \n\\right)} \n+ S_i^{\\left( \\alpha \\right)} \\right), \\quad (\\alpha = x, y)\n\\label{e:skyrmion_locator}\n\\end{equation}\nwhere $A_\\mathrm{sk}$ is the number of spins composing the skyrmion. \nAt finite temperature, the thermal fluctuations of the Heisenberg spins \ninduce fluctuations in the determination of the skyrmion position.\n\n\\subsection{Supplemental Item 4: Coupling between skyrmion positions and \nthe spin lattice}\nThe skyrmion locator of \\eq{e:skyrmion_locator} allows us to compute the \ncoupling potential between skyrmion locations and spin lattice sites. We \nsimulate one\nsingle skyrmion in the system with $16 \\times 16$ spins at $T \/ J = 0.1$ to\nobtain a histogram of the skyrmion locations\n\\begin{equation*}\nP \\left( \\Delta \\vec r \\right) \n= \\left\\langle \n\\delta \\left( \\Delta \\vec r - \\min_k \\left(\\vec R_\\mathrm{sk} - \\vec \\ell_k \n\\right)\\right) \\right\\rangle,\n\\end{equation*}\nwhere the bracket $\\left\\langle \\cdots \\right\\rangle$ represents average\nover configurations with only one skyrmion, and $\\vec \\ell_k$ represents the\nlocation of $k$-th lattice site. Then, the coupling potential is estimated as\n$V_\\mathrm{coup} \\left( \\Delta\\vec r\\right) = -\\log \\left( P\\left( \\Delta \\vec\nr \\right) \\right) \/ \\beta$. \\Fig{fig:sk-c-l} shows $V_\\mathrm{coup} \\left(\n\\Delta\\vec r\\right)$ at various magnetic fields. The coupling\npotential is clearly nonzero. Furthermore, the potential minimum depends \non the magnetic field (see\n\\Fig{fig:sk-c-l}). We checked that the coupling potential is independent of\nsystem size.\n\n\\begin{figure}[b]\n\\includegraphics[width=1\\linewidth]{figS2.pdf}\n\\caption{\nEffective coupling potential $V_\\mathrm{coup} \\left( \\Delta \\vec r \\right)$\ncharacterizing the coupling between skyrmion positions and lattice\nsites. For $h \/ J \\gtrsim 0.68$, \nthe potential minimum \ncoincides with the lattice sites ($\\Delta x = \\Delta y = 0$),\nbut at smaller magnetic field, the minimum lies \nbetween the lattice spins, at $\\Delta x = \\Delta y = 0.5$.\n}\n\\label{fig:sk-c-l}\n\\end{figure}\n\n\n\\subsection{Supplemental Item 5: Phase diagram of the system}\nTransition temperatures between the paramagnetic and the helical phases in\nthe phase diagram \\Fig{fig:fig1} are estimated as the peak locations of the\nspecific heat. We perform regular Monte Carlo simulations using the heat-bath,\nthe over-relaxation, and the exchange Monte Carlo (parallel tempering)\nalgorithms. 
System sizes range from $L = 32$ (the total number of spins is\n$N = 1024$) to $L = 256$ ($N = 65536$). The number of Monte Carlo sweeps is\ntypically $10^5$. We checked that the peak location of the specific heat varies \nvery little with the system size, up to $L = 256$.\n\n\\fi\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\setcounter{equation}{0}\n\nModels of three-dimensional (3D) gravity were introduced to help clarify the\nhighly complex dynamical behavior of realistic\nfour-dimensional general relativity (GR). In the last three decades, they have\nled to a number of outstanding results \\cite{x1}. However, in the early\n1990s, Mielke and Baekler \\cite{x2} proposed a new, non-Riemannian\napproach to 3D gravity, based on the Poincar\\'e gauge theory (PGT)\n\\cite{x3,x4,x5,x6}. In contrast to the traditional GR with an underlying\nRiemannian geometry of spacetime, the PGT approach is characterized by a\nRiemann--Cartan geometry, with both the curvature and the torsion of\nspacetime as carriers of the gravitational dynamics. Thus, PGT allows\nexploring the interplay between gravity and geometry in a more general\nsetting.\n\n
Three-dimensional GR with or without a cosmological constant, as well as\nthe Mielke--Baekler (MB) model, are topological theories without\npropagating modes. From the physical point of view, such a degenerate\nsituation is certainly not quite realistic. In the context of Riemannian\ngeometry, this limitation is surmounted by two well-known models:\ntopologically massive gravity \\cite{x7} and the Bergshoeff--Hohm--Townsend\nmassive gravity \\cite{x8}. On the other hand, including propagating modes\nin PGT is much more natural: it is achieved simply by using Lagrangians\nquadratic in the field strengths \\cite{x9,x10,x11,x12}.\n\n
Since the general parity-invariant PGT Lagrangian in 3D is defined by\neight arbitrary para\\-me\\-ters \\cite{x11}, it is a theoretical challenge\nto find out which values of the parameters are allowed in a viable theory.\nFollowing the approach of Sezgin and Niewenhuizen \\cite{x13},\nHelay\\\"el-Neto et al. \\cite{x10} used the \\emph{weak-field approximation}\naround the Minkowski background to analyze this issue in a\nparity-violating version of PGT, and found a number of interesting\nrestrictions on the parameters. However, one should be very careful with\nthe interpretation of these results, since (i) it is not clear how the\ntransition from Minkowski to (anti-)de Sitter [(A)dS] background might\ninfluence the perturbative analysis, and (ii) the weak-field approximation\ndoes not always lead to a correct identification of the physical degrees\nof freedom. Regarding (ii), we note that the constrained Hamiltonian\nmethod \\cite{x14,x4} is best suited for analyzing the dynamical content of\ngauge field theories, respecting fully their \\emph{nonlinear structure}.\nAs noticed by Chen et al. \\cite{x15} and Yo and Nester \\cite{x16}, it may\nhappen, for some ranges of parameters, that the canonical structure of a\ntheory (the number and\/or type of constraints) is changed after\nlinearization in a way that affects its physical content, such as the\nnumber of physical degrees of freedom. Based on the \\emph{canonical\nstability under linearization} as a criterion for an acceptable choice of\nparameters, Shie et al. \\cite{x17} were able to define a PGT cosmological\nmodel that offers a convincing explanation of dark energy as an effect\ninduced by torsion. 
Recently, the Bergshoeff--Hohm--Townsend massive\ngravity is found to be canonically unstable under linearization\n\\cite{x18,x19}.\n\nIn this paper, we use the constrained Hamiltonian formalism to study (a)\nthe phenomenon of ``constraint bifurcation\" and (b) the stability under\nlinearization of the general parity-invariant PGT in 3D \\cite{x11}, in\norder to find out the parameter values that define consistent models of 3D\ngravity with propagating torsion. Because of the complexity of the\nHamiltonian structure, we restrict our attention to the scalar sector,\nwith $J^P=0^+$ or $0^-$ modes, defined with respect to the (A)dS\nbackground. Investigation of higher spin modes is left for a future study.\n\nThe paper is organized as follows. In Section 2, we review basic\nLagrangian aspects of the parity-invariant PGT in 3D. In Section 3, we\ngive a brief account of the weak-field approximation around the (A)dS\nbackground, restricting our attention to the scalar sector, with $J^P=0^+$\nor $0^-$. In Section 4, we analyze general aspects of the canonical\ndynamics of PGT; in particular, we examine how, depending on certain\ncritical values of parameters, some extra primary constraints may appear\n(if-constraints), leading to a significant effect on the Hamiltonian\nstructure. In Section 5, we analyze the canonical structure of the\nspin-$0^+$ sector, including the ``constraint bifurcation\" effects. Then,\nthe test of canonical stability under linearization is used to reveal\ndynamically acceptable values of parameters. In Section 6, the same type\nof analysis is carried out for the spin-$0^-$ sector. Section 7 is devoted\nto concluding remarks, and appendices contain technical details.\n\nOur conventions are as follows: the Latin indices $(i,j,k, ...)$ refer\nto the local Lorentz frame, the Greek indices $(\\mu,\\nu,\\l, ...)$ refer\nto the coordinate frame, and both run over 0,1,2; the metric components\nin the local Lorentz frame are $\\eta_{ij} = (+,-,-)$; totally\nantisymmetric tensor $\\ve^{ijk}$ is normalized to $\\ve^{012}= 1$.\n\n\\section{Lagrangian formalism}\n\\setcounter{equation}{0}\n\nWe begin our considerations by a short account of the Lagrangian formalism\nfor PGT. Assuming parity invariance, the dynamics of 3D gravity with\npropagating torsion is determined by the gravitational Lagrangian\n(density) $\\tcL_G=b\\cL_G$,\n\\bsubeq\\lab{2.1}\n\\be\n\\cL_G=-aR-2\\L_0+\\cL_{T^2}+\\cL_{R^2}\\, ,\n\\ee\nwhere $\\L_0$ is a bare cosmological constant, $a=1\/16\\pi G $, and the\npieces quadratic in the field strengths read:\n\\bea\n\\cL_{T^2}&:=&\\frac{1}{2}T^{ijk}\\left(a_1{}^{(1)}T_{ijk}\n +a_2{}^{(2)}T_{ijk}+a_3{}^{(3)}T_{ijk}\\right)\\,,\\nn\\\\\n\\cL_{R^2}&:=&\\frac{1}{4}R^{ijkl}\\left( b_4{}^{(4)}R_{ijkl}\n +b_5{}^{(5)}R_{ijkl}+b_6{}^{(6)}R_{ijkl}\\right)\\,,\n\\eea\nwhere $^{(n)}T_{ijk}$ and $^{(n)}R_{ijkl}$ are irreducible components of\nthe torsion and the Riemann--Cartan curvature \\cite{x11}. Since the Weyl\ncurvature vanishes in 3D, one can rewrite these expressions in the form\nthat is more practical for the canonical analysis:\n\\bea\n\\cL_{T^2}&=&\n T^{ijk}\\left(\\a_1T_{ijk}+\\a_2T_{kji}+\\a_3\\eta_{ij}V_k\\right)\\,,\\nn\\\\\n\\cL_{R^2}&=&R^{ij}\\left(\\b_1R_{ij}+\\b_2R_{ji}\n +\\b_3\\eta_{ij}R\\right)=:R^{ij}\\cH_{ij}\\, ,\n\\eea\n\\esubeq\nHere, $V_k:=T^m{}_{mk}$, $R_{ij}:=R^m{}_{imj}$ is the Ricci tensor, $R$ is\nthe scalar curvature, and\n\\bea\n&&\\a_1=\\frac{1}{6}(2a_1+a_3)\\, ,\\quad\\a_2=\\frac{1}{3}(a_1-a_3)\\, ,\n \\quad \\a_3=\\frac{1}{2}(a_2-a_1)\\, . 
\\nn\\\\\n&&\\b_1=\\frac{1}{2}(b_4+b_5)\\, ,\\quad \\b_2=\\frac{1}{2}(b_4-b_5)\\, ,\n \\quad \\b_3=\\frac{1}{12}(b_6-4b_4)\\, . \\nn\n\\eea\nWe also introduce the covariant momenta $\\cH_{ijk}=\\pd\\cL_{G}\/\\pd\nT^{ijk}$ and $\\cH_{ijkl}=\\pd\\cL_{G}\/\\pd R^{ijkl}$:\n\\bea\n\\cH_{ijk}&=&2\\left(a_1{}^{(1)}T_{ijk}\n +a_2{}^{(2)}T_{ijk}+a_3{}^{(3)}T_{ijk}\\right) \\nn\\\\\n &=&4\\left(\\a_1 T_{ijk}+\\a_2 T_{[kj]i}\n +\\a_3\\eta_{i[j}v_{k]}\\right)\\, , \\nn\\\\\n\\cH_{ijkl}&=&\n -2a(\\eta_{ik}\\eta_{jl}-\\eta_{jk}\\eta_{il})+\\cH'_{ijkl}\\,,\\nn\\\\\n\\cH'_{ijkl}&=&2\\left(b_4{}^{(4)}R_{ijkl}\n +b_5{}^{(5)}R_{ijkl}+b_6{}^{(6)}R_{ijkl}\\right) \\nn\\\\\n &=&2(\\eta_{ik}\\cH_{jl}-\\eta_{jk}\\cH_{il})-(k\\lra l)\\, . \\nn\n\\eea\n\nGeneral field equations for the PGT theory \\eq{2.1} are given in\n\\cite{x11}. Without matter contribution, these equations, transformed to\nthe local Lorentz basis, take the form:\n\\bsubeq\\lab{2.2}\n\\bea\n&&\\nab^m\\cH_{imj}\n +\\frac{1}{2}\\cH_i{}^{mn}(-T_{jmn}+2\\eta_{jm}V_n)\n -t_{ij}=0\\, , \\\\\n&&2aT_{kij}+2T^m{}_{ij}(\\cH_{mk}-\\eta_{mk}\\cH)\n +4\\nab_{[i}(\\cH_{j]k}-\\eta_{j]k}\\cH)\n +\\ve_{ijn}\\ve^{mr}{_k}\\cH_{mr}{^n}=0\\, ,\n\\eea\n\\esubeq\nwhere $\\cH=\\cH^k{_k}$, and $t_{ij}$ is the energy-momentum tensor of\ngravity:\n$$\nt_{ij}:=\\eta_{ij}\\cL_G-T^{mn}{_i}\\cH_{mnj}+2a\\hR_{ji}\n -2(\\hR^n{_i}\\cH_{nj}-\\hR_j{}^{nm}{_i}\\cH_{nm})\\, .\n$$\n\nRelying again on the vanishing of the Weyl curvature, one can express\nBianchi identities in terms of the Ricci tensor. In the local Lorentz\nbasis, these identities take the form:\n\\bea\n&&\\ve^{mnr}\\nab_m T^i{}_{nr}+\\ve^{rsn}T^i{}_{mn}T^m{}_{rs}\n +2\\ve^{imn}R_{mn}=0\\, , \\nn\\\\\n&&\\nab_k G^{ki}-V_kG^{ki}=0\\, ,\n\\eea\nwhere $G_{ki}:=R_{ki}-\\frac{1}{2}\\eta_{ik}R$.\n\n\\section{Scalar excitations around (A)dS background}\n\\setcounter{equation}{0}\n\nParticle spectrum of 3D gravity with torsion \\eq{2.1} around the Minkowski\nbackground $M_3$ is already known \\cite{x10,x11}. Here, we wish to examine\nthe modification of this spectrum induced by transition to the (A)dS\nbackground. This will help us to clarify the relation between the\ncanonical stability of the theory under linearization and its $M_3$ or\n(A)dS particle spectrum. Our attention is restricted to the scalar sector,\nwith $J^P=0^+,0^-$ modes.\n\nMaximally symmetric configuration of 3D gravity with torsion is defined by\nthe set of fields $\\bar\\phi=(\\bar b^i{_\\m},\\bar A^{ij}{_\\m})$, such that\n\\be\n\\bT_{ijk}=p\\ve_{ijk}\\,,\\qquad\n\\bR^{ij}{}_{mn}\n =-q\\left(\\d^i{_m}\\d^j{_n}-\\d^i{_n}\\d^j{_m}\\right)\\, , \\lab{3.1}\n\\ee\nwhere the parameters $p$ and $q$ define an effective cosmological\nconstant,\n$$\n\\Leff:=q-\\frac{p^2}{4}\\, .\n$$\nIn order for this configuration to be a solution of the field equations in\nvacuum, the para\\-me\\-ters $p$ and $q$ have to satisfy the following\nconditions \\cite{x11}:\n\\bsubeq\\lab{3.2}\n\\bea\n&&p(a+qb_6+2a_3)=0\\, , \\lab{3.2a}\\\\\n&&aq-\\L_0+\\frac{1}{2}p^2a_3-\\frac{1}{2}q^2b_6=0\\, . \\lab{3.2b}\n\\eea\n\\esubeq\nIn the weak-field approximation around $\\bar\\phi$, the gravitational\nvariables $\\phi=(b^i{_\\m},A^{ij}{_\\m})$ take the form\n$\\phi=\\bar\\phi+\\tilde\\phi$. We use the convention that indices of the\nlinear excitations $\\tilde\\phi$ are changed by the background triad and\/or\nmetric.\n\nThe analysis of the particle spectrum is based on the linearized field\nequations. 
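As a side remark, the vacuum conditions \\eq{3.2} are simple enough to be checked numerically; the following minimal Python sketch solves them for the background parameters $(p,q)$, using parameter values that are purely hypothetical and not taken from the text.
\\begin{verbatim}
# Illustrative check of the vacuum conditions (3.2); all parameter values
# below are hypothetical and only serve to exercise the two branches.
import numpy as np

a, Lambda0, a3, b6 = 1.0, -0.3, 0.2, 0.5

# Branch p = 0 (torsion-free background): (1/2) b6 q^2 - a q + Lambda0 = 0
for q in np.roots([0.5 * b6, -a, Lambda0]).real:
    print("p = 0.000, q = %.3f, Lambda_eff = %.3f" % (q, q))

# Branch p != 0: a + q b6 + 2 a3 = 0 fixes q, and (3.2b) then fixes p^2
q = -(a + 2.0 * a3) / b6
p2 = 2.0 * (Lambda0 - a * q + 0.5 * b6 * q * q) / a3
if p2 > 0.0:
    p = np.sqrt(p2)
    print("p = %.3f, q = %.3f, Lambda_eff = %.3f" % (p, q, q - p * p / 4.0))
\\end{verbatim}
For $p=0$ the background is Riemannian and $\\Leff=q$, while the second branch describes a background with torsion.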
In the same approximation, the Bianchi identities read:\n\\bsubeq\\lab{3.3}\n\\bea\n&&\\ve^{kmn}\\bar\\nabla_k\\tilde T^i{}_{mn}\n -2p\\tilde V^i+2\\ve^{imn}\\tR_{mn}=0\\, , \\lab{3.3a}\\\\\n&&\\bar\\nabla_k\\tilde G^{ki}-q\\tilde V^i=0\\, . \\lab{3.3b}\n\\eea\n\\esubeq\n\n\\subsection{Spin-\\mb{0^+} mode}\n\nLooking at the particle spectrum of the theory \\eq{2.1} on the $M_3$\nbackground, see section 3 in \\cite{x11}, one finds that the spin-$0^+$\nmode has a finite mass (and propagates) if\n\\be\na_2(b_4+2b_6)\\ne 0\\, . \\nn\n\\ee\nIn order to study the spin-$0^+$ mode, we adopt the following, somewhat\nsimplified conditions:\n\\bsubeq\\lab{3.4}\n\\be\na_2,b_6\\ne 0\\, ,\\qquad a_1=a_3=b_4=b_5=0\\, . \\lab{3.4a}\n\\ee\nIn fact, this choice is not unique since the existence of a spin-$0^+$\nmode can be realized, for instance, without requiring $b_4=0$. However,\nour ``minimal\" choice \\eq{3.4a} greatly simplifies the calculations, and\nmoreover, one does not expect that any essential dynamical feature of the\nspin-$0^+$ mode will be thereby lost, see \\cite{x15,x16}. The\ncorresponding Lagrangian reads:\n\\be\n\\cL^+_G=-aR-2\\L_0\n +\\frac{1}{2}a_2V^k V_k+\\frac{1}{12}b_6 R^2\\,, \\lab{3.4b}\n\\ee\nand the conditions \\eq{3.2} reduce to\n\\be\np(a+qb_6)=0\\, ,\\qquad aq-\\L_0-\\frac{1}{2}q^2b_6=0\\, .\n\\ee\n\\esubeq\n\nNow, we are going to show that the Minkowskian conditions \\eq{3.4a}\nequally well define the spin-$0^+$ mode with respect to the (A)dS\nbackground \\eq{3.1}. We start by noting that, under the conditions\n\\eq{3.4a}, the linearized field equations \\eq{2.2} read:\n\\bsubeq\\lab{3.5}\n\\bea\n&&(a+qb_6)\\tG_{ji}+a_2\\eta_{i[j}\\bar\\nab^k\\tV_{k]}\n +\\frac{b_6q}{3}\\eta_{ij}\\tR=0\\, , \\\\\n&&(a+qb_6)\\tT_{ijk}-\\frac{pb_6}{6}\\ve_{ijk}\\tR +a_2\\eta_{i[j}\\tV_{k]}\n +\\frac{b_6}{3}\\eta_{i[j}\\bar\\nab_{k]}\\tR=0\\, ,\n\\eea\n\\esubeq\nand their traces are\n\\bsubeq\\lab{3.6}\n\\bea\n&&-2a_2\\bar\\nabla_i\\tilde V^i+(a-qb_6)\\tR=0\\, , \\lab{3.6a}\\\\\n&&(a+qb_6+a_2)\\tilde V_k+\\frac{b_6}{3}\\bar\\nabla_k\\tR=0\\, .\\lab{3.6b}\n\\eea\n\\esubeq\nIn the generic case, by combining $\\bar\\nabla_k\\bar\\nabla^k$ of \\eq{3.6a}\nwith $\\bar\\nabla^k$ of \\eq{3.6b}, one obtains\n\\be\n\\left(\\bar\\nabla_i\\bar\\nabla^i+m_{0^+}^2\\right)\\s=0\\, ,\\qquad\nm_{0^+}^2=\\frac{3(a-qb_6)(a+qb_6+a_2)}{2a_2b_6}\\, , \\lab{3.7}\n\\ee\nwhere $\\s:=\\bar\\nabla_i\\tV^i$. Thus, the field $\\s$ can be identified as\nthe spin-$0^+$ excitation with respect to the (A)dS background, the mass of\nwhich is finite. In the limit of vanishing $q$, $m^2_{0^+}$ reduces to the\ncorresponding Minkowskian expression.\n\n\\subsection{Spin-\\mb{0^-} mode}\n\nSimilar analysis can be applied to the spin-$0^-$ excitation. We start\nfrom the Minkowski\\-an condition that the spin-$0^-$ mode has a finite\nmass (and propagates) \\cite{x11},\n$$\n(a_1+2a_3)b_5\\ne 0\\, .\n$$\nWe describe dynamics of the spin-$0^-$ sector by the simplified conditions:\n\\bsubeq\n\\be\na_3,b_5\\ne 0\\, ,\\qquad a_1=a_2=b_4=b_6=0\\, . \\lab{3.8a}\n\\ee\nThe related Lagrangian has the form\n\\be\n\\cL^-_G=-aR-2\\L_0+3a_3\\cA^2+b_5R_{[ij]}R^{[ij]}\\, , \\lab{3.8b}\n\\ee\nwith $\\cA=\\ve^{ijk}T_{ijk}\/6$, and the conditions \\eq{3.2} reduce to\n\\be\np(a+2a_3)=0\\, ,\\qquad aq-\\L_0+\\frac{1}{2}p^2a_3=0\\, . 
\\lab{3.8c}\n\\ee\n\\esubeq\n\nStarting from the linearized field equations,\n\\bsubeq\n\\bea\n&&a_3\\ve_{ijk}\\bar\\nab^k\\tilde{\\cA}+a_3p\\eta_{ij}\\tilde\\cA\n +\\frac{4a_3}{3}p\\ve_{(imn}\\tilde t_{j)}{}_{mn}\n -a_3p\\ve_{ijk}\\tV^k+a\\tilde G_{ji}+b_5q\\tilde R_{[ij]}=0\\,,\\\\\n&&a\\tT_{ijk}+pb_5\\ve^n{}_{jk}\\tR_{[ni]}\n +b_5\\bar\\nab_{[j}(\\tR_{k]i}\n -\\tR_{ik]})+2a_3\\ve_{ijk}\\tilde{\\cA}=0\\, .\n\\eea\n\\esubeq\nthe axial irreducible components of these equations read:\n\\bsubeq\n\\bea\n&&a_3\\bar\\nab^i\\tilde{\\cA}-a_3p\\tV^i\n -\\frac{1}{2}(a-qb_5)\\ve^{ijk}\\tR_{jk}=0\\, , \\nn\\\\\n&&(a+2a_3)\\tilde\\cA\n +\\frac{1}{3}b_5\\ve^{ijk}\\bar\\nabla_i\\tR_{jk}=0\\, . \\nn\n\\eea\n\\esubeq\nThen, the divergence of the first equation combined with the second one\nyields\n\\be\na_3\\bar\\nabla^i\\bar\\nabla_i\\tilde\\cA-pa_3\\bar\\nabla_i\\tilde V^i\n+\\frac{1}{2}(a-qb_5)\\frac{3(a+2a_3)}{b_5}\\tilde\\cA=0\\, . \\lab{3.11}\n\\ee\nNow, using the divergence of the first Bianchi identity \\eq{3.3a} and the\ncommutator identity $[\\bar\\nabla_m,\\bar\\nabla_n]\\tilde X_i\n =-p\\ve_{mnk}\\bar\\nabla^k \\tilde X_i -2q\\eta_{i[m}\\tilde X_{n]}$,\nwe find\n$$\n\\s\\equiv\\bar\\nabla_k\\tV^k=-\\frac{3}{2}p(a+2a_3)\\tilde\\cA=0\\, ,\n$$\nas a consequence of \\eq{3.8c}. Hence, \\eq{3.11} implies\n\\be\n\\left(\\bar\\nabla_k\\bar\\nabla^k+m_{0^-}^2\\right)\\tilde\\cA=0\\, ,\\qquad\nm_{0^-}^2=\\frac{3(a-qb_5)(a+2a_3)}{2a_3b_5}\\, .\n\\ee\nThus, generically, $\\tilde\\cA$ can be identified as the spin-$0^-$\nexcitation with respect to the (A)dS background. For $q=0$, $m^2_{0^-}$\ntakes the Minkowskian form.\n\n\\section{Hamiltonian structure}\n\\setcounter{equation}{0}\n\nIn this section, we analyze general features of the Hamiltonian structure\nof 3D gravity with propagating torsion, defined by the Lagrangian\n\\eq{2.1}; see \\cite{x4,x20}.\n\n\\subsection{Primary constraints}\n\nWe begin our study by analyzing the primary constraints. The canonical\nmomenta corresponding to basic dynamical variables\n$(b^i{_\\mu},A^{ij}{}_\\m)$ are $(\\pi_i{^\\m},\\Pi_{ij}{^\\m})$; they are given\nby\n\\be\n\\pi_i{^\\m}:=\\frac{\\pd\\tcL}{\\pd(\\pd_0 b^i{_\\m})}\n =b\\cH_i{}^{0\\m}\\, , \\qquad\n\\Pi_{ij}{^\\m}:=\\frac{\\pd\\tcL}{\\pd(\\pd_0 A^{ij}{}_\\m)}\n =b\\cH_{ij}{}^{0\\m}\\, . \\nn\n\\ee\nSince the torsion and the curvature do not involve the velocities\n$\\pd_0b^i{_0}$ and $\\pd_0 A^{ij}{_0}$, one obtains the so-called ``sure\"\nprimary constraints\n\\be\n\\pi_i{^0}\\approx 0\\, ,\\qquad \\Pi_{ij}{^0}\\approx 0\\, ,\n\\ee\nwhich are always present, independently of the values of coupling\nconstants. If the Lagrangian \\eq{2.1} is singular with respect to some of\nthe remaining velocities $\\pd_0 b^i{_\\a}$ and $\\pd_0 A^{ij}{_\\a}$, one\nobtains further primary constraints. The existence of these primary\n``if-constraints\" (ICs) is determined by the critical values of the\ncoupling constants.\n\n\\prg{The torsion sector.} The gravitational Lagrangian \\eq{2.1} depends on\nthe time derivative $\\pd_0 b^i{_\\a}$ only through the torsion tensor,\nappearing in $\\cL_{T^2}$. It is convenient to decompose $T_{ijk}$ into\nthe parallel and orthogonal components with respect to the spatial\nhypersurface $\\S$ (see Appendix A),\n$$\nT_{ijk}=T_{i\\bj\\bk}+2T_{i[\\bj\\perp}n_{k]}=\\fT_{ijk}+\\cT_{ijk}\\,,\n$$\nwhere $\\fT_{ijk}:=T_{i\\bj\\bk}$ does not depend on velocities and the\nunphysical variables $(b^i{_0},A^{ij}{_0})$, and $n_k$ is the normal to\n$\\S$. 
Now, by introducing the parallel gravitational momentum\n$\\hpi_i{^\\bk}=\\pi_i{^\\a}b^k{_\\a}$ ($\\hpi_i{^\\bk}n_k=0$), one obtains\n\\bsubeq\\lab{4.2}\n\\bea\n\\hpi_{i\\bk}&=&J\\cH_{i\\perp\\bk}(T)\\, , \\lab{4.2a}\n\\eea\nwhere $J:=\\det(b^\\bi{_\\a})$, and\n\\be\n\\cH_{i\\perp\\bk}=2\\left[2\\a_1T_{i\\perp\\bk}+\\a_2(T_{\\bk\\perp i}\n -T_{\\perp\\bk i})+\\a_3(n_iV_\\bk-\\eta_{i\\bk}V_\\perp)\\right]\\, .\\nn\n\\ee\nThe linearity of $\\cH_{ijk}(T)$ in the torsion tensor allows us to rewrite\n\\eq{4.2a} in the form\n\\be\n\\phi_{i\\bk}:=\\frac{\\hpi_{i\\bk}}{J}-\\cH_{i\\perp\\bk}(\\fT)\n =\\cH_{i\\perp\\bk}(\\cT) \\, ,\n\\ee\n\\esubeq\nwhere the ``velocities\" $T_{i\\bj\\perp}$ appear only on the right-hand\nside. This system of equations can be decomposed into irreducible parts\nwith respect to the group of two-dimensional rotations in $\\S$. Going over\nto the parameters $a_1,a_2,a_3$, one obtains:\n\\bsubeq\\lab{4.3}\n\\bea\n&&\\phi_{\\perp\\bk}\\equiv\\frac{\\hpi_{\\perp\\bk}}J-(a_2-a_1)T^\\bm{}_{\\bm\\bk}\n =(a_1+a_2)T_{\\perp\\perp\\bk} \\,, \\lab{4.3a}\\\\\n&&\\irr{S}{\\phi}\\equiv\\frac{{}^S\\hpi}J\n =-2a_2T^\\bm{}_{\\bm\\perp}\\,, \\\\\n&&\\irr{A}{\\phi}_{\\bi\\bk}\\equiv\\frac{{}^A\\hpi_{\\bi\\bk}}J\n -\\frac{2}{3}(a_1-a_3)T_{\\perp\\bi\\bk}=\n -\\frac{2}{3}(a_1+2a_3)T_{[\\bi\\bk]\\perp}\\,, \\\\\n&&\\irr{T}{\\phi}_{\\bi\\bk}\\equiv\\frac{{}^T\\hpi_{\\bi\\bk}}J\n = -2a_1\\irr{T}{T}_{\\bi\\bk\\perp}\\, ,\n\\eea\n\\esubeq\nwhere $\\irr{S}{\\phi}$, $\\irr{A}{\\phi}_{\\bi\\bk}$ and\n$\\irr{T}{\\phi}_{\\bi\\bk}$ are the trace (scalar), antisymmetric and\ntraceless-symmetric parts of $\\phi_{\\bi\\bk}$ (Appendix A).\n\nIf the critical parameter combinations appearing on the right-hand sides\nof Eqs. \\eq{4.3} vanish, the corresponding expressions $\\phi_K$ become\nadditional primary constraints, the primary ICs. After a suitable\nreordering, the result of the analysis is summarized as follows:\n\\bitem\n\\item[--] For $a_2=0$, $a_1+2a_3=0$, $a_1+a_2=0$ and\/or $a_1=0$,\nthe expressions $\\irr{S}{\\phi}$, $\\irr{A}{\\phi}_{\\bi\\bk}$,\n$\\phi_{\\perp\\bk}$ and\/or $\\irr{T}{\\phi}_{\\bi\\bk}$ become primary ICs\n(see Table 1 below).\n\\eitem\n\n\\prg{The curvature sector.} In order to examine how the gravitational\nLagrangian depends on the velocities $\\pd_0 A^{ij}{_\\a}$, we start with\nthe following decomposition of the curvature tensor:\n$$\nR_{ijmn}=R_{ij\\bm\\bn}+2R_{ij[\\bm\\perp}n_{n]}=\\fR_{ijmn}+\\cR_{ijmn}\\,,\n$$\nwhere $\\fR_{ijmn}:=R_{ij\\bm\\bn}$ does not depend on the ``velocities\"\n$R_{ij\\perp\\bk}$ and the unphysical variables. The parallel\ngravitational momentum $\\hPi_{ij}{^\\bk}=:\\Pi_{ij}{^\\a}b^k{_\\a}$\n($\\hPi_{ij}{^\\bk}n_k=0$) is given as\n\\bsubeq\\lab{4.4}\n\\bea\n\\hPi_{ij\\bk}&=&J\\cH_{ij\\perp\\bk}(R)\\, , \\lab{4.4a}\n\\eea\nwhere\n\\bea\n\\cH_{ij\\perp\\bk}&=&-4an_{[i}\\eta_{j]\\bk}\n +4n_{[i}\\cH_{j]\\bk}-4\\eta_{[i\\bk}\\cH_{j]\\perp} \\nn\\\\\n &=&4n_{[i}\\eta_{j]\\bk}\\left(-a+2\\b_3R\\right) \\nn\\\\\n &&+4\\b_1\\left(n_{[i}R_{j]\\bk}-\\eta_{[i\\bk}R_{j]\\perp}\\right)\n +4\\b_2\\left(n_{[i}R_{\\bk j]}-\\eta_{[i\\bk}R_{\\perp j]}\\right)\\,.\\nn\n\\eea\nSince the ``velocities\" $R_{ij\\perp\\bk}$ are contained only in $\\cR$, we\nrewrite this equation as\n\\be\n\\Phi_{ij\\bk}:=\\frac{\\hPi_{ij\\bk}}J+4an_{[i}\\eta_{j]\\bk}\n -\\cH'_{ij\\perp\\bk}(\\fR)=\\cH'_{ij\\perp\\bk}(\\cR)\\, . 
\\lab{4.4b}\n\\ee\n\\esubeq\nThe components of a tensor $X_{\\perp\\bi\\bj}$ can be decomposed into the\ntrace, antisymmetric and symmetric-traceless piece (Appendix A). Such a\ndecomposition of \\eq{4.4b} yields:\n\\bsubeq\\lab{4.5}\n\\bea\n&&\\irr{S}{\\Phi}_\\perp\\equiv\\frac{\\irr{S}{\\hPi}_\\perp}{J}+4a\n -\\frac{2}{3}(b_6-b_4)R^{\\bk\\bn}{}_{\\bk\\bn}\n =\\frac{2}{3}(b_4+2b_6)R^\\bk{}_{\\perp\\bk\\perp}\\,, \\lab{4.5a}\\\\\n&&\\irr{A}{\\Phi}_{\\perp\\bi\\bj}\\equiv\n \\frac{\\irr{A}{\\hPi}_{\\perp\\bi\\bj}}J+2b_5R^\\bk{}_{[\\bi\\bj]\\bk}\n =2b_5R_{[\\bi\\perp\\bj]\\perp}\\,,\\\\\n&&\\irr{T}{\\Phi}_{\\perp\\bi\\bj}\\equiv\\frac{\\irr{T}{\\hPi}_{\\perp\\bi\\bj}}{J}\n -b_4\\left(2R_{(\\bi\\bk\\bj)}{}^\\bk\n -\\eta_{\\bi\\bj}R^{\\bm\\bn}{}_{\\bm\\bn}\\right)\n =b_4\\left(2R_{(\\bi\\perp\\bj)\\perp}\n -\\eta_{\\bi\\bj}R^\\bk{}_{\\perp\\bk\\perp}\\right)\\, .\n\\eea\nFor a tensor $X_{\\bi\\bj\\bk}=-X_{\\bj\\bi\\bk}$, the pseudoscalar\n$(\\ve^{\\bi\\bj\\bk}X_{\\bi\\bj\\bk})$ and the symmetric-traceless piece\n$(X_{\\bi(\\bj\\bk)}-\\text{traces})$ identically vanish. Hence, Eq. \\eq{4.4b}\nimplies one more relation:\n\\be\n\\irr{V}{\\Phi}^\\bi\\equiv\n \\frac{\\irr{V}{\\hPi}^\\bi}J-(b_4-b_5)R_{\\perp\\bk}{}^{\\bi\\bk}\n =(b_4+b_5)R^{\\bi\\bk}{}_{\\perp\\bk}\\, ,\n\\ee\n\\esubeq\nwhere $\\irr{V}{X}^\\bi=X^{\\bi\\bj}{_\\bj}$ (Appendix A).\n\n
Thus, when the parameters appearing on the right-hand sides of \\eq{4.5}\nvanish, we have the additional primary constraints $\\Phi_K$. Combining\nthese relations with those obtained in the torsion sector, one finds the\ncomplete set of primary ICs, including their spin-parity characteristics\n($J^P$), as shown in Table 1.\n
\\begin{center}\n\\doublerulesep 1.6pt\n\\begin{tabular}{l l l}\n\\multicolumn{3}{c}{Table 1. Primary if-constraints} \\\\\n\\hline\\hline\\rule[-5.5pt]{0pt}{20pt}\nCritical conditions & Primary constraints &$J^P$ \\\\\n\\hline\\rule[-7pt]{0pt}{21pt}\n$a_2=0$ &$\\irr{S}{\\phi}\\approx 0$ & \\\\[-1.2ex]\n~$b_4+2b_6=0$ &$\\irr{S}{\\Phi}_{\\perp}\\approx 0$\n & \\raisebox{1.6ex}{$0^+$} \\\\\n\\hline\\rule[-7pt]{0pt}{21pt}\n$a_1+2a_3=0$ &$\\irr{A}{\\phi}_{\\bi\\bk}\\approx 0$ & \\\\[-1.2ex]\n~$b_5=0$ &$\\irr{A}{\\Phi}_{\\perp\\bi\\bk}\\approx 0$\n & \\raisebox{1.6ex}{$0^-$} \\\\\n\\hline\\rule[-7pt]{0pt}{21pt}\n$a_1+a_2=0$ &${\\phi}_{\\perp\\bk}\\approx 0$ & \\\\[-1.2ex]\n~$b_4+b_5=0$ &$\\irr{V}{\\Phi}_\\bk\\approx 0$\n & \\raisebox{1.6ex}{$1$} \\\\\n\\hline\\rule[-7pt]{0pt}{21pt}\n$a_1=0$ &$\\irr{T}{\\phi}_{\\bi\\bk}\\approx 0$ & \\\\[-1.2ex]\n~$b_4=0$ &$\\irr{T}{\\Phi}_{\\perp\\bi\\bk}\\approx 0$\n & \\raisebox{1.6ex}{$2$} \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n
This classification has a noteworthy interpretation: whenever a pair of\nthe ICs with specific $J^P$ is absent, the corresponding dynamical mode is\nliberated and becomes a \\emph{physical degree of freedom} (DoF). Thus, for\n$a_2(b_4+2b_6)\\ne 0$, the spin-$0^+$ ICs are absent, and the related DoF\nbecomes physical. Similarly, $(a_1+2a_3)b_5\\ne 0$ implies that the\nspin-$0^-$ DoF becomes physical. The results obtained here refer to the\nfull nonlinear theory; possible differences with respect to the\nperturbative analysis (Section 3) will be discussed in Sections 5 and 6.\n\n\\subsection{General form of the Hamiltonian}\n\nOnce we know the complete set of the primary ICs, we can construct first\nthe canonical and then the total Hamiltonian. 
Being interested only in the\ngravitational degrees of freedom, we disregard the matter contribution.\n\n\\prg{Canonical Hamiltonian.} In the absence of matter, the canonical\nHamiltonian (density) is defined by\n$$\n\\cH_\\can=\\pi_i{^\\a}\\dot b^i{_\\a}\n +\\frac{1}{2}\\Pi_{ij}{^\\a}\\dot A^{ij}{_\\a}-b\\cL_G\\, .\n$$\nUsing the lapse and shift functions $N$ and $N^\\a$, defined in Appendix A,\none can rewrite $\\cH_\\can$ in the Dirac--ADM form \\cite{x4,x20}:\n\\bsubeq\\lab{4.6}\n\\be\n\\cH_\\can=N\\cH_\\perp+N^\\a\\cH_\\a\n -\\frac{1}{2}A^{ij}{_0}\\cH_{ij}+\\pd_\\a D^\\a\\,,\n\\ee\nwhere\n\\bea\n&&\\cH_\\perp=\\hpi_i{^\\bj}T^i{}_{\\perp\\bj}\n +\\frac{1}{2}\\hPi_{ij}{}^\\bk R^{ij}{}_{\\perp\\bk}-J\\cL_G\n -n^i\\nab_\\a\\pi_i{^\\a}\\,, \\nn\\\\\n&&\\cH_\\a=\\pi_i{^\\b}T^i{}_{\\a\\b}+\\frac 12\\Pi_{ij}{}^\\b R^{ij}{}_{\\a\\b}\n -b^i{_\\a}\\nab_\\b\\pi_i{^\\b}\\,, \\nn\\\\\n&&\\cH_{ij}=2\\pi_{[i}{^\\a}b_{j]\\a}+\\nab_\\a\\Pi_{ij}{}^\\a\\,,\\nn\\\\\n&&D^\\a=b^i{_\\a}\\pi_i{^\\a}+\\frac 12\\Pi_{ij}{^\\a} A^{ij}{_\\a}\\,.\n\\eea\n\\esubeq\n\nThe canonical Hamiltonian is linear in the unphysical variables\n$(b^i{_0},A^{ij}{_0})$, and $\\cH_\\perp$ is the only dynamical part of\n$\\cH_\\can$. The ``velocities\" $T^i{}_{\\perp\\bk}, R^{ij}{}_{\\perp\\bk}$\nappearing in $\\cH_\\perp$ can be expressed in terms of the phase-space\nvariables, using Eqs. \\eq{4.3} and \\eq{4.5}. Explicit calculation is\nsimplified by separating the torsion and the curvature contributions in\n$\\cH_\\perp$:\n\\bea\n&&\\cH_\\perp=2\\L_0J+\\cH_\\perp^T+\\cH_{\\perp}^R\\, , \\nn\\\\\n&&\\cH^T_\\perp:=\\hpi^{i\\bj}T_{i\\perp\\bj}-J\\cL_{T^2}\n -n^i\\nab_\\a\\pi_i{^\\a}\\,, \\nn\\\\\n&&\\cH^R_\\perp:=\\frac{1}{2}\\hPi^{ij\\bk}R_{ij\\perp\\bk}\n -J\\cL_{R^2}+aJ\\fR\\, .\n\\eea\nThe torsion piece of $\\cH_\\perp$ turns out to have the form (Appendix\nA)\n\\bsubeq\\lab{4.8}\n\\bea\n\\cH_{\\perp}^T&=&\\frac{1}{2}J\\phi^2-J\\cL_{T^2}(\\fT)\n -n^i\\nab_\\a\\pi_i{^\\a}\\, , \\\\\n\\phi^2&:=&\\frac{\\l(a_1+a_2)}{a_1+a_2}(\\phi_{\\perp\\bi})^2\n +\\frac{\\l(a_2)}{2a_2}(\\irr{S}{\\phi})^2 \\nn\\\\\n&&+\\frac{3}{2}\\frac{\\l(a_1+2a_3)}{(a_1+2a_3)}(\\irr{A}{\\phi}_{\\bi\\bj})^2\n +\\frac{\\l(a_1)}{2a_1}(\\irr{T}{\\phi}_{\\bi\\bj})^2\\, .\n\\eea\n\\esubeq\nwhere $\\l(x)$ is the singular function\n$$\n\\dis\\frac{\\l(x)}{x}=\\left\\{\n\\ba{cc}\n\\dis\\frac{1}{x}\\,, & x\\neq 0 \\\\[9pt]\n0\\,,& x=0\n\\ea\\right.\n$$\nwhich takes care of the conditions under which ICs become true\nconstraints. Similar calculations for the curvature part yields\n\\bsubeq\\lab{4.9}\n\\bea\n\\cH_{\\perp}^R&=&\\frac{1}{4}J\\Phi^2-J\\cL_{R^2}(\\fR)+aJ\\fR\\,, \\\\\n\\Phi^2&:=&\\frac{\\l(b_5)}{b_5}(\\irr{A}{\\Phi}_{\\perp\\bj\\bk})^2\n +\\frac{\\l(b_4)}{b_4}(\\irr{T}{\\Phi}_{\\perp\\bj\\bk})^2 \\nn\\\\\n &&+\\frac{3}{2}\\frac{\\l(b_4+2b_6)}{b_4+2b_6}(\\irr{S}{\\Phi}_\\perp)^2\n +2\\frac{\\l(b_4+b_5)}{b_4+b_5}(\\irr{V}{\\Phi}_\\bi)^2\\, .\n\\eea\n\\esubeq\n\n\\prg{Total Hamiltonian.} The total Hamiltonian is defined by the\nexpression\n\\bsubeq\n\\be\n\\cH_{\\tot}=\\cH_c+u^k{_0}\\phi_k{^0}+\\frac{1}{2}u^{ij}\\Phi_{ij}{^0}\n +(u\\cdot\\phi)+(v\\cdot\\Phi)\\, ,\n\\ee\nwhere $u$'s and $v$'s are arbitrary multipliers and\n$(u\\cdot\\phi)+(v\\cdot\\Phi)$ denotes the contribution of all the primary\nICs. Formally, the existence of ICs is regulated by the form of the\nrelated multipliers; for instance, $u_{\\perp\\bk}$ is given as\n$u_{\\perp\\bk}:=[1-\\l(a_1+a_2)]u'_{\\perp\\bk}$, and so on. 
Using the\nirreducible decomposition technique, we find:\n\\bea\n&&(u\\cdot\\phi):=u^{\\perp\\bk}\\phi_{\\perp\\bk}\n +\\irr{T}{u}^{\\bi\\bk}\\,\\irr{T}{\\phi}_{i\\bk}\n +\\irr{A}{u}^{\\bi\\bk}\\,\\irr{A}{\\phi}_{i\\bk}\n +\\frac{1}{2}\\irr{S}{u}\\,\\irr{S}{\\phi}\\, , \\nn\\\\\n&&(v\\cdot\\Phi):=\n \\irr{T}{v}^{\\perp\\bi\\bk}\\,\\irr{T}{\\Phi}_{\\perp\\bi\\bk}\n +\\irr{A}{v}^{\\perp\\bi\\bk}\\,\\irr{A}{\\Phi}_{\\perp\\bi\\bk}\n +\\frac{1}{2}\\irr{S}{v}^\\perp\\,\\irr{S}{\\Phi}_\\perp\n +\\irr{V}{v}^\\bk\\,\\irr{V}{\\Phi}_\\bk\\, .\n\\eea\n\\esubeq\n\n\\prg{Consistency conditions.} Having found the form of the total\nHamiltonian, we can now apply Dirac's consistency algorithm to the primary\nconstraints, $\\dot\\phi_K=\\{\\phi_K,H_\\tot\\}\\approx 0$, where $H_\\tot=\\int\nd^3 x\\cH_\\tot$ and $\\{X,Y\\}$ is the Poisson bracket (PB) between $X$ and\n$Y$; then, the procedure continues with the secondary constraints, and so\non \\cite{x20}. In what follows, our attention will be focused on the\nscalar sector, with $J^P=0^+$ or $0^-$ modes.\n\n\\section{Spin-\\mb{0^+} sector}\n\\setcounter{equation}{0}\n\nAs one can see from Table 1, the absence of two spin-$0^+$ constraints,\n$\\irr{S}{\\phi}$ and $\\irr{S}{\\Phi}_\\perp$, is ensured by the condition\n$a_2(b_4+2b_6)\\ne 0$, whereby the spin-$0^+$ degree of freedom becomes\nphysical. To study the dynamical content of this sector, we adopt the\nrelaxed conditions\n\\be\na_2,b_6\\ne 0\\, ,\\qquad a_1=a_3=b_4=b_5=0\\, , \\lab{5.1}\n\\ee\nwhich define the Lagrangian $\\cL^+_G$ as in \\eq{3.4b}.\n\n\\subsection{Hamiltonian and constraints}\n\n\\prg{Primary constraints.} In the spin-$0^+$ sector \\eq{5.1}, general\nconsiderations of the previous section lead to the following conclusions:\nthe set of primary constraints is given by\n\\be\\lab{5.2}\n\\ba{ll}\n\\pi_i{^0}\\approx 0\\,,\\quad & \\Pi_{ij}{^0}\\approx 0\\,, \\\\[4pt]\n\\irr{A}{\\phi}_{\\bi\\bj}\n :=\\dis\\frac{\\irr{A}{\\hpi}_{\\bi\\bj}}{J}\\approx 0\\,, &\n \\irr{T}{\\phi}_{\\bi\\bj}\n :=\\dis\\frac{\\irr{T}{\\hpi}_{\\bi\\bj}}{J}\\approx 0\\,,\\\\[6pt]\n\\irr{A}{\\Phi}_{\\perp\\bi\\bj}\n :=\\dis\\frac{\\irr{A}{\\hPi}_{\\perp\\bi\\bj}}{J}\\approx 0\\,,\\qquad &\n \\irr{T}{\\Phi}_{\\perp\\bi\\bj}\n :=\\dis\\frac{\\irr{T}{\\hPi}_{\\perp\\bi\\bj}}{J}\\approx 0\\, , \\\\[6pt]\n\\irr{V}{\\Phi}_\\bi:=\\dis\\frac{\\irr{V}{\\hPi}_{\\bi}}{J}\\approx 0\\,,&\n\\ea\n\\ee\nthe dynamical part of the canonical Hamiltonian has the form\n\\bea\n\\cH_\\perp&=&J\\left[\\frac{1}{2a_2}(\\phi_{\\perp\\bk})^2\n +\\frac{1}{4a_2}(\\irr{S}{\\phi})^2\n +\\frac{3}{16b_6}{(\\irr{S}{\\Phi}_{\\perp}})^2\\right] \\nn\\\\\n &&-J\\cL_G^+(\\fT,\\fR)-n_i\\nab_\\a\\pi^{i\\a}\\, , \\lab{5.3}\n\\eea\nwhere $\\phi_{\\perp\\bk},\\irr{S}{\\phi}$, and $\\irr{S}{\\Phi}_\\perp$ are the\n``generalized\" momentum variables defined in \\eq{4.3} and \\eq{4.5}, and\nthe total Hamiltonian reads\n\\be\n\\cH_\\tot=\\cH_\\can+\\irr{A}{u}^{\\bi\\bj}{}\\irr{A}{\\phi}_{\\bi\\bj}\n +\\irr{T}{u}^{\\bi\\bj}\\irr{T}{\\phi}_{\\bi\\bj}\n +\\irr{A}{v}^{\\bi\\bj}\\irr{A}{\\Phi}_{\\bi\\bj}\n +\\irr{T}{v}^{\\bi\\bj}\\irr{T}{\\Phi}_{\\bi\\bj}\n +\\irr{V}{v}^\\bi\\irr{V}{\\Phi}_{\\bi}\\, .\n\\ee\n\n\\prg{Secondary constraints.} The consistency conditions of the sure\nprimary constraints $\\pi_i{^0}$ and $\\pi_{ij}{^0}$ produce the secondary\nconstraints\n\\bsubeq\n\\be\n\\cH_\\perp\\approx 0\\, ,\\qquad \\cH_\\a\\approx 0\\, ,\n\\qquad \\cH_{ij}\\approx 0\\, , \\lab{5.5a}\n\\ee\nwhere\n\\bea\n\\cH_\\a&\\approx& \\hpi_\\perp{^\\bi} T_{\\perp\\a\\bi}\n 
-\\frac{1}{2}\\irr{S}\\hpi\\fV_\\a+\\frac 12\\irr{S}\\hPi_\\perp R_{\\perp\\a}\n -b^i{_\\a}\\nab_\\b \\pi_i{^\\b}\\, , \\nn\\\\\n\\cH_{\\bi\\bk}&\\approx&\\frac{\\irr{A}{\\hpi}_{\\bi\\bk}}{J}\n +\\frac{\\irr{S}{\\hPi}_\\perp}{2J}T_{\\perp\\bi\\bk}\\,, \\nn\\\\\n\\cH_{\\perp\\bk}&\\approx&\n \\frac{\\hpi_{\\perp\\bk}}{J}-\\frac{\\irr{S}{\\hPi}_\\perp}{2J}\\fV_\\bk\n +\\nab_\\bk\\frac{\\irr{S}{\\hPi}_\\perp}{2J}\\,. \\lab{5.5b}\n\\eea\n\\esubeq\n\nGoing over to the (eight) primary ICs, $X_M=(\\irr{A}{\\phi}, \\irr{T}{\\phi},\n\\irr{A}{\\Phi},\\irr{T}{\\Phi},\\irr{V}{\\Phi})$, we note that the only\nnonvanishing PBs among them are\n\\bea\n&&\\{\\irr{A}{\\Phi}_{\\perp\\bi\\bj},\\irr{A}{\\phi}^{\\bm\\bn}\\}\n \\approx-\\frac{\\irr{S}{\\hPi}_\\perp}{2J^2}\n \\d_i{}^{[\\bm}\\d_j{}^{\\bn]}\\d\\,, \\nn\\\\\n&&\\{\\irr{T}{\\Phi}_{\\perp\\bi\\bj},\\irr{T}{\\phi}^{\\bm\\bn}\\}\n \\approx\\frac{\\irr{S}{\\hPi}_\\perp}{2J^2}\n \\d_{(\\bi}{}^{(\\bn}\\d_{\\bj)}{}^{\\bm)}\\d\\,. \\lab{5.6}\n\\eea\nAs long as $\\irr{S}{\\hPi}_\\perp\\ne 0$, the constraints\n$(\\irr{A}{\\phi},\\irr{T}{\\phi},\\irr{A}{\\Phi},\\irr{T}{\\Phi})$ are second\nclass (SC) \\cite{x4,x20}, and their consistency conditions fix the values\nof the corresponding multipliers $(\\irr{A}{u},\\irr{T}{u}, \\irr{A}{v},\n\\irr{T}{v})$ in $\\cH_\\tot$. On the other hand, $\\irr{V}{\\Phi}$ commutes\nwith all the other primary constraints, but not with its own secondary\npair $\\chi_\\bi=\\{\\irr{V}{\\Phi}_\\bi,H_\\tot\\}$, see \\cite{x20}. Using\n$\\chi_\\bi\\approx J^{-1}\\{\\irr{V}{\\hPi}_\\bi,H_{\\tot}\\}$ and\n\\bea\n&&\\{\\irr{V}{\\hPi}_\\bi,\\cH_{mn}\\}\\approx 0\\, ,\\qquad\n \\{\\irr{V}{\\hPi}_\\bi,\\cH_\\a\\}\\approx 0\\, , \\nn\\\\\n&&\\{\\irr{V}{\\hPi}_\\bi,\\cH_\\perp\\}\\approx\n J\\left[\\frac{\\phi_{\\perp\\bi}}{a_2}\\left(\n \\frac{a_2}{2}-\\frac{\\irr{S}{\\hPi}_\\perp}{4J}\\right)\n +\\frac{a_2}{2}\\fV_\\bi\n +\\nab_\\bi\\frac{\\irr{S}{\\hPi}_\\perp}{4J}\\right]\\, , \\nn\n\\eea\none ends up with\n\\be\n\\chi_\\bi:=\\frac{\\phi_{\\perp\\bi}}{a_2}\\left(\n \\frac{\\irr{S}{\\hPi}_\\perp}{4J}-\\frac{a_2}{2}\\right)\n -\\frac{a_2}{2}\\fV_\\bi-\\nab_\\bi\\frac{\\irr{S}{\\hPi}}{4J}\\,.\\lab{5.7}\n\\ee\nThe only nonvanishing PB involving $\\chi_\\bi$ is\n\\bea\n\\{\\chi_\\bi,\\irr{V}{\\Phi}_\\bk\\}=\\frac{2}{a_2 J}\\eta_{\\bi\\bk}\n \\frac{\\irr{S}{\\hPi}_\\perp}{4J}\\left(\n \\frac{\\irr{S}{\\hPi}_\\perp}{4J}-a_2\\right)\\d\\, . \\lab{5.8}\n\\eea\nThus, for $\\irr{S}{\\hPi}_\\perp(\\irr{S}{\\hPi}_\\perp-4Ja_2)\\ne 0$, both\n$\\chi_\\bi$ and $\\irr{V}{\\Phi}_\\bk$ are SC. Consequently, the consistency\ncondition of $\\chi_\\bi$ determines the multiplier $\\irr{V}{v}^\\bi$, which\ncompletes the consistency algorithm.\n\nIf the kinetic energy density in the Hamiltonian \\eq{5.3} is to be\npositive definite (``no ghosts\"), the coefficients of $(\\irr{S}{\\phi})^2$\nand $(\\irr{S}{\\Phi}_\\perp)^2$ should be positive:\n\\be\na_2>0\\, ,\\qquad b_6>0\\, .\n\\ee\nOn the other hand, $(\\phi_{\\perp\\bk})^2$ gives a negative definite\ncontribution, but it is an interaction term, as can be seen from \\eq{4.3a}\nand \\eq{5.5b}.\n\n\\subsection{Constraint bifurcation}\n\nIn the previous discussion, we identified the conditions for which all the\nICs, $X'_M=\\left(X_M,\\chi\\right)$, are SC. 
To calculate the determinant of\nthe $10\\times 10$ matrix $\\D^+_{MN}=\\{X'_M,X'_N\\}$,\n$$\n\\D^+\\approx\\left|\\begin{array}{cccccc}\n 0 & 0 & \\{\\irr{A}{\\phi},\\irr{A}{\\Phi}\\} & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & \\{\\irr{T}{\\phi},\\irr{T}{\\Phi}\\} & 0 & 0 \\\\\n -\\{\\irr{A}{\\phi},\\irr{A}{\\Phi}\\} & 0 & 0 & 0 & 0 & 0 \\\\\n 0 & -\\{\\irr{T}{\\phi},\\irr{T}{\\Phi}\\} & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & \\{\\irr{V}{\\phi},\\chi\\} \\\\\n 0 & 0 & 0 & 0 & -\\{\\irr{V}{\\phi},\\chi\\} & 0 \\\\\n \\end{array} \\right|\n$$\nwe use \\eq{5.6} and \\eq{5.8}, which leads to\n\\be\n\\D^+\\sim \\left(\\frac{\\irr{S}{\\Pi}_\\perp}{4J}\\right)^{10}\n \\left(\\frac{\\irr{S}{\\Pi}_\\perp}{4J}-a_2\\right)^4\\, .\n\\ee\nIntroducing a convenient notation\n\\be\nW:=\\frac{\\irr{S}{\\Pi}_\\perp}{4J}\\, ,\n\\ee\nwe see that $\\D^+$ can vanish only on a set (of spacetime points) of\nmeasure zero, defined by $W=0$ or $W-a_2=0$. In other words, the condition\n\\be\nW(W-a_2) \\ne 0 \\lab{5.12}\n\\ee\nis fulfilled \\emph{almost everywhere} (everywhere except on a set of\nmeasure zero). Thus, our previous discussion can be summarized by saying\nthat all of the ICs are SC almost everywhere; the related (generic)\nclassification of constraints is shown in Table 2.\n\\begin{center}\n\\doublerulesep 1.6pt\n\\begin{tabular}{lll}\n\\multicolumn{3}{c}{Table 2. Generic constraints\n in the $0^+$ sector} \\\\\n \\hline\\hline\n\\rule{0pt}{12pt}\n& First class \\phantom{x} & ~~Second class \\phantom{x} \\\\\n \\hline\n\\rule[-1pt]{0pt}{16pt}\nPrimary\n ~& $\\pi_i{^0}$, $\\Pi_{ij}{^0}$ & ~~$X_M$ \\\\\n \\hline\n\\rule[-1pt]{0pt}{19pt}\nSecondary\\phantom{x}\n ~& $\\cH'_{\\perp}$, $\\cH'_\\a$, $\\cH'_{ij}$ & ~~$\\chi_\\bi$\\\\\n \\hline\\hline\n\\end{tabular}\n\\end{center}\nThe Hamiltonian constraints $\\cH'_\\perp,\\cH'_\\a$ and $\\cH'_{ij}$ are first\nclass (FC) \\cite{x4,x20}; they are obtained from \\eq{5.3} and \\eq{5.5b} by\nadding the contributions containing the determined multipliers. With\n$N=18$, $N_1=12$ and $N_2=10$, the dimension of the phase space is given\nas $N^*=2N-2N_1-N_2=2$. Thus, the theory exhibits a single Lagrangian DoF\nalmost everywhere.\n\nHowever, the determinant $\\D^+$, being a field-dependent object, may\nvanish in some regions of spacetime, changing thereby the number and\/or\ntype of constraints and the number of physical degrees of freedom, as\ncompared to the generic situation described in Table 2. This effect, known\nas the phenomenon of \\emph{constraint bifurcation}, can be fully\nunderstood by analyzing the dynamical behavior of the two factors in\n\\eq{5.12}. 
Although the complete analysis can be carried out in the\ncanonical formalism, we base our arguments on the Lagrangian formalism, in\norder to simplify the exposition (see Appendix B).\n\n\\prg{1.} Starting with the second factor,\n\\be\n\\Om:=W-a_2\\approx -\\left(a-\\frac{1}{6}b_6R+a_2\\right)\\,, \\lab{5.13}\n\\ee\nwhere we used \\eq{4.5a} to clarify the geometric interpretation, one can\nprove the relation\n\\be\n-\\Om V_k +2\\pd_k\\Om\\approx 0\\, , \\lab{5.14}\n\\ee\nwhich implies that the behavior of $\\Om$ is limited to the following two\noptions (Appendix B):\n\\bitem\n\\item[(a)] either $\\Om(x)$ vanishes globally, on the whole spacetime\n manifold,\\vspace{-8pt}\n\\item[(b)] or it does not vanish anywhere.\n\\eitem\nWhich of these two options is realized depends upon the initial conditions\nfor $\\Om$; choosing them in accordance with (b) extends the generic\nbehavior of $\\Om$, $\\Om\\ne 0$ almost everywhere, to the whole spacetime.\nThis mechanism is the same as the one observed in the spin-$0^+$ sector of\nthe 4-dimensional PGT; compare \\eq{5.14} with equation (4.20) in\n\\cite{x16}$_1$.\n\n\\prg{2.} We now focus our attention to the first factor in \\eq{5.12},\n\\be\nW\\approx-\\left(a-\\frac{1}{6}b_6R\\right)\\, .\n\\ee\nIt is interesting that a solution for the $W$-bifurcation ($W=0$) can be\nfound by relying on the solution for the $\\Om$-bifurcation, which is based\non choosing $\\Om\\ne 0$ on the initial spatial surface $\\S$. Indeed, the\nchoice $\\Om>0$ on $\\S$ implies\n\\bsubeq\n\\be\n\\Om>0\\quad \\mbox{globally}.\n\\ee\nThen, since $\\Om=W-a_2$ ($a_2$ is positive), we find\n\\be\nW>a_2\\quad \\mbox{globally}.\n\\ee\n\\esubeq\nThus, with $\\Om>0$ and $W>a_2$, the problem of constraint bifurcation simply\ndisappears. Note that geometrically, the condition $W>a_2$ represents a\nrestriction on the Cartan scalar curvature, $b_6R>6(a+a_2)$. An equivalent\nform of this relation is obtained by using the identity $R=\\tR-2\\s$, where\n$\\tR$ is Riemannian scalar curvature.\n\nThus, with a suitable choice of the initial conditions, one can ensure the\ngeneric condition $\\D^+\\ne 0$ to hold \\emph{globally}, so that the\nconstraint structure of the spin-$0^+$ sector is described exactly as in\nTable 2. Any other situation, with $W=0$ or $W-a_2=0$, would not be\nacceptable---it would have a variable constraint structure over the\nspacetime, the property that could not survive the process of linearization.\n\n\\subsection{Stability under linearization}\\label{sec53}\n\nNow, we are going to compare the canonical structure of the full nonlinear\ntheory with its linear approximation around maximally symmetric\nbackground.\n\nIn the linear approximation, the condition of canonical stability\n\\eq{5.12} is to be taken in the lowest order (zeroth) approximation. Using\n$\\bR=-6q$, it reduces to\n\\be\n(a+qb_6)(a+qb_6+a_2)\\neq 0\\, . \\lab{5.17}\n\\ee\nThe three cases displayed in Table 3 define characteristic sectors of the\nlinear regime (see Appendix C).\n\\begin{center}\n\\doublerulesep 1.6pt\n\\begin{tabular}{ccccl}\n\\multicolumn{5}{c}{\n Table 3. 
Canonical stability in the $0^+$ sector}\\\\\n \\hline\\hline\n\\rule{-1pt}{16pt}\n & $a+qb_6$ &~$a+qb_6+a_2$ & DoF & stability \\\\[2pt]\n \\hline\n\\rule[-1pt]{0pt}{16pt}\n(a) & $\\ne 0$ & $\\ne 0$ & 1 & stable \\\\[2pt]\n \\hline\n\\rule[-1pt]{0pt}{16pt}\n(b) & $=0$ & $\\ne 0$ & 0 & unstable \\\\[2pt]\n \\hline\n\\rule[-1pt]{0pt}{16pt}\n(c) & $\\ne 0$ & $=0$ & 1 & stable* \\\\[2pt]\n \\hline\\hline\n\\end{tabular}\n\\end{center}\n\n(a) When the condition \\eq{5.17} is satisfied, the nature of the\nconstraints remains the same as in Table 2, and we have a single\nLagrangian DoF, the massive spin-$0^+$ mode.\n\n(b) Here, all ICs become FC, but only six of them are independent. Thus,\n$N_1 =12 + 6 = 18$, and with $N_2 = 0$, the number of DoF's is zero:\n$N^*=36 -2\\times 18 = 0$.\n\n(c) In this case, $\\tilde\\chi_\\bk$ is not an independent constraint, and\n$\\irr{V}{\\tilde\\Phi}_\\bk$ is FC. As compared to (a), the number and type\nof constraints is changed according to $N_1\\rightarrow N_1+2$,\n$N_2\\rightarrow N_2-4$, but the number of DoF's remains one ($N^*=2$),\ncorresponding to the massless spin-$0^+$ mode.\n\nThe case when both $a+qb_6$ and $a+qb_6+a_2$ vanish is not possible, since\n$a_2\\ne 0$.\n\nTo clarify the case (c), we need a more detailed analysis. Consider first\nthe case (a), in which the constraint $\\tilde\\chi_\\bi$, defined in\n\\eq{C.3}, is replaced by an equivalent expression,\n$\\tilde\\chi{}'_\\bi=\\tilde\\hpi_{\\perp\\bi}\/{\\bar J}$. Then, the pair of SC\nconstraints $(\\irr{V}{\\tilde\\Phi}_\\bk,\\tilde\\chi{}'_\\bk)$, with the\nrelated Dirac brackets, defines the reduced phase space $\\tR(a)$. Next,\nconsider the case (c), where $\\tilde\\chi_\\bi$ does not exist and\n$\\irr{V}{\\tilde\\Phi}_\\bk$ is FC. Here, we can introduce a suitable gauge\ncondition associated to $\\irr{V}{\\tilde\\Phi}_\\bk$, given by\n$\\tilde\\chi''_\\bk=\\tilde\\hpi_{\\perp\\bi}\/{\\bar J}$. The pair\n$(\\irr{V}{\\tilde\\Phi}_\\bk,\\tilde\\chi{}''_\\bk)$ defines the reduced phase\nspace $\\tR(c)$, which coincides with the reduced phase space $\\tR(a)$,\nsubject to the additional condition $a+qb_6+a_2=0$. Thus, the ``massless\"\nnonlinear theory, defined by $a+qb_6+a_2=0$, is essentially (up to a gauge\nfixing) stable under the linearization. 
The star symbol in Table 3\n(stable*) is used to remind us of this gauge fixing condition.\n\nFor the $M_3$ background ($p=q=0$ and $a\\ne 0$), the case (b) is not\npossible.\n\n\\section{Spin-$\\mb{0^-}$ sector}\n\\setcounter{equation}{0}\n\nFor $(a_1+2a_3)b_5\\ne 0$, the constraints\n$\\irr{A}{\\phi}_{\\bi\\bk},\\irr{A}{\\Phi}_{\\perp\\bi\\bk}$ in Table 1 are\nabsent, and the spin-$0^-$ mode becomes a physical degree of freedom.\nHere, we study canonical features of the spin-$0^-$ sector by using the\nspecific conditions\n\\be\na_3,b_5\\ne 0\\, ,\\qquad a_1=a_2=b_4=b_6=0\\, , \\lab{6.1}\n\\ee\nwhich define the Lagrangian $\\cL^-_G$ as in \\eq{3.8b}.\n\n\\subsection{Hamiltonian and constraints}\n\n\\prg{Primary constraints.} Applying the conditions \\eq{6.1} to the general\nconsiderations of Section 4, we find the following set of the primary\n(sure and if-) constraints:\n\\be\\lab{6.2}\n\\ba{ll}\n\\pi_i{^0}\\approx 0\\,,\\quad & \\Pi_{ij}{^0}\\approx 0\\,, \\\\[4pt]\n\\irr{S}{\\phi}:=\\dis\\frac{{}^S\\hpi}J\\approx 0\\, , &\n\\irr{T}{\\phi}_{\\bi\\bj}\n :=\\dis\\frac{\\irr{T}{\\hpi}_{\\bi\\bj}}J\\approx 0\\,, \\\\[7pt]\n\\phi_{\\perp\\bi}:=\\dis\\frac{\\hpi_{\\perp\\bi}}{J}\\approx 0\\,, & \\\\[7pt]\n\\irr{S}{\\Phi}_\\perp\n :=\\dis\\frac{\\irr{S}{\\hPi}_\\perp}{J}+4a\\approx 0\\,,\\qquad &\n\\irr{T}{\\Phi}_{\\perp\\bi\\bj}\n :=\\dis\\frac{\\irr{T}{\\hPi}_{\\perp\\bi\\bj}}{J}\\approx 0\\,.\n\\ea\n\\ee\nThe dynamical part of the canonical Hamiltonian has the form\n\\bea\n\\cH_\\perp&=&J\\left[\\frac{3}{8a_3}(\\irr{A}{\\phi}_{\\bi\\bj})^2\n +\\frac{1}{4b_5}(\\irr{A}{\\Phi}_{\\perp\\bi\\bj})^2\n +\\frac{1}{2b_5}(\\irr{V}{\\Phi}_\\bi)^2\\right] \\nn\\\\\n &&-J\\cL_G^-(\\fT,\\fR)-n_i\\nab_\\a\\pi^{i\\a}\\, ,\n\\eea\nwhere $\\irr{A}{\\phi}_{\\bi\\bj},\\irr{A}{\\Phi}_{\\perp\\bi\\bj}$ and\n$\\irr{V}{\\Phi}_\\bi$ are the ``generalized\" momentum variables defined in\n\\eq{4.3} and \\eq{4.5}, and the total Hamiltonian reads:\n\\be\n\\cH_T=\\cH_c+\\frac{1}{2}\\irr{S}{u}\\irr{S}{\\phi}\n +\\irr{T}{u}^{\\bi\\bj}\\,\\irr{T}{\\phi}_{\\bi\\bj}\n +u^{\\perp\\bi}\\phi_{\\perp\\bi}\n +\\frac{1}{2}\\irr{S}{v}^\\perp\\irr{S}{\\Phi}_{\\perp}\n +\\irr{T}{v}^{\\perp\\bi\\bj}\\,\\irr{T}{\\Phi}_{\\perp\\bi\\bj}\\, .\n\\ee\n\n\\prg{Secondary constraints.} The consistency conditions of the primary\nconstraints $\\pi_i{^0}$ and $\\Pi_{ij}{^0}$ produce the usual secondary\nconstraints:\n\\bsubeq\n\\be\n\\cH_\\perp\\approx 0\\,,\\quad \\cH_\\a\\approx 0\\, ,\n\\qquad \\cH_{ij}\\approx 0\\, , \\lab{6.5a}\n\\ee\nwhere\n\\bea\n&&\\cH_\\a\\approx\\irr{A}\\hpi^{\\bi\\bj}T_{\\bi\\a\\bj}\n +\\irr{A}\\hPi_{\\perp\\bi\\bj}R_{\\perp}{^\\bi}{_\\a}{^\\bj}\n +R^{\\bi\\bj}{}_{\\a\\bj}\\irr{V}\\hPi_{\\bi}\n -2aJR_{\\perp\\a}-b^i{_\\a}\\nab_\\b \\pi_i{^\\b}\\, , \\nn\\\\\n&&\\cH_{\\bi\\bj}\\approx aT_{\\perp\\bi\\bj}\n +\\frac{\\irr{A}{\\hpi}_{\\bi\\bj}}{2J}\n +\\frac{\\irr{V}{\\Pi}_\\bk}{2J}T^\\bk{}_{\\bi\\bj}\n +\\nab_{[\\bi}\\frac{\\irr{V}{\\hPi}_{\\bj]}}{J}\\, , \\nn\\\\\n&&\\cH_{\\perp\\bi}\\approx a\\fV_\\bi\n +\\frac{\\irr{A}{\\hPi}_{\\perp\\bm\\bn}}{2J}T^{\\bm\\bn}{}_\\bi\n +\\frac{\\irr{V}{\\hPi}^\\bm}{2J}T_{\\perp\\bi\\bm}\n +\\frac{1}{2}\\nab_\\bm\\frac{\\irr{A}\\hPi_{\\perp\\bi}{}^\\bm}J\\, .\n\\eea\n\\esubeq\n\nUsing the PB algebra between the primary ICs\n$Y_M=(\\irr{S}{\\phi},\\irr{T}{\\phi},\\phi_{\\perp,\\bk}, \\irr{S}{\\Phi},\n\\irr{T}{\\Phi})$ (Appendix D), one finds that generically, for\n$\\irr{A}{\\hpi}_{\\bi\\bk}\\ne 0$, they are SC; their consistency conditions\nresult in the determination of the 
corresponding multipliers\n$(\\irr{S}{u},\\irr{T}{u},u_{\\perp,\\bk}, \\irr{S}{v},\\irr{T}{v})$. Moreover,\nthe secondary constraints \\eq{6.5a}, corrected by the contributions of the\ndetermined multipliers, are FC, so that their consistency conditions are\ntrivially satisfied. Thus, in the generic case, the consistency algorithm\nis completed at the level of secondary constraints.\n\n
The first two terms in $\\cH_\\perp$, proportional to the squares of\n$\\irr{A}{\\phi}_{\\bi\\bk}$ and $\\irr{A}{\\Phi}_{\\perp\\bi\\bk}$, describe the\ncontribution of the spin-$0^-$ mode to the kinetic energy density, see\nTable 1. This contribution is positive definite for\n\\be\na_3>0\\, ,\\qquad b_5>0\\, .\n\\ee\nAt the same time, the contribution of the third term, the square of\n$\\irr{V}{\\Phi}_\\bk$, becomes negative definite (``ghost\"), which is a\n\\emph{serious problem} for the physical interpretation. As we shall see,\nthis is not the only problem.\n\n\\subsection{Constraint bifurcation}\n\n
Based on the PB algebra of the (eight) primary ICs $Y_M$, we can now\ncalculate the determinant of the $8\\times 8$ matrix\n$\\D^-_{MN}=\\{Y_M,Y_N\\}$ (Appendix D); the result takes the form\n\\be\n\\D^-\\sim \\irr{A}\\hpi_{\\bi\\bj}\\irr{A}\\hpi^{\\bi\\bj}\n \\left(\\frac{4a^2}{J^2}+\\frac{1}{8J^4}\\irr{A}\\hPi_{\\perp\\bm\\bn}\n \\irr{A}\\hPi^{\\perp\\bm\\bn}\\right)^2\\, . \\lab{6.7}\n\\ee\nSince the second factor is always positive definite, $\\D^-$ remains\ndifferent from zero only if\n\\be\n\\irr{A}{\\hpi}_{\\bm\\bn}\\ne 0\\, .\n\\ee\nThis condition holds everywhere except on a set of measure zero, so that\n$\\D^-\\ne 0$ almost everywhere. Thus, generically, the eight primary ICs\nare SC, as shown in Table 4; the primes in $\\cH'_\\perp,\\cH'_\\a$ and\n$\\cH'_{ij}$ denote the presence of corrections induced by the determined\nmultipliers.\n
\\begin{center}\n\\doublerulesep 1.8pt\n\\begin{tabular}{lll}\n\\multicolumn{3}{l}{\\hspace{0pt}Table 4. Generic constraints in\n the $0^-$ sector} \\\\\n \\hline\\hline\n\\rule{0pt}{12pt}\n&~First class \\phantom{x}&~ Second class \\phantom{x} \\\\\n \\hline\n\\rule[-1pt]{0pt}{16pt}\n\\phantom{x}Primary &~$\\pi_i{^0}$, $\\Pi_{ij}{^0}$ &~ $Y_M$ \\\\\n \\hline\n\\rule[-1pt]{0pt}{19pt}\n\\phantom{x}Secondary &~$\\cH'_{\\perp}$, $\\cH'_\\a$, $\\cH'_{ij}$\n &~ \\\\\n \\hline\\hline\n\\end{tabular}\n\\end{center}\n
Using $N=18$, $N_1=12$ and $N_2=8$, we find $N^*=2N-2N_1-N_2=4$.\nSurprisingly, the theory exhibits \\emph{two} Lagrangian DoF: one is the\nexpected spin-$0^-$ mode, and the other is the spin-1 ``ghost\" mode,\nrepresented canonically by $\\irr{V}{\\Phi}_\\bk$.\n\n
In Appendix E, we analyze the nature of the critical condition\n$\\irr{A}{\\hpi}_{\\bm\\bn}=0$. In the region of spacetime where it holds, we\nfind the phenomenon of constraint bifurcation: the number of DoF is\nchanged to zero. Although such a situation is \\emph{canonically unstable\nunder linearization}, it is interesting to examine basic aspects of the\nlinearized theory.\n\n\\subsection{Linearization}\n\n
In the linearized theory, the term $\\irr{A}{\\bar\\hpi}_{\\bj\\bk}$ in the\ndeterminant $\\D^-$ takes the form\n\\be\n\\irr{A}{\\bar\\hpi}_{\\bj\\bk}=-2a_3\\ve_{\\perp\\bj\\bk}p\\, .\n\\ee\nHence, the canonical structure of the linearized theory crucially depends\non the value of the background parameter $p$, as shown in Table 5.\n\n\\begin{center}\n\\doublerulesep 1.6pt\n\\begin{tabular}{cccl}\n\\multicolumn{4}{c}{\n Table 5. 
Canonical instability in the $0^-$ sector}\\\\\n \\hline\\hline\n\\rule{-1pt}{16pt}\n & & DoF & stability \\\\[2pt]\n \\hline\n\\rule[-1pt]{0pt}{16pt}\n$(\\a)$ & $p\\ne 0$ & 2 & stable almost everywhere \\\\[2pt]\n \\hline\n\\rule[-1pt]{0pt}{16pt}\n$(\\b)$ & $p=0$ & 1 & unstable \\\\[2pt]\n \\hline\\hline\n\\end{tabular}\n\\end{center}\n\n
$(\\a)$ For $p\\ne 0$ (Riemann--Cartan background, massless spin-$0^-$\nmode), the determinant $\\bar\\D^-$ is positive definite, all the primary\nICs are SC, as in the generic sector of the full nonlinear theory, and\nconsequently, $N^*=4$. However, this is not true in the critical region\n$\\irr{A}{\\hpi}_{\\bi\\bj}=0$, where $N^*=0$ and the theory is canonically\nunstable.\n\n
$(\\b)$ For $p=0$ (Riemannian background, massive or massless spin-$0^-$\nmode), the situation is changed (Appendix F). First, the determinant\n$\\bar\\D^-$ vanishes, since the primary IC $\\tilde\\phi_{\\perp\\bi}$\n\\emph{commutes} with itself, see \\eq{D.1}. By calculating its consistency\ncondition (which was not needed for $p\\ne 0$), one finds its secondary\npair $\\tilde\\chi_\\bi$. Now, the PB of $\\tilde\\phi_{\\perp\\bi}$ with the\nmodified secondary pair $\\tilde\\chi'_\\bi=\\tilde\\chi_\\bi-\\tilde\\cH_\\bi$\ndoes not vanish. Thus, there are two more SC constraints than in the case\n$(\\a)$, so that $N^*=2$, and we have the canonical instability under\nlinearization.\n\nThus, in both cases $(\\a)$ and $(\\b)$, the theory is canonically unstable.\n\n\\section{Concluding remarks}\n\\setcounter{equation}{0}\n\n
In this paper, we studied the Hamiltonian structure of the general\nparity-invariant model of 3D gravity with propagating torsion, described\nby the eight-parameter PGT Lagrangian \\eq{2.1}. Because of the complexity\nof the problem, we focused our attention on the scalar sector, containing\n$J^P=0^+$ or $0^-$ modes with respect to a maximally symmetric background.\nBy investigating fully nonlinear ``constraint bifurcation\" effects as well\nas the canonical stability under linearization, we were able to identify\nthe set of dynamically acceptable values of parameters for the spin-$0^+$\nsector, as shown in Table 3. On the other hand, the spin-$0^-$ sector is\nfound to be canonically unstable for any choice of parameters, see Table\n5. Transition from an (A)dS to a Minkowski background simplifies the\nresults.\n\nFurther analysis involving higher spin sectors is left for future studies.\n\n\\section*{Acknowledgements}\n\nIt is a pleasure to thank Vladimir Dragovi\\'c for a helpful discussion.\nThis work was supported by the Serbian Science Foundation under Grant No.\n171031.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{The characterization model}\\label{Model}\n\nIn a multiplexing PNR detector, counting the photon number is complicated by several internal effects that distort the measurement statistics\\cite{Dovrat}. The incident photon statistics can however be reconstructed if the distortion effects are well quantified. We consider several detection parameters: efficiency, number of SPD elements (finite detector size), dark count rate, and cross-talk rate. To date, an analytical model covering all of these effects has been lacking. In particular, the combined effects of finite-size with cross-talk are not well known\\cite{DovratSim}. 
Now, we present an analytical model which incorporates all of these effects.\n\n\\subsection{Loss}\\label{sec:loss}\n\nWhen a photon hits the detector, there is a non-zero probability that either the avalanche process will not start or will stop before a detection occurs\\cite{saleh}. This is an intrinsic property of any realistic device, but can also be attributed to inefficient light coupling to the device. In such a scenario the photon is considered lost. The detection efficiency is then defined as the probability for detecting a single photon. We assume the detection efficiency is uniform for all SPD elements and is denoted by $\\eta$. If $n$ photons hit the detector, the probability for $m$ elements to be activated is given by a binomial distribution\\cite{Dovrat}:\n\n\\begin{eqnarray}\\label{loss}\nM^{\\eta}_{loss}(m,n) =\\binom{n}{m}\\eta^m(1-\\eta)^{n-m}\\,,\n\\end{eqnarray}\nwhere $\\binom{n}{m}=\\frac{n!}{m!(n-m)!}$ for $n\\geq m \\geq 0$ or zero otherwise.\nHere we assume each photon hits a different element. This assumption is not valid in general but will be later corrected for by including the effects of finite detector size. \t\n\n \t\n\\subsection{Finite-size}\n\nEach individual element of the multiplexing PNR detector is an SPD. As such, the signal from each element does not depend on the number of photons hitting it. Therefore, if more than one photon hits an element, only one can be detected, causing a non-linear loss of photons and a distortion of the incident photon statistics. The probability for $m$ photons to hit $k$ different elements at an $N$-element detector, is\\cite{Paul96}:\n\n\\begin{eqnarray}\\label{FS}\nM^N_{FS}(k,m) =\\frac{1}{N^m}\\binom{N}{k}k!S(m,k)\\,,\n\\end{eqnarray}\nwhere $ S(m,k) = \\frac{1}{k!}\\sum_{j=0}^k(-1)^{k-j}\\binom{k}{j} j^m $ are known as the Stirling numbers of the second kind\\cite{wolfram}.\n \t\n\\subsection{Dark-counts}\n\nAfter $k$ elements fire due to photon detections, there are still $(N-k)$ elements which are free to be activated due to a dark-count \\--- a false event without a photon hit. This is typically due to thermal electrons. We assume that each element has equal probability $d$ for this event to occur. We define $p$ to be the total number of elements that fire, including those elements which report a dark count. Then $p-k$ is the number of elements that fire due to a dark count event. The probability for $(p-k)$ elements to be activated due to dark-counts where $(N-k)$ elements are available is\n\n\\begin{eqnarray}\\label{DC}\nM^{d}_{DC}(p,k) =\\binom{N-k}{p-k}d^{(p-k)}(1-d)^{N-p}\\,.\n\\end{eqnarray}\n \t\n\\subsection{Cross-talk}\n\nCross-talk is an effect where a recombination of an electron and a hole generates a photon and this photon is detected in a non-activated neighbor element\\cite{Buzhan}.\nAll multiplexing PNR detectors suffer from this effect but, it is not relevant when the SPDs are distant and the cross-talk counts can be temporally filtered. Where the detector is a SPD array, the cross-talk counts cannot be filtered and the cross-talk effect is relevant.\nCross-talk is most likely to happen at nearest neighbor element and we neglect other scenarios.\n\nUp to date, a few cross-talk models are available\\cite{Eraerds, Akiba, Afek, Dovrat}. Each has its own advantages and disadvantages, but none of them takes into account the finite size of the detector. Thus, we introduce a new model for the cross-talk: \nuntil this point, uniformity was the only assumption. 
Yet, in order to solve analytically the cross-talk effect, more assumptions must be made. The probability of cross-talk strongly depends on the number of non-activated neighbors, but it is impossible to know how many nearest neighbors are available. Instead, we check how many nearest neighbors are available on average, given that $p$ elements have already been activated, and we plug this number in as the total effective number of nearest neighbors, $\\rm{ENN}=4(1-\\frac{p}{N})\\frac{N-\\sqrt{N}}{N-1}$. This linear formula is reasonable, as it vanishes if no elements are available $(p=N)$. In the opposite limit, where all elements are available, the $\\rm{ENN}$ approaches four, a limit imposed by the rectangular detector's edge. We derive this formula simply by randomly choosing $p$ elements and counting their nearest neighbors, and then averaging over many different configurations.\n\n
We define $x$ to be the probability for cross-talk to one of the available nearest neighbors. We neglect terms proportional to higher powers of $x$ by assuming $x \\ll 1$. In particular, we neglect cross-talk generated by another cross-talk and more than one cross-talk event per element. Thus, the probability for one element to generate a cross-talk event is $4x(1-\\frac{p}{N})\\frac{N-\\sqrt{N}}{N-1}$, and the probability for $p$ elements to generate $\\ell$ cross-talk events is given by a binomial distribution. Thus, under the mentioned assumptions, the probability for $(s-p)$ elements to be activated by cross-talks from $p$ elements is,\n\n\\begin{eqnarray}\\label{XT}\n\\nonumber& M^{\\tilde x}_{XT}(s,p) =\\binom{p}{s-p}\\left(\\tilde x\\left(1-\\frac{p}{N}\\right)\\right)^{s-p}\\left(1-\\tilde x\\left(1-\\frac{p}{N}\\right)\\right)^{2p-s}\\\\\n\\end{eqnarray}\nwhere we define $\\tilde x = 4x\\frac{N-\\sqrt{N}}{N-1}$.\n\n\\subsection{The detected probabilities}\n\n
The real photon number probabilities ($\\vec P_{\\rm{real}}$) are related to the detected photon number probabilities ($\\vec P_{\\rm{det}}$) by\n\n\\begin{eqnarray}\\label{real}\n \\vec P_{\\rm{det}} = \\mathbf{M}_{XT}\\cdot\\mathbf{M}_{D}\\cdot \\mathbf{M}_{FS}\\cdot \\mathbf{M}_{L}\\cdot \\vec P_{\\rm{real}}\\,,\n\\end{eqnarray}\nwhere $\\mathbf{M}_{XT},\\mathbf{M}_{D},\\mathbf{M}_{FS},\\mathbf{M}_{L}$ are matrices quantifying the cross-talk, dark-counts, finite-size and loss effects, respectively. The ordering of the loss and finite size matrices is important, but we do not show a proof of the correct ordering here. Instead, we observe that Eq. \\ref{real} agrees with previous theoretical results that do not take matrix ordering into account\\cite{Fitch03,Paul96}.\n\n
We first calculate the detected statistics for an $n$-photon Fock state, a state with a fixed number $n$ of photons; we can then generalize to any other state by averaging the results over the real photon statistics. 
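Before specializing to Fock states, note that Eq.~\\ref{real} can be assembled numerically as in the following minimal Python sketch; the photon-number cutoff, the Poissonian input state, and all parameter values are illustrative assumptions, not taken from the experiment.
\\begin{verbatim}
# Sketch of Eq. (real): P_det = M_XT . M_D . M_FS . M_L . P_real on a
# truncated photon-number basis 0..n_max; all parameters are hypothetical.
import numpy as np
from math import comb, factorial

N, eta, d, x, n_max = 4, 0.6, 0.01, 0.02, 8
xt = 4.0 * x * (N - np.sqrt(N)) / (N - 1.0)           # effective cross-talk strength
dim = n_max + 1

def stirling2(m, k):
    """Stirling number of the second kind S(m, k)."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** m for j in range(k + 1)) // factorial(k)

M_L, M_FS, M_D, M_XT = (np.zeros((dim, dim)) for _ in range(4))
for n in range(dim):                                  # loss: n photons -> m surviving photons
    for m in range(n + 1):
        M_L[m, n] = comb(n, m) * eta ** m * (1 - eta) ** (n - m)
for m in range(dim):                                  # finite size: m photons -> k hit elements
    for k in range(min(m, N) + 1):
        M_FS[k, m] = comb(N, k) * factorial(k) * stirling2(m, k) / N ** m
for k in range(N + 1):                                # dark counts: k -> p fired elements
    for p in range(k, N + 1):
        M_D[p, k] = comb(N - k, p - k) * d ** (p - k) * (1 - d) ** (N - p)
for p in range(N + 1):                                # cross-talk: p -> s fired elements
    q = xt * (1 - p / N)
    for s in range(p, min(2 * p, n_max) + 1):
        M_XT[s, p] = comb(p, s - p) * q ** (s - p) * (1 - q) ** (2 * p - s)

mean = 1.5                                            # truncated Poissonian input, as an example
P_real = np.array([np.exp(-mean) * mean ** n / factorial(n) for n in range(dim)])
P_det = M_XT @ M_D @ M_FS @ M_L @ P_real              # detected click-number distribution
\\end{verbatim}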
\nWe first calculate the detected statistics for an $n$-photon Fock state, a state with a fixed number of $n$ photons; we can then generalize to any other state by averaging the results over the real photon statistics. The probability for $s$ detection events to occur due to an incident $n$-photon Fock state, after accounting for all the distorting effects, is\n\n\\begin{eqnarray}\\label{ProbTOT}\n&& P^{n}_{\\rm{det}}(s|\\eta,d,N,\\tilde{x}) = \\\\\n\\nonumber &&\\sum_{p=0}^s\\binom{p}{s-p}\\left(\\tilde x\\left(1-\\frac{p}{N}\\right)\\right)^{(s-p)}\\left(1-\\tilde x\\left(1-\\frac{p}{N}\\right)\\right)^{2p-s} \\\\\n\\nonumber && \\binom{N}{p}\\sum_{j=0}^p\\binom{p}{j}(-1)^{p-j}(1-d)^{N-j}\\left(1-\\eta+\\frac{j\\eta}{N}\\right)^n\\,.\n\\end{eqnarray}\nThis result is proven in appendix \\ref{appTH1} and agrees with previous analytical results when substituting $\\tilde{x}=0,\\,d=0$\\cite{Paul96} and $\\eta=1$\\cite{Fitch03}.\nThis agreement confirms that the loss matrix should be applied before the finite-size matrix, as mentioned above.\n\n\\section{Experimental setup for SPD calibration}\\label{Setup}\n\nFrom this point on we focus on a single SPD, i.e. $N=1$. In this case there are only two possibilities: either there is a detection event or there is not. Mathematically, this means $s=0$ or $1$ in Eq. \\ref{ProbTOT}, and the cross-talk summation therefore vanishes.\n\nIn order to calibrate the detector, we have used both single-mode squeezed vacuum (SMSV) and two-mode squeezed vacuum (TMSV) states with a calibrated adjustable attenuator on the SV light. Although only one of these states is needed, we show that both schemes work for calibration purposes. The attenuated SV light is directed through a bandpass filter and then sent into a single-mode fiber coupled to the detector. If a TMSV state is used, the two modes are fixed to orthogonal polarizations and spatially combined with a polarizing beam splitter (PBS) (see Fig. \\ref{fig1}).\n\nWe found it convenient to define the odds $O_{\\rm{det}}^{\\rm{n}}(\\eta,d,1,0) \\equiv \\frac{P_{\\rm{det}}^{\\rm{n}}(s=1|\\eta,d,1,0)} {P_{\\rm{det}}^{\\rm{n}}(s=0|\\eta,d,1,0)}$ of a detection event. We also replace $\\eta \\rightarrow \\eta t $, where $t$ is the transmission of the calibrated adjustable attenuator; henceforth $\\eta$ is the fixed efficiency and $t$ is a variable.\nWith these changes, Eq. \\ref{ProbTOT} reduces to\n\\begin{align}\nO_{\\rm{det}}^{\\rm{SMSV}}(\\eta t,d,1,0) = \\left(\\frac{\\sqrt{1+(2-\\eta t)\\eta \\bar{n}t}}{1-d}-1\\right) \\approx \\frac{(1-\\frac{\\eta t}{2})\\eta \\bar{n}t+d}{1-d}\\,, \\label{ProbRatioSMSV}\n\\end{align}\n\\begin{align}\nO_{\\rm{det}}^{\\rm{TMSV}}(\\eta t,d,1,0) = \\frac{(1-\\frac{\\eta t}{2})\\eta \\bar{n}t+d}{1-d}\\,, \\label{ProbRatioTMSV}\n\\end{align}\nwhere $\\bar{n}$ is the average photon number of the SV state. Eqs. \\ref{ProbRatioSMSV} and \\ref{ProbRatioTMSV} are for SMSV and TMSV states, respectively.\nThe approximation is a Taylor expansion for $\\bar{n}\\ll1$, and the full derivation is given in appendix \\ref{appTH3}.\n\nExperimentally, the probability of detection is given by the ratio of the number of detection events to the number of pump pulses. This probability is measured while varying the transmission of the neutral density filter (NDF). The efficiency parameter is then extracted from a second-order fit to Eq. \\ref{ProbRatioSMSV} or Eq. \\ref{ProbRatioTMSV}. Multiplexed PNR detectors can also be calibrated in a similar manner, though we do not demonstrate this in this manuscript.\n
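\nTo illustrate the extraction procedure, the following sketch generates synthetic odds from Eq. \\ref{ProbRatioTMSV}, fits a second-order polynomial $a_2t^2+a_1t+a_0$, and recovers the efficiency as $\\eta=-2a_2/a_1$ (Python/NumPy is assumed; the parameter values and the noise level are illustrative and are not taken from the experiment):\n\n\\begin{verbatim}\n# Sketch: recover eta from a quadratic fit of the odds versus the attenuator\n# transmission t, as in Eq. (ProbRatioTMSV).\nimport numpy as np\n\nrng = np.random.default_rng(0)\neta_true, d, n_bar = 0.17, 1e-4, 0.05   # efficiency, dark-count prob., mean photon number\nt = np.linspace(0.05, 1.0, 40)          # 40 attenuator settings\n\nodds = ((1 - eta_true * t / 2) * eta_true * n_bar * t + d) / (1 - d)\nodds_meas = odds * (1 + 0.01 * rng.standard_normal(t.size))  # mock measurement noise\n\na2, a1, a0 = np.polyfit(t, odds_meas, 2)  # second-order polynomial fit\neta_fit = -2 * a2 / a1                    # follows from the quadratic form of the odds\nprint(eta_fit)                            # close to eta_true\n\\end{verbatim}\n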
\n\\begin{figure}[t]\n\\centering\n\\fbox{\\includegraphics[width=\\linewidth]{fig1.png}}\n\\caption{The experimental setup. The upper part shows the setup where an SMSV state is used and the lower part the setup where a TMSV state is used. DM - dichroic mirror, PBS - polarizing beam splitter, HWP - half wave-plate, NDF - variable neutral density filter, BD - beam dump. Note that SPD2 is used only for comparison with the two-detector calibration procedure.}\n\\label{fig1}\n\\end{figure}\n\n\nThe experimental setup is described in Fig. \\ref{fig1}.\n780\\,nm photon pairs are generated by a spontaneous parametric down-conversion (SPDC) process in a 2\\,mm thick $\\beta$-BaB$_2$O$_4$ (BBO) crystal using a 390\\,nm frequency-doubled Ti:Sapphire pulsed laser.\nIn the first part, we have used collinear type-I SPDC to generate a horizontally polarized SMSV state. In the second part, we have used non-collinear type-II SPDC to generate a TMSV state. The two modes have been set to orthogonal polarizations and spatially overlapped with a PBS. For comparison, another SPD has been used to measure the detection efficiency with the two-detector method. For clarity, the beam path to the reference detector is indicated by a dashed line, where a beamsplitter (BS) or half-wave plate (HWP) was added to divert photons to the reference detector. We note that we have used a calibrated NDF as a convenient means of applying a known attenuation. Any other self-calibrated attenuation method can be used; for instance, the SMSV light can be attenuated by a single rotating polarizer.\n\n\\section{Experimental results}\\label{Results}\n\nThe present scheme is useful for evaluating the detection efficiency of an SPD because of the unique photon statistics of the SV light and the non-linear loss of the SPD. The non-linear loss alters the linear dependence of the single counts on the attenuation, and the detection efficiency is extracted from the curvature.\n\nThe SPD counts were accumulated for one second for each of $40$ different attenuation values of the NDF. The probability of a photon detection is given by the single counts divided by the total number of experiment runs. We repeated the experiment for two separate SPDs using both SMSV and TMSV light in order to demonstrate the ability to calibrate detectors of different efficiency. The results are presented in Fig. \\ref{fig2}. In each of the four measurements the data are fitted to a second-order polynomial, i.e. $a_2t^2+a_1t+a_0$. According to Eqs. \\ref{ProbRatioSMSV} and \\ref{ProbRatioTMSV} the efficiency is $\\eta = -2\\frac{a_2}{a_1}$.\n
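\nFor completeness, the conversion of raw counts into the plotted odds with Poissonian error bars could be sketched as follows (the number of pump pulses is an illustrative placeholder rather than the actual repetition rate, and the error propagation is standard first-order propagation):\n\n\\begin{verbatim}\n# Sketch: convert single counts to detection odds with Poissonian errors.\nimport numpy as np\n\ndef odds_with_errors(counts, pulses):\n    p = counts / pulses              # detection probability per pulse\n    dp = np.sqrt(counts) / pulses    # Poissonian error on the probability\n    odds = p / (1 - p)               # odds of a detection event\n    d_odds = dp / (1 - p) ** 2       # first-order propagation, d(odds)/dp = 1/(1-p)^2\n    return odds, d_odds\n\ncounts = np.array([1200, 5300, 11800])  # mock single counts at three NDF settings\nodds, err = odds_with_errors(counts, pulses=80_000_000)\n\\end{verbatim}\n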
\n\\begin{figure}[t]\n\\centering\n\\fbox{\\includegraphics[width=\\linewidth]{fig2.pdf}}\n\\caption{(Color Online) The odds of a detection event as a function of the NDF transmission for two separate detectors.\nSolid and empty symbols denote data taken with TMSV and SMSV light, respectively. Solid and dashed lines are fits to Eqs. \\ref{ProbRatioSMSV} and \\ref{ProbRatioTMSV}, respectively. SPD\\#1 is represented by blue circles and SPD\\#2 by pink boxes. Error bars are assumed to be due to Poissonian noise and are smaller than the symbol sizes; they are therefore not displayed.}\n\\label{fig2}\n\\end{figure}\n\nIn table \\ref{table} the results of the efficiency calibration by the presented single-detector method are summarized. These results are compared with the two-detector method, showing good agreement between the two methods for all detectors used and for both experimental setups. The detection efficiency is lower in the SMSV setup due to weaker coupling into the single-mode fiber, which is probably caused by spatial walk-off inside the non-linear crystal. This inefficient coupling is a loss factor that is observed equally well by both calibration methods.\n\n\\begin{table}[b]\n\\centering\n\\caption{The efficiencies measured by the presented single-detector method ($\\eta_1$) and the two-detector method ($\\eta_2$). Note that the SMSV efficiencies are lower than in the TMSV case due to weaker coupling into the single-mode fiber.}\n\\begin{tabular}{cccc}\n\\hline\nSPD $\\#$ & SV light & $\\eta_1$ & $\\eta_2$ \\\\\n\\hline\n$1$ & SMSV & $11.3\\pm1.1\\%$ & $11.8\\pm0.9\\%$ \\\\\n$2$ & SMSV & $7.4\\pm0.9\\%$ & $8.1\\pm0.9\\%$ \\\\\n$1$ & TMSV & $17.4\\pm1.0\\%$ & $17.3\\pm0.8\\%$ \\\\\n$2$ & TMSV & $12.7\\pm0.9\\%$ & $11.7\\pm0.8\\%$ \\\\\n\\hline\n\\end{tabular}\n \\label{table}\n\\end{table}\n\nIn order to show that the presented method is valid for any pump intensity, we repeated the experiment using TMSV light and SPD \\#1 for different pump intensities. The results are shown in Fig. \\ref{fig3}. As before, we fit the measurements to a second-order polynomial and calculate the efficiency from the polynomial coefficients.\n\n\\begin{figure}[t]\n\\centering\n\\fbox{\\includegraphics[width=\\linewidth]{fig3.pdf}}\n\\caption{(Color Online) The odds of a detection event as a function of the NDF transmission for SPD \\#1 when the pump power is varied.\nGreen diamonds are for a pump power of 250 mW, pink downward triangles for 240 mW, blue triangles for 215 mW, red circles for 180 mW and black boxes for 145 mW. Error bars are assumed to be due to Poissonian noise and are smaller than the symbol sizes; they are therefore not displayed.}\n\\label{fig3}\n\\end{figure}\n\nThe results for the different pump powers are summarized in table \\ref{table2}. Good agreement is found between the different pump powers, with a standard deviation of $0.5\\%$. This standard deviation is consistent with the separately calculated uncertainties of the detection efficiency.\n\n\\begin{table}[htbp]\n\\centering\n\\caption{The efficiencies as measured by the single-detector method ($\\eta_1$) with SPD \\#1 and TMSV light for different pump powers.}\n\\begin{tabular}{cc}\n\\hline\nPump power & $\\eta_1$ \\\\\n\\hline\n$145\\,\\rm{mW}$ & $17.9\\pm0.8\\%$ \\\\\n$180\\,\\rm{mW}$ & $16.5\\pm0.9\\%$ \\\\\n$215\\,\\rm{mW}$ & $17.2\\pm0.8\\%$ \\\\\n$240\\,\\rm{mW}$ & $16.8\\pm0.7\\%$ \\\\\n$250\\,\\rm{mW}$ & $17.6\\pm0.7\\%$ \\\\\n\\hline\n\\end{tabular}\n \\label{table2}\n\\end{table}\n\n\\section{Summary}\\label{Summary}\n\nWe have presented a model that characterizes a PNR detector based on SPDs. The model predicts the detected photon statistics in the presence of loss, finite size, dark counts and cross-talk, and it is also valid for a single SPD. The predicted statistics depend on the efficiency when SV light is detected; thus, the efficiency can be measured without a reference detector. We have experimentally measured the efficiency in this way and successfully compared it with the two-detector method.\n\n\\section{Acknowledgements}\nJPD and NMS would like to acknowledge support from the Army Research Office, the Defense Advanced Research Projects Agency, and the Louisiana Economic Development Assistantship Program.\n\n\\section{Appendix}\n\n\\subsection{Probability calculation}\\label{appTH1}\nWe first rewrite the matrix products in Eq. \\ref{real} as explicit summations and substitute the matrix elements according to Eqs. \\ref{loss}--\\ref{XT}:\n
\\begin{eqnarray}\\label{ProbTOT2}\n&& P^{n}_{\\rm{det}}(s|\\eta,d,N,\\tilde{x}) =\\\\\n\\nonumber &&\\sum_{p=0}^N\\binom{p}{s-p}\\left(\\tilde x\\left(1-\\frac{p}{N}\\right)\\right)^{s-p}\\left(1-\\tilde x\\left(1-\\frac{p}{N}\\right)\\right)^{2p-s}\\\\\n\\nonumber &&\\sum_{k=0}^N\\binom{N-k}{p-k}d^{(p-k)}(1-d)^{N-p}\\\\\n\\nonumber && \\sum_{m=0}^N\\frac{1}{N^m}\\binom{N}{k}k!S(m,k)\\binom{n}{m}\\eta^m(1-\\eta)^{n-m}\\,.\n\\end{eqnarray}\nWe now focus on the last two lines of Eq. \\ref{ProbTOT2}. We notice that $m\\leq{n}$ and $k\\leq{p}$, because the loss cannot increase the photon number and dark-counts cannot decrease the number of fired elements. After reordering the summations and substituting $\\binom{N}{k}\\binom{N-k}{p-k} = \\binom{N}{p}\\binom{p}{k}$ we get:\n\\begin{eqnarray}\\label{ProbTOT3}\n&& \\binom{N}{p} \\sum_{m=0}^n \\frac{1}{N^m} \\binom{n}{m}\\eta^m(1-\\eta)^{n-m}\\\\\n\\nonumber&& \\sum_{k=0}^p\\binom{p}{k}d^{(p-k)}(1-d)^{N-p}\\sum_{j=0}^k(-1)^{k-j}\\binom{k}{j}j^m\\,.\n\\end{eqnarray}\nWe reorder the summations in the second line, use $\\binom{p}{k}\\binom{k}{j} = \\binom{p}{j}\\binom{p-j}{k-j} $ and shift the summation index $k\\rightarrow k-j$, so that the second line becomes:\n\\begin{eqnarray}\\label{ProbTOT4}\n\\sum_{j=0}^p\\binom{p}{j}j^m\\sum_{k=0}^{p-j}\\binom{p-j}{k}(-1)^k d^{p-k-j}(1-d)^{N-p}\\,.\n\\end{eqnarray}\nThe inner summation equals $(1-d)^{N-j}(-1)^{p-j}$. Substituting this in Eq. \\ref{ProbTOT3} and reordering the summations, we get:\n\\begin{eqnarray}\\label{ProbTOT5}\n\\binom{N}{p}\\sum_{j=0}^p\\binom{p}{j}(1-d)^{N-j}(-1)^{p-j}\n\\sum_{m=0}^n\\binom{n}{m} \\left(\\frac{\\eta j}{N}\\right)^m (1-\\eta)^{n-m}\\,.\n\\end{eqnarray}\nThe second summation is a binomial expansion, and resumming it restores the third line of Eq. \\ref{ProbTOT}.\nThe cross-talk summation (the second line of Eq. \\ref{ProbTOT2}) remains almost as is; the only change is that the upper limit of the summation over $p$, the number of elements activated by signal photons and dark counts, is set to $s$, since the binomial factor vanishes for $p>s$. Because we neglect higher-order cross-talk, the number of cross-talk events is also limited by the number of already activated elements. The exact upper limit on the number of cross-talk events is $\\min(p,N-p)$, but we have checked numerically that the difference is negligible.\n\n\\subsection{Calculating SPD probabilities for SV states}\\label{appTH3}\n\nSubstituting $N=1$ in Eq. \\ref{ProbTOT} gives:\n\n\\begin{align}\\label{ProbRatio2}\n P^{n}_{\\rm{det}}(s|\\eta,d,1,\\tilde{x}) =\\sum_{j=0}^s\\binom{s}{j}(-1)^{s-j}(1-d)^{1-j}\\left(1-\\eta+j\\eta\\right)^n\\,.\n\\end{align}\nThe cross-talk summation in Eq. \\ref{ProbTOT} collapses, as an SPD has no neighbors to cross-talk to, so that $p=s$. We write the probabilities for no detection and for one detection event explicitly:\n\n\\begin{eqnarray}\n&& P^{n}_{\\rm{det}}(0|\\eta,d,1,0)=(1-d)\\left(1-\\eta\\right)^n\\,, \\label{P0_Fock}\\\\\n&&P^{n}_{\\rm{det}}(1|\\eta,d,1,0)=1-(1-d)\\left(1-\\eta\\right)^n\\,.\\label{P1_Fock}\n\\end{eqnarray}\nThe probability of zero counts is simply the probability of detecting none of the $n$ photons times the probability of having no dark-count; the probability of one detection event is the complementary probability.\n
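\nEqs. \\ref{P0_Fock} and \\ref{P1_Fock} can be sanity-checked with a direct Monte Carlo simulation of a single SPD (a minimal sketch in Python/NumPy; the parameter values are illustrative):\n\n\\begin{verbatim}\n# Sketch: Monte Carlo check of the single-SPD Fock-state probabilities.\nimport numpy as np\n\nrng = np.random.default_rng(1)\nn, eta, d, trials = 3, 0.2, 0.05, 200_000\n\n# Each photon is detected with probability eta; a dark count occurs with probability d.\nphoton_click = rng.random((trials, n)) < eta\ndark_click = rng.random(trials) < d\nclick = photon_click.any(axis=1) | dark_click\n\np1_mc = click.mean()\np1_theory = 1 - (1 - d) * (1 - eta) ** n   # Eq. (P1_Fock)\nprint(p1_mc, p1_theory)                    # agree within statistical error\n\\end{verbatim}\n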
\nNext, we average over the photon statistics of the SV state.\n\nFor a TMSV state, each mode has thermal photon statistics $P(n)=(1-\\mu)\\mu^n$, where $\\mu$ is related to the average photon number per mode, $\\bar{n}/2$, by $\\mu=\\frac{\\bar{n}/2}{1+\\bar{n}/2}$; here $\\bar{n}$ denotes the total average photon number of the TMSV state, and we write $\\mu$ rather than $x$ to avoid confusion with the cross-talk probability. After the two modes are combined spatially, the probability of having $2n$ photons is $P_{TMSV}(2n) = (1-\\mu)\\mu^n$. Averaging over these statistics, we get:\n\\begin{eqnarray}\n& P^{\\rm{TMSV}}_{\\rm{det}}(0|\\eta,d,1,0)=\\frac{(1-d)(1-\\mu)}{1-\\mu(1-\\eta)^2}\\,,\\label{P0_TMSV}\\\\\n& P^{\\rm{TMSV}}_{\\rm{det}}(1|\\eta,d,1,0)=1-\\frac{(1-d)(1-\\mu)}{1-\\mu(1-\\eta)^2}\\,.\\label{P1_TMSV}\n\\end{eqnarray}\nTaking the ratio of the last two equations, using $\\frac{\\mu}{1-\\mu}=\\frac{\\bar{n}}{2}$ and replacing $\\eta\\rightarrow\\eta t$, gives Eq. \\ref{ProbRatioTMSV}.\n\nFor an SMSV state the photon statistics is $P_{SMSV}(n)=\\cos^2{\\frac{n\\pi}{2}}\\frac{n!}{2^n((\\frac{n}{2})!)^2}\\frac{\\tanh^n{r}}{\\cosh{r}}$, where $r$ is the squeezing parameter. After averaging we get:\n\\begin{eqnarray}\n& P^{\\rm{SMSV}}_{\\rm{det}}(0|\\eta,d,1,0)=(1-d)\\frac{1}{\\sqrt{1+(2\\eta-\\eta^2)\\bar{n}}}\\,,\\label{P0_SMSV}\\\\\n& P^{\\rm{SMSV}}_{\\rm{det}}(1|\\eta,d,1,0)=1-(1-d)\\frac{1}{\\sqrt{1+(2\\eta-\\eta^2)\\bar{n}}}\\,,\\label{P1_SMSV}\n\\end{eqnarray}\nwhere $\\bar{n} = \\sinh^2{r}$ is the average photon number and the hyperbolic function identities $ \\cosh{\\big(\\tanh^{-1}{y}\\big)}=\\frac{1}{\\sqrt{1-y^2}}$ and $\\cosh^2{r}-\\sinh^2{r}=1 $ were used.\nTaking the ratio of the last two equations and replacing $\\eta\\rightarrow\\eta t$ yields Eq. \\ref{ProbRatioSMSV}.\n
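\nAs a final consistency check, the closed-form zero-count probabilities can be compared with a direct truncated sum over the photon-number distributions. A minimal sketch (Python; the parameter values are illustrative and not taken from the experiment):\n\n\\begin{verbatim}\n# Sketch: numerical check of Eqs. (P0_TMSV) and (P0_SMSV) by truncated summation.\nfrom math import comb, cosh, sinh, sqrt, tanh\n\neta, d = 0.17, 1e-4\n\n# TMSV: total mean photon number n_bar, thermal parameter mu per mode.\nn_bar = 0.05\nmu = (n_bar / 2) / (1 + n_bar / 2)\np0_tmsv = sum((1 - mu) * mu ** n * (1 - d) * (1 - eta) ** (2 * n) for n in range(80))\nprint(p0_tmsv, (1 - d) * (1 - mu) / (1 - mu * (1 - eta) ** 2))\n\n# SMSV: squeezing parameter r, mean photon number sinh(r)^2.\nr = 0.3\np0_smsv = sum(comb(2 * m, m) / 4 ** m * tanh(r) ** (2 * m) / cosh(r)\n              * (1 - d) * (1 - eta) ** (2 * m) for m in range(80))\nprint(p0_smsv, (1 - d) / sqrt(1 + (2 * eta - eta ** 2) * sinh(r) ** 2))\n\\end{verbatim}\n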