diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzdxje" "b/data_all_eng_slimpj/shuffled/split2/finalzzdxje" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzdxje" @@ -0,0 +1,5 @@ +{"text":"\\section{\\label{}}\n\n\n\\section{Introduction}\n\\label{sec:Introduction}\n\nWe consider the problem of Bayesian inference from cosmological data, in the common scenario where we can generate synthetic data through forward simulations, but where the exact likelihood function is intractable. The generative process can be extremely general: it may be a noisy non-linear dynamical system involving an unrestricted number of latent variables. Likelihood-free inference methods, also known as approximate Bayesian computation \\citep[ABC, see][for reviews]{Marin2011,Lintusaari2017a} replace likelihood calculations with data model evaluations. In recent years, they have emerged as a viable alternative to likelihood-based techniques, when the simulator is sufficiently cheap. Applications in cosmology include measuring cosmological parameters from type Ia supernovae \\citep{Weyant2013} and weak lensing peak counts \\citep{Lin2015}, analysing the galaxy halo connection \\citep{Hahn2017}, inferring the photometric and size evolution of galaxies \\citep{Carassou2017}, measuring cosmological redshift distributions \\citep{Kacprzak2018}, estimating the ionising background from the Lyman-$\\alpha$ and Lyman-$\\beta$ forests \\citep{Davies2018}.\n\nIn its simplest form, ABC takes the form of likelihood-free rejection sampling and involves forward simulating data from parameters drawn from the prior, then accepting parameters when the discrepancy (by some measure) between simulated data and observed data is smaller than a user-specified threshold $\\varepsilon$. Such an approach tends to be extremely expensive since many simulated data sets get rejected, due to the lack of knowledge about the relation between the model parameters and the corresponding discrepancy. Variants of likelihood-free rejection sampling such as Population (or Sequential) Monte Carlo ABC \\citep[\\textsc{pmc}-\\textsc{abc} or \\textsc{smc}-\\textsc{abc}, see][for implementations aimed at astrophysical applications]{Akeret2015,Ishida2015,Jennings2017} improve upon this scheme by making the proposal adaptive; however, they do not use a probabilistic model for the relation between parameters and discrepancies (also known as a surrogate surface), so that their practical use usually necessitates $\\mathcal{O}(10^4-10^6)$ evaluations of the simulator. \n\nIn this paper, we address the challenging problem where the number of simulations is extremely limited, e.g. to a few thousand, rendering the use of sampling-based ABC methods impossible. To this end, we use Bayesian optimisation for likelihood-free inference \\citep[{\\textsc{bolfi}},][]{GutmannCorander2016}, an algorithm which combines probabilistic modelling of the discrepancy with optimisation to facilitate likelihood-free inference. Since it was introduced, {\\textsc{bolfi}} has been applied to various statistical problems in science, including inference of the Ricker model \\citep{GutmannCorander2016}, the Lotka-Volterra predator-prey model and population genetic models \\citep{Jaervenpaeae2018}, pathogen spread models \\citep{Lintusaari2017a}, atomistic structure models in materials \\citep{Todorovic2017}, and cognitive models in human-computer interaction \\citep{Kangasraeaesioe2017}. 
This work aims at introducing {\\textsc{bolfi}} into cosmological data analysis and at presenting its first cosmological application. We focus on computable parametric approximations to the true likelihood (also known as synthetic likelihoods), rendering the approach completely $\\varepsilon$-free. Recently, \\citet{Jaervenpaeae2017} introduced an acquisition function for Bayesian optimisation (the expected integrated variance), specifically tailored to perform efficient and accurate ABC. We extend their work by deriving the expression of the expected integrated variance in the parametric approach. This acquisition function measures the uncertainty in the estimate of the {\\textsc{bolfi}} posterior density (an uncertainty due to the limited number of simulations), in expectation over the future evaluation of the simulation model. The next simulation location is proposed so that this expected uncertainty is minimised. As a result, high-fidelity posterior inferences can be obtained with orders of magnitude fewer simulations than with likelihood-free rejection sampling. As examples, we demonstrate the use of {\\textsc{bolfi}} on the problems of summarising Gaussian signals and inferring cosmological parameters from the Joint Lightcurve Analysis (JLA) supernovae data set \\citep{Betoule2014}.\n\nThe structure of this paper is as follows. In section \\ref{sec:Inference of simulator-based statistical models}, we provide a review of the formalism for the inference of simulator-based statistical models. In section \\ref{sec:Regression and Optimisation for likelihood-free inference}, we describe {\\textsc{bolfi}} and discuss the regression and optimisation strategies. In particular, we provide the optimal acquisition rule for ABC in the parametric approach to likelihood approximation. Applications are given in section \\ref{sec:Applications}. The developed method is discussed in section \\ref{sec:Discussion} in the context of cosmological data analysis. Section \\ref{sec:Conclusion} concludes the paper. Mathematical details and descriptions of the case studies are presented in the appendices.
\n\n\\section{Inference of simulator-based statistical models}\n\\label{sec:Inference of simulator-based statistical models}\n\n\\subsection{Simulator-based statistical models}\n\\label{ssec:Simulator-based statistical models}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tikzpicture}\n\t\\pgfdeclarelayer{background}\n\t\\pgfdeclarelayer{foreground}\n\t\\pgfsetlayers{background,main,foreground}\n\n\t\\tikzstyle{probability}=[draw, thick, text centered, rounded corners, minimum height=1em, minimum width=1em, fill=green!20]\n\t\\tikzstyle{variabl}=[draw, thick, text centered, circle, minimum height=1em, minimum width=1em]\n\n\t\\def0.7{0.7}\n\t\\def2.0{2.0}\n\n\n \\node (thetaprobaii) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\uptheta})$};\n \\path (thetaprobaii.south)+(0,-0.7) node (thetaii) [variabl]\n {$\\boldsymbol{\\uptheta}$};\n \\path (thetaii.south)+(0,-0.7) node (dprobaii) [probability]\n {$\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$};\n \\path (dprobaii.south)+(0,-0.7) node (dii) [variabl]\n {$\\textbf{d}$};\n \n \n \\path (thetaprobaii.west)+(-2.0,0) node (thetaprobai) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\uptheta})$};\n \\path (thetaprobai.south)+(0,-0.7) node (thetai) [variabl]\n {$\\boldsymbol{\\uptheta}$};\n \\path (thetai.south)+(0,-0.7) node (di) [variabl]\n {$\\textbf{d}$};\n\n\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaprobaii) -- (thetaii);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaii) -- (dprobaii);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (dprobaii) -- (dii);\n\n\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaprobai) -- (thetai);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetai) -- (di);\t\n\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{Hierarchical representation of the exact Bayesian problem for simulator-based statistical models of different complexities: a deterministic simulator (left), and a stochastic simulator (right).\\label{fig:BHM_exact}}\n\\end{figure}\n\nSimulator-based statistical models (also known as generative models) can be written in a hierarchical form (figure \\ref{fig:BHM_exact}), where $\\boldsymbol{\\uptheta}$ are the parameters of interest, and $\\textbf{d}$ the simulated data. $\\mathpzc{P}(\\boldsymbol{\\uptheta})$ is the prior probability distribution of $\\boldsymbol{\\uptheta}$ and $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$ is the sampling distribution of $\\textbf{d}$ given $\\boldsymbol{\\uptheta}$.\n\nThe simplest case (figure \\ref{fig:BHM_exact}, left) is when the simulator is a deterministic function of its input and does not use any random variable, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta}) = \\updelta_\\mathrm{D}(\\textbf{d} - \\boldsymbol{\\hat{\\mathrm{d}}}(\\boldsymbol{\\uptheta})) ,\\label{eq:Dirac_deterministic_simulator}\n\\end{equation}\nwhere $\\updelta_\\mathrm{D}$ is a Dirac delta distribution and $\\boldsymbol{\\hat{\\mathrm{d}}}$ a deterministic function of $\\boldsymbol{\\uptheta}$.\n\nIn a more generic scenario (figure \\ref{fig:BHM_exact}, right), the simulator is stochastic, in the sense that the data are drawn from an overall (but often unknown analytically) probability distribution function (pdf) $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$. Equation \\eqref{eq:Dirac_deterministic_simulator} does not hold in this case. The scatter between different realisations of $\\textbf{d}$ given the same $\\boldsymbol{\\uptheta}$ can have various origins. 
In the simplest case, it only reflects the intrinsic uncertainty, which is of interest. More generically, additional nuisance parameters can be at play to produce the data $\\textbf{d}$ and will contribute to the uncertainty. This ``latent space'' can often be hundred-to-multi-million dimensional. Simulator-based cosmological models are typically of this kind: although the physical and observational processes simulated are repeatable features about which inferences can be made, the particular realisation of Fourier phases of the data is entirely noise-driven. Ideally, phase-dependent quantities should not contribute to any measure of match or mismatch between model and data.\n\n\\subsection{The exact Bayesian problem}\n\\label{ssec:The exact Bayesian problem}\n\nThe inference problem is to evaluate the probability of $\\boldsymbol{\\uptheta}$ given $\\textbf{d}$,\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\textbf{d}) = \\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta}) \\, \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{\\mathpzc{P}(\\textbf{d})},\n\\label{eq:exact_problem_Bayes}\n\\end{equation}\nfor the observed data $\\textbf{d}_\\mathrm{O}$, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\textbf{d})_\\mathrm{|\\textbf{d}=\\textbf{d}_O} = \\mathcal{L}(\\boldsymbol\\uptheta) \\, \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{Z_\\textbf{d}} ,\n\\end{equation}\nwhere the exact likelihood for the problem is defined as\n\\begin{equation}\n\\mathcal{L}(\\boldsymbol\\uptheta) \\equiv \\mathpzc{P}(\\textbf{d}|\\boldsymbol\\uptheta)_\\mathrm{|\\textbf{d}=\\textbf{d}_O} .\n\\end{equation}\nIt is generally of unknown analytical form. The normalisation constant is $Z_\\textbf{d} \\equiv \\mathpzc{P}(\\textbf{d})_\\mathrm{|\\textbf{d}=\\textbf{d}_O}$, where $\\mathpzc{P}(\\textbf{d})$ is the marginal distribution of $\\textbf{d}$.\n\n\\subsection{Approximate Bayesian computation}\n\\label{ssec:Approximate Bayesian computation}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tikzpicture}\n\t\\pgfdeclarelayer{background}\n\t\\pgfdeclarelayer{foreground}\n\t\\pgfsetlayers{background,main,foreground}\n\n\t\\tikzstyle{probability}=[draw, thick, text centered, rounded corners, minimum height=1em, minimum width=1em, fill=green!20]\n\t\\tikzstyle{variabl}=[draw, thick, text centered, circle, minimum height=1em, minimum width=1em]\n\n\t\\def0.7{0.7}\n\n \\node (thetaproba) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\uptheta})$};\n \\path (thetaproba.south)+(0,-0.7) node (theta) [variabl]\n {$\\boldsymbol{\\uptheta}$};\n \\path (theta.south)+(0,-0.7) node (dproba) [probability]\n {$\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$};\n \\path (dproba.south)+(0,-0.7) node (d) [variabl]\n {$\\textbf{d}$};\n \\path (d.south)+(0,-0.7) node (phiproba) [probability]\n {$\\mathpzc{P}(\\boldsymbol{\\Phi}|\\textbf{d})$};\n \\path (phiproba.south)+(0,-0.7) node (phi) [variabl]\n {$\\boldsymbol{\\Phi}$};\n\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (thetaproba) -- (theta);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (theta) -- (dproba);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (dproba) -- (d);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (d) -- (phiproba);\n\t\\path [draw, line width=0.7pt, arrows={-latex}] (phiproba) -- (phi);\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{Hierarchical representation of the approximate Bayesian inference problem for simulator-based statistical models, with a compression of the raw data to a set of summary 
statistics.\\label{fig:BHM_approx}}\n\\end{figure}\n\nInference of simulator-based statistical models is usually based on a finite set of simulated data $\\textbf{d}_{\\boldsymbol{\\uptheta}}$, generated with parameter value $\\boldsymbol{\\uptheta}$, and on a measurement of the discrepancy between simulated data and observed data $\\textbf{d}_\\mathrm{O}$. This discrepancy is used to define an approximation to the exact likelihood $\\mathcal{L}(\\boldsymbol{\\uptheta})$. The approximation happens on multiple levels.\n\nOn a physical and statistical level, the approximation consists of compressing the full data $\\textbf{d}_\\mathrm{O}$ to a set of summary statistics $\\boldsymbol{\\Phi}_\\mathrm{O}$ before performing inference. Similarly, simulated data $\\textbf{d}_{\\boldsymbol{\\uptheta}}$ are compressed to simulated summary statistics $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$. This can be seen as adding a layer to the Bayesian hierarchical model (figure \\ref{fig:BHM_approx}). The purpose of this operation is to filter out the information in $\\textbf{d}$ that is not deemed relevant to the inference of $\\boldsymbol{\\uptheta}$, so as to reduce the dimensionality of the problem. Ideally, $\\boldsymbol{\\Phi}$ should be \\textit{sufficient} for parameters $\\boldsymbol{\\uptheta}$, i.e. formally $\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}) = \\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi},\\textbf{d})$ or equivalently $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\Phi},\\boldsymbol{\\uptheta}) = \\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\Phi})$, which happens when the compression is lossless. However, sufficient summary statistics are generally unknown or even impossible to design; therefore the compression from $\\textbf{d}$ to $\\boldsymbol{\\Phi}$ will usually be lossy. The approximate inference problem to be solved is now $\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}) = \\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta}) \\, \\dfrac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{\\mathpzc{P}(\\boldsymbol{\\Phi})}$ for the observed summary statistics $\\boldsymbol{\\Phi}_\\mathrm{O}$, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi})_\\mathrm{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_O} = L(\\boldsymbol\\uptheta) \\, \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})}{Z_{\\boldsymbol{\\Phi}}} .\n\\label{eq:approx_problem_Bayes}\n\\end{equation}\nIn other words, $\\mathcal{L}(\\boldsymbol{\\uptheta})$ is replaced by\n\\begin{equation}\nL(\\boldsymbol{\\uptheta}) \\equiv \\mathpzc{P}(\\boldsymbol\\Phi|\\boldsymbol\\uptheta)_\\mathrm{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_O} ,\n\\label{eq:L_theta}\n\\end{equation}\nand $Z_\\textbf{d}$ by $Z_{\\boldsymbol{\\Phi}} \\equiv \\mathpzc{P}(\\boldsymbol{\\Phi})_{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}}$. 
Inference of model \\ref{fig:BHM_approx} gives\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol\\uptheta, \\textbf{d} | \\boldsymbol\\Phi) \\propto \\mathpzc{P}(\\boldsymbol\\Phi|\\textbf{d}) \\, \\mathpzc{P}(\\textbf{d}|\\boldsymbol\\uptheta) \\, \\mathpzc{P}(\\boldsymbol\\uptheta),\n\\label{eq:BHM_approx_expansion}\n\\end{equation}\nwith, after marginalisation over $\\textbf{d}$,\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol\\uptheta | \\boldsymbol\\Phi) = \\int \\mathpzc{P}(\\boldsymbol\\uptheta, \\textbf{d} | \\boldsymbol\\Phi) \\, \\mathrm{d}\\textbf{d} .\n\\label{eq:BHM_approx_marginalisation}\n\\end{equation}\nTherefore, the approximate likelihood $L(\\boldsymbol{\\uptheta})$ must satisfy\n\\begin{equation}\nL(\\boldsymbol{\\uptheta}) \\propto \\int \\mathpzc{P}(\\boldsymbol\\Phi|\\textbf{d})_\\mathrm{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_O} \\, \\mathpzc{P}(\\textbf{d}|\\boldsymbol\\uptheta) \\, \\mathrm{d}\\textbf{d} .\n\\label{eq:BHM_approx_likelihood}\n\\end{equation}\nIn many cases, the compression from $\\textbf{d}$ to $\\boldsymbol{\\Phi}$ is deterministic, i.e.\n\\begin{equation}\n\\mathpzc{P}(\\boldsymbol{\\Phi}|\\textbf{d}) = \\updelta_\\mathrm{D}(\\boldsymbol{\\Phi} - \\boldsymbol{\\hat{\\Phi}}(\\textbf{d})) ,\n\\label{eq:Dirac_compression}\n\\end{equation}\nwhich simplifies the integral over $\\textbf{d}$ in equations \\eqref{eq:BHM_approx_marginalisation} and \\eqref{eq:BHM_approx_likelihood}.\n\nOn a practical level, $L(\\boldsymbol{\\uptheta})$ is still of unknown analytical form (which is a property of $\\mathpzc{P}(\\boldsymbol\\Phi|\\boldsymbol\\uptheta)$ inherited from $\\mathpzc{P}(\\textbf{d}|\\boldsymbol{\\uptheta})$ in model \\ref{fig:BHM_approx}). Therefore, it has to be approximated using the simulator. We denote by $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ an estimate of $L(\\boldsymbol{\\uptheta})$ computed using $N$ realisations of the simulator. The limiting approximation, in the case where infinite computer resources were available, is denoted by $\\widetilde{L}(\\boldsymbol{\\uptheta})$, such that\n\\begin{equation}\n\\widehat{L}^N(\\boldsymbol{\\uptheta}) \\xrightarrow[N \\rightarrow \\infty]{} \\widetilde{L}(\\boldsymbol{\\uptheta}) .\n\\end{equation}\nNote that $\\widetilde{L}(\\boldsymbol{\\uptheta})$ can be different from $L(\\boldsymbol{\\uptheta})$, depending on the assumptions made to construct $\\widehat{L}^N(\\boldsymbol{\\uptheta})$. These are discussed in section \\ref{ssec:Computable approximations of the likelihood}.\n\n\\subsection{Computable approximations of the likelihood}\n\\label{ssec:Computable approximations of the likelihood}\n\n\\subsubsection{Deterministic simulators}\n\\label{sssec:Deterministic simulators}\n\nThe simplest possible case is when the simulator does not use any random variable, i.e. $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ is an entirely deterministic function of $\\boldsymbol{\\uptheta}$ (see figure \\ref{fig:BHM_exact}, left). Equivalently, all the conditional probabilities appearing in equation \\eqref{eq:BHM_approx_expansion} reduce to Dirac delta distributions given by equations \\eqref{eq:Dirac_deterministic_simulator} and \\eqref{eq:Dirac_compression}. 
In this case, one can directly use the approximate likelihood given by equation \\eqref{eq:L_theta}, complemented by an assumption on the functional shape of $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$.\n\n\\subsubsection{Parametric approximations and the synthetic likelihood}\n\\label{sssec:Parametric approximations and the synthetic likelihood}\n\nWhen the simulator is not deterministic, the pdf $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$ is unknown analytically. Nonetheless, in some situations, it may be reasonably assumed to follow specific parametric forms.\n\nFor example, if $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ is obtained through averaging a sufficient number of independent and identically distributed variables contained in $\\textbf{d}$, the central limit theorem suggests that a Gaussian distribution is appropriate, i.e. $\\widetilde{L}(\\boldsymbol{\\uptheta}) = \\exp\\left[\\tilde{\\ell}(\\boldsymbol{\\uptheta})\\right]$ with \n\\begin{equation}\n-2 \\tilde{\\ell}(\\boldsymbol{\\uptheta}) \\equiv \\log \\left| 2\\pi \\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}} \\right| + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}})^\\intercal \\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}}),\n\\end{equation}\nwhere the mean and covariance matrix,\n\\begin{equation}\n\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}\\left[ \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}} \\right] \\enskip \\mathrm{and} \\enskip \\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}\\left[ (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}}) (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}})^\\intercal \\right],\n\\end{equation}\ncan depend on $\\boldsymbol{\\uptheta}$. This is an approximation of $L(\\boldsymbol{\\uptheta})$, unless the summary statistics $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ are indeed Gaussian-distributed. $\\boldsymbol{\\upmu}_{\\boldsymbol{\\uptheta}}$ and $\\boldsymbol{\\Sigma}_{\\boldsymbol{\\uptheta}}$ are generally unknown, but can be estimated using the simulator: given a set of $N$ simulations $\\lbrace \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}^{(i)} \\rbrace$, drawn independently from $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$, one can define\n\\begin{equation}\n\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}^N\\left[ \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}} \\right] \\enskip \\mathrm{and} \\enskip \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}} \\equiv \\mathrm{E}^N\\left[ (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}}) (\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}-\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}})^\\intercal \\right],\n\\label{eq:mean_covariance_empirical}\n\\end{equation}\nwhere $\\mathrm{E}^N$ stands for the empirical average over the set of simulations. 
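In practice, these empirical estimators are straightforward to obtain from a batch of simulations. The following minimal sketch (in Python; \\texttt{phi\\_sims}, an $N \\times p$ array of simulated summary statistics at a fixed $\\boldsymbol{\\uptheta}$, and \\texttt{phi\\_obs}, the observed summary vector, are hypothetical variable names) evaluates them together with the Gaussian synthetic log-likelihood defined in equation \\eqref{eq:synthetic_likelihood} below:
\\begin{verbatim}
import numpy as np

def gaussian_synthetic_loglike(phi_obs, phi_sims):
    """Estimate of the Gaussian synthetic log-likelihood from N simulations.

    phi_obs  : observed summary statistics, shape (p,)
    phi_sims : simulated summaries at a fixed parameter value, shape (N, p)
    """
    mu_hat = phi_sims.mean(axis=0)                              # empirical mean
    sigma_hat = np.atleast_2d(np.cov(phi_sims, rowvar=False))   # empirical covariance
    diff = phi_obs - mu_hat
    chi2 = diff @ np.linalg.solve(sigma_hat, diff)
    logdet = np.linalg.slogdet(2.0 * np.pi * sigma_hat)[1]
    return -0.5 * (logdet + chi2)
\\end{verbatim}
Note that \\texttt{np.cov} uses the unbiased normalisation by $N-1$, which differs only marginally from the empirical average $\\mathrm{E}^N$ for large $N$.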
A computable approximation of the likelihood is therefore $\\widehat{L}^N(\\boldsymbol{\\uptheta}) = \\exp\\left[ \\hat{\\ell}^N(\\boldsymbol{\\uptheta}) \\right]$, where\n\\begin{equation}\n-2 \\hat{\\ell}^N(\\boldsymbol{\\uptheta}) \\equiv \\log \\left| 2\\pi \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}} \\right| + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}})^\\intercal \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}}).\n\\label{eq:synthetic_likelihood}\n\\end{equation}\nDue to the approximation of the expectation $\\mathrm{E}$ with an empirical average $\\mathrm{E}^N$, both $\\boldsymbol{\\hat{\\upmu}}_{\\boldsymbol{\\uptheta}}$ and $\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$ become random objects. The approximation of the likelihood $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ is therefore a random function with some intrinsic uncertainty itself, and its computation is a stochastic process. This is further discussed using a simple example in section \\ref{ssec:Summarising Gaussian signals}.\n\nThe approximation given in equation \\eqref{eq:synthetic_likelihood}, known as the synthetic likelihood \\citep{Wood2010,Price2017}, has already been applied successfully to perform approximate inference in several scientific fields. However, as pointed out by \\citet{SellentinHeavens2016}, for inference from Gaussian-distributed summaries $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ with an estimated covariance matrix $\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$, a different parametric form, namely a multivariate $t$-distribution, should rather be used. The investigation of a synthetic $t$-likelihood is left to future work.\n\nIn section \\ref{ssec:Summarising Gaussian signals} and appendix \\ref{apx:Summarising Gaussian signals}, we extend previous work on the Gaussian synthetic likelihood and introduce a Gamma synthetic likelihood for the case where the $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ are (or can be assumed to be) Gamma-distributed. \n\n\\subsubsection{Non-parametric approximations and likelihood-free rejection sampling}\n\\label{sssec:Non-parametric approximations and likelihood-free rejection sampling}\n\nAn alternative to assuming a parametric form for $L(\\boldsymbol{\\uptheta})$ is to replace it by a kernel density estimate of the distribution of a discrepancy between simulated and observed summary statistics, i.e.\n\\begin{equation}\n\\widetilde{L}(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}) \\right],\n\\end{equation}\nwhere $\\Delta_{\\boldsymbol{\\uptheta}}$ is a non-negative function of $\\boldsymbol{\\Phi}_\\mathrm{O}$ and $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ (usually of $\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$), which can also depend on $\\boldsymbol{\\uptheta}$ and any variable used internally by the simulator, and the kernel $\\kappa$ is a non-negative, univariate function independent of $\\boldsymbol{\\uptheta}$ (usually with a maximum at zero). 
A computable approximation of the likelihood is then given by\n\\begin{equation}\n\\widehat{L}^N(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}^N\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}) \\right] .\n\\end{equation}\n\nFor likelihood-free inference, $\\kappa$ is often chosen as the uniform kernel on the interval $\\left[ 0, \\varepsilon \\right)$, i.e. $\\kappa(u) \\propto \\chi_{\\left[ 0, \\varepsilon \\right)}(u)$, where $\\varepsilon$ is called the threshold and the indicator function $\\chi_{\\left[ 0, \\varepsilon \\right)}$ equals one if $u \\in \\left[ 0, \\varepsilon \\right)$ and zero otherwise. This yields\n\\begin{equation}\n\\widetilde{L}(\\boldsymbol{\\uptheta}) \\propto \\mathpzc{P}(\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon) \\quad \\mathrm{and} \\quad \\widehat{L}^N(\\boldsymbol{\\uptheta}) \\propto \\mathpzc{P}^N(\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon),\n\\label{eq:approximate_likelihood_acceptance}\n\\end{equation}\nwhere $\\mathpzc{P}^N(\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon)$ is the empirical probability that the discrepancy is below the threshold. $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ can be straightforwardly evaluated by running simulations, computing $\\Delta_{\\boldsymbol{\\uptheta}}$ and using $\\Delta_{\\boldsymbol{\\uptheta}} \\leq \\varepsilon$ as a criterion for acceptance or rejection of proposed samples. Such an approach is often simply (or mistakenly) referred to as approximate Bayesian computation (ABC) in the astrophysics literature, although the more appropriate and explicit denomination is likelihood-free rejection sampling \\citep[see e.g.][]{Marin2011}.\n\nIt is interesting to note that the parametric approximate likelihood approach of section \\ref{sssec:Parametric approximations and the synthetic likelihood} can be embedded into the non-parametric approach. Indeed, $\\Delta_{\\boldsymbol{\\uptheta}}$ can be defined as\n\\begin{equation}\n\\Delta^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}_{\\boldsymbol{\\uptheta}} \\equiv \\log|2\\pi \\textbf{C}_{\\boldsymbol{\\uptheta}}| + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}})^\\intercal \\textbf{C}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}})\n\\end{equation}\nfor some positive semidefinite matrix $\\textbf{C}_{\\boldsymbol{\\uptheta}}$. The second term is the square of the Mahalanobis distance, which includes the Euclidean distance as a special case, when $\\textbf{C}_{\\boldsymbol{\\uptheta}}$ is the identity matrix. 
Using an exponential kernel $\\kappa(u) = \\exp(-u\/2)$ and $\\textbf{C}_{\\boldsymbol{\\uptheta}} = \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$ gives $\\widetilde{L}(\\boldsymbol{\\uptheta}) = \\mathrm{E}\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}^{\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}}) \\right]$ and $\\widehat{L}^N(\\boldsymbol{\\uptheta}) = \\mathrm{E}^N\\left[ \\kappa(\\Delta_{\\boldsymbol{\\uptheta}}^{\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}}) \\right]$ with\n\\begin{eqnarray}\n-2 \\log\\left[\\kappa(\\Delta_{\\boldsymbol{\\uptheta}}^{\\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}}) \\right] & = & \\log \\left| 2\\pi \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}} \\right| \\\\\n& & + (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}})^\\intercal \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}^{-1} (\\boldsymbol{\\Phi}_\\mathrm{O} - \\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}), \\nonumber\n\\end{eqnarray}\nthe form of which is similar to equation \\eqref{eq:synthetic_likelihood}. In fact, \\citet[][proposition 1]{GutmannCorander2016} show that the synthetic likelihood satisfies\n\\begin{eqnarray}\n-2\\tilde{\\ell}(\\boldsymbol{\\uptheta}) & = & J(\\boldsymbol{\\uptheta}) + \\mathrm{constant}, \\quad \\mathrm{and}\\label{eq:l_J_proposition_1}\\\\\n-2\\hat{\\ell}^N(\\boldsymbol{\\uptheta}) & = & \\widehat{J}^N(\\boldsymbol{\\uptheta}) + \\mathrm{constant},\\label{eq:l_J_proposition_2}\n\\end{eqnarray}\nwhere \n\\begin{equation}\nJ(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}\\left[ \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}} \\right]\n\\label{eq:def_J}\n\\end{equation}\nand\n\\begin{equation}\n\\widehat{J}^N(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}^N\\left[ \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}} \\right]\n\\label{eq:def_J_N}\n\\end{equation}\nare respectively the expectation and the empirical average of the discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}$, for $\\textbf{C}_{\\boldsymbol{\\uptheta}}= \\boldsymbol{\\hat{\\Sigma}}_{\\boldsymbol{\\uptheta}}$.\n\n\\section{Regression and Optimisation for likelihood-free inference}\n\\label{sec:Regression and Optimisation for likelihood-free inference}\n\n\\subsection{Computational difficulties with likelihood-free rejection sampling}\n\\label{ssec:Computational difficulties with likelihood-free rejection sampling}\n\nWe have seen in section \\ref{ssec:Computable approximations of the likelihood} that computable approximations $\\widehat{L}^N(\\boldsymbol{\\uptheta})$ of the likelihood $L(\\boldsymbol{\\uptheta})$ are stochastic processes, due to the use of simulations to approximate intractable expectations. In the most popular ABC approach, i.e. likelihood-free rejection sampling (see section \\ref{sssec:Non-parametric approximations and likelihood-free rejection sampling}), the expectations are approximated by empirical probabilities that the discrepancy is below the threshold $\\varepsilon$. 
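For concreteness, a minimal sketch of this baseline scheme is given below (in Python; \\texttt{prior\\_draw}, \\texttt{simulator}, \\texttt{summary} and \\texttt{discrepancy} are hypothetical placeholders standing for the problem-specific ingredients):
\\begin{verbatim}
import numpy as np

def rejection_abc(prior_draw, simulator, summary, discrepancy,
                  phi_obs, epsilon, n_samples):
    """Likelihood-free rejection sampling with a uniform kernel.

    A parameter drawn from the prior is kept whenever the discrepancy
    between its simulated summaries and the observed ones is below epsilon.
    """
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_draw()                  # propose from the prior
        phi_sim = summary(simulator(theta))   # forward simulate and compress
        if discrepancy(phi_obs, phi_sim) <= epsilon:
            accepted.append(theta)
    return np.array(accepted)
\\end{verbatim}
The accepted samples are approximately distributed according to the posterior of equation \\eqref{eq:approx_problem_Bayes}, with $L(\\boldsymbol{\\uptheta})$ replaced by the kernel-based approximation of equation \\eqref{eq:approximate_likelihood_acceptance}.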
While this approach allows inference of simulator-based statistical models with minimal assumptions, it suffers from several limitations that can make its use impossible in practice.\n\\begin{enumerate}\n\\item It rejects most of the proposed samples when $\\varepsilon$ is small, leading to a computationally inefficient algorithm.\n\\item It does not make assumptions about the shape or smoothness of the target function $L(\\boldsymbol{\\uptheta})$, hence accepted samples cannot ``share'' information in parameter space.\n\\item It uses a fixed proposal distribution (typically the prior $\\mathpzc{P}(\\boldsymbol{\\uptheta})$) and does not make use of already accepted samples to update the proposal of new points.\n\\item It aims at equal accuracy for all regions in parameter space, regardless of the values of the likelihood.\n\\end{enumerate}\n\nTo overcome these issues, the proposed approach follows closely \\citet{GutmannCorander2016}, who combine regression of the discrepancy (addressing issues 1 and 2) with Bayesian optimisation (addressing issues 3 and 4) in order to improve the computational efficiency of inference of simulator-based models. In this work, we focus on parametric approximations of the likelihood; we refer to \\citet{GutmannCorander2016} for a treatment of the non-parametric approach.\n\n\\subsection{Regression of the discrepancy}\n\\label{ssec:Regression of the discrepancy}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{GP_illustration.pdf} \n\\caption{Illustration of Gaussian process regression in one dimension, for the target test function $f: \\theta \\mapsto 2 - \\exp\\left[-(\\theta - 2)^2\\right] - \\exp\\left[-(\\theta - 6)^2\/10\\right] - 1\/ (\\theta^2 + 1)$ (dashed line). Training data are acquired (red dots); they are subject to a Gaussian observation noise with standard deviation $\\sigma_\\mathrm{n} = 0.03$. The blue line shows the mean prediction $\\mu(\\theta)$ of the Gaussian process regression, and the shaded region the corresponding $2\\sigma(\\theta)$ uncertainty. Gaussian processes allow interpolating and extrapolating predictions in regions of parameter space where training data are absent.\\label{fig:GP_illustration}}\n\\end{center}\n\\end{figure}\n\nThe standard approach to obtain a computable approximate likelihood relies on empirical averages (equations \\eqref{eq:mean_covariance_empirical} and \\eqref{eq:def_J_N}). However, such sample averages are not the only way to approximate intractable expectations. Equations \\eqref{eq:l_J_proposition_1} and \\eqref{eq:def_J} show that, up to constants and the sign, $\\tilde{\\ell}(\\boldsymbol{\\uptheta})$ can be interpreted as a regression function with the model parameters $\\boldsymbol{\\uptheta}$ (the ``predictors'') as the independent input variables and the discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}$ as the response variable. Therefore, in the present approach, we consider an approximation of the intractable expectation defining $J(\\boldsymbol{\\uptheta})$ in equation \\eqref{eq:def_J} based on a regression analysis of $\\Delta_{\\boldsymbol{\\uptheta}}$, instead of sample averages. 
Explicitly, we consider\n\\begin{equation}\n\\widehat{J}^{(\\mathrm{t})}(\\boldsymbol{\\uptheta}) \\equiv \\mathrm{E}^{(\\mathrm{t})}\\left[ \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}} \\right],\n\\label{eq:def_J_t}\n\\end{equation}\nwhere the superscript $(\\mathrm{t})$ stands for ``training'' and the expectation $\\mathrm{E}^{(\\mathrm{t})}$ is taken under the probabilistic model defined in the following.\n\nInferring $J(\\boldsymbol{\\uptheta})$ via regression requires a training data set $\\lbrace (\\boldsymbol{\\uptheta}^{(i)} , \\Delta_{\\boldsymbol{\\uptheta}}^{(i)}) \\rbrace\\vspace*{-2pt}$, where the discrepancies are computed from the simulated summary statistics $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}^{(i)}$. Building this training set requires running simulations, but does not involve an accept\/reject criterion as does likelihood-free rejection sampling (thus addressing issue 1, see section \\ref{ssec:Computational difficulties with likelihood-free rejection sampling}). A regression-based approach also allows incorporating a smoothness assumption about $J(\\boldsymbol{\\uptheta})$. In this way, samples of the training set can ``share'' the information of the computed $\\Delta_{\\boldsymbol{\\uptheta}}$ in the neighbourhood of $\\boldsymbol{\\uptheta}$ (thus addressing issue 2). This suggests that fewer simulated data are needed to reach a certain level of accuracy when learning the target function $J(\\boldsymbol{\\uptheta})$.\n\nIn this work, we rely on Gaussian process (GP) regression in order to construct a prediction for $J(\\boldsymbol{\\uptheta})$. There are several reasons why this choice is advantageous for likelihood-free inference. First, GPs are general-purpose regressors, able to deal with a large variety of functional shapes for $J(\\boldsymbol{\\uptheta})$, including potentially complex non-linear or multi-modal features. Second, GPs provide not only a prediction (the mean of the regressed function), but also the uncertainty of the regression. This is useful for actively constructing the training data via Bayesian optimisation, as we show in section \\ref{ssec:Acquisition rules}. Finally, GPs allow extrapolating the prediction into regions of the parameter space where no training points are available. These three properties are shown in figure \\ref{fig:GP_illustration} for a multi-modal test function subject to observation noise.\n\nWe now briefly review Gaussian process regression. Suppose that we have a set of $t$ training points, $(\\boldsymbol{\\Theta}, \\textbf{f}) \\equiv \\lbrace (\\boldsymbol{\\uptheta}^{(i)}, f^{(i)} = f(\\boldsymbol{\\uptheta}^{(i)})) \\rbrace$, of the function $f$ that we want to regress. We assume that $f$ is a Gaussian process with prior mean function $m(\\boldsymbol{\\uptheta})$ and covariance function $\\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}')$, also known as the kernel \\citep[see][]{RasmussenWilliams2006}. 
The joint probability distribution of the training set is therefore $\\mathpzc{P}(\\textbf{f}|\\boldsymbol{\\Theta}) \\propto \\exp\\left[ \\ell(\\textbf{f}|\\boldsymbol{\\Theta}) \\right]$, where the exponent $\\ell(\\textbf{f}|\\boldsymbol{\\Theta})$ is\n\\begin{equation}\n- \\frac{1}{2} \\sum_{i,j=1}^t \\left[f^{(i)}-m(\\boldsymbol{\\uptheta}^{(i)})\\right]^\\intercal \\kappa(\\boldsymbol{\\uptheta}^{(i)},\\boldsymbol{\\uptheta}^{(j)})^{-1} \\left[f^{(j)}-m(\\boldsymbol{\\uptheta}^{(j)})\\right] .\n\\end{equation}\nThe mean function $m(\\boldsymbol{\\uptheta})$ and the kernel $\\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}')$ define the functional shape and smoothness allowed for the prediction. Standard choices are respectively a constant and a squared exponential (the radial basis function, RBF), subject to additive Gaussian observation noise with variance $\\sigma_\\mathrm{n}^2$. Explicitly, $m(\\boldsymbol{\\uptheta}) \\equiv C$ and \n\\begin{equation}\n\\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}') \\equiv \\sigma_f^2 \\exp\\left[ -\\frac{1}{2} \\sum_p \\left( \\frac{\\theta_p - \\theta_p'}{\\lambda_p} \\right)^2 \\right] + \\sigma_\\mathrm{n}^2 \\, \\updelta_\\mathrm{K}(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}').\n\\end{equation}\nThe $\\theta_p$ and $\\theta_p'$ are the components of $\\boldsymbol{\\uptheta}$ and $\\boldsymbol{\\uptheta}'$, respectively. In the last term, $\\updelta_\\mathrm{K}(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}')$ is one if and only if $\\boldsymbol{\\uptheta} = \\boldsymbol{\\uptheta}'$ and zero otherwise. The hyperparameters are $C$, the $\\lambda_p$ (the length scales controlling the amount of correlation between points, and hence the allowed wiggliness of $f$), $\\sigma_f^2$ (the signal variance, i.e. the marginal variance of $f$ at a point $\\boldsymbol{\\uptheta}$ if the observation noise was zero), and $\\sigma_\\mathrm{n}^2$ (the observation noise). For the results of this paper, GP hyperparameters were learned from the training set using L-BFGS \\citep{L-BFGS}, a popular optimiser for machine learning, and updated every time the training set was augmented with ten samples.\n\nThe predicted value $f_\\star$ at a new point $\\boldsymbol{\\uptheta}_\\star$ can be obtained from the fact that $(\\lbrace \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star \\rbrace, \\lbrace \\textbf{f} , f_\\star \\rbrace)$ form jointly a random realisation of the Gaussian process $f$. Thus, the target pdf $\\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ can be obtained from conditioning the joint pdf $\\mathpzc{P}(\\textbf{f},f_\\star | \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ to the values of the training set $\\textbf{f}$. 
The result is \\citep[see][section 2.7]{RasmussenWilliams2006}\n\\begin{eqnarray}\n\\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star) & \\propto & \\exp\\left[ -\\frac{1}{2} \\left( \\frac{f_\\star - \\mu(\\boldsymbol{\\uptheta}_\\star)}{\\sigma(\\boldsymbol{\\uptheta}_\\star)} \\right)^2 \\right], \\label{eq:GP_posterior_predictive_distribution}\\\\\n\\mu(\\boldsymbol{\\uptheta}_\\star) & \\equiv & m(\\boldsymbol{\\uptheta}_\\star) + \\uline{\\textbf{K}}_\\star^\\intercal \\uuline{\\textbf{K}}^{-1} (\\textbf{f} - \\textbf{m}), \\label{eq:GP_mean}\\\\\n\\sigma^2(\\boldsymbol{\\uptheta}_\\star) & \\equiv & K_{\\star\\star} - \\uline{\\textbf{K}}_\\star^\\intercal \\uuline{\\textbf{K}}^{-1} \\uline{\\textbf{K}}_\\star, \\label{eq:GP_variance}\n\\end{eqnarray}\nwhere we use the definitions\n\\begin{eqnarray}\nK_{\\star\\star} & \\equiv & \\kappa(\\boldsymbol{\\uptheta}_\\star, \\boldsymbol{\\uptheta}_\\star), \\label{eq:GP_notation_def_1}\\\\\n\\textbf{m} & \\equiv & \\left(m(\\boldsymbol{\\uptheta}^{(i)})\\right)^\\intercal \\quad \\mathrm{for}~\\boldsymbol{\\uptheta}^{(i)} \\in \\boldsymbol{\\Theta}, \\label{eq:GP_notation_def_2}\\\\\n\\uline{\\textbf{K}}_\\star & \\equiv & \\left(\\kappa(\\boldsymbol{\\uptheta}_\\star, \\boldsymbol{\\uptheta}^{(i)})\\right)^\\intercal \\quad \\mathrm{for}~\\boldsymbol{\\uptheta}^{(i)} \\in \\boldsymbol{\\Theta}, \\label{eq:GP_notation_def_3}\\\\\n(\\uuline{\\textbf{K}})_{ij} & \\equiv & \\kappa(\\boldsymbol{\\uptheta}^{(i)}, \\boldsymbol{\\uptheta}^{(j)}) \\quad \\mathrm{for}~\\lbrace \\boldsymbol{\\uptheta}^{(i)}, \\boldsymbol{\\uptheta}^{(j)} \\rbrace \\in \\boldsymbol{\\Theta}^2. \\label{eq:GP_notation_def_4}\n\\end{eqnarray}\n\n\\subsection{Bayesian optimisation}\n\\label{ssec:Bayesian optimisation}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{BO_illustration_11.pdf} \n\\includegraphics[width=0.49\\textwidth]{BO_illustration_12.pdf} \\\\\n\\includegraphics[width=0.49\\textwidth]{BO_illustration_13.pdf} \n\\includegraphics[width=0.49\\textwidth]{BO_illustration_14.pdf} \n\\caption{Illustration of four consecutive steps of Bayesian optimisation to learn the test function of figure \\ref{fig:GP_illustration}. For each step, the top panel shows the training data points (red dots) and the regression (blue line and shaded region). The bottom panel shows the acquisition function (the expected improvement, solid green line) with its maximiser (dashed green line). The next acquisition point, i.e. where to run a simulation to be added to the training set, is shown in orange; it differs from the maximiser of the acquisition function by a small random number. The acquisition function used is the expected improvement, aiming at finding the minimum of $f$. Hyperparameters of the regression kernel are optimised after each acquisition. As can be observed, Bayesian optimisation implements a trade-off between exploration (evaluation of the target function where the variance is large, e.g. after 12 points) and exploitation (evaluation of the target function close to the predicted minimum, e.g. after 11, 13, and 14 points). \\label{fig:BO_illustration}}\n\\end{center}\n\\end{figure*}\n\nThe second major ingredient of the proposed approach is Bayesian optimisation, which allows the inference of the regression function $J(\\boldsymbol{\\uptheta})$ while avoiding unnecessary computations. 
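Schematically, the proposed approach interleaves Gaussian process regression of the discrepancy with the acquisition of new simulations. A minimal sketch of this loop is given below (in Python; \\texttt{simulate\\_discrepancy}, \\texttt{fit\\_gp} and \\texttt{acquisition} are hypothetical placeholders, the actual acquisition rules being discussed in section \\ref{ssec:Acquisition rules}):
\\begin{verbatim}
import numpy as np

def bolfi_loop(simulate_discrepancy, fit_gp, acquisition,
               theta_init, n_acquisitions):
    """Generic regression plus Bayesian optimisation loop.

    simulate_discrepancy(theta) runs the simulator and returns the
    discrepancy to the observed summaries; theta_init is the initial
    training set (e.g. a Sobol sequence); fit_gp returns a regressor
    with predictive mean and variance; acquisition proposes the next
    location to simulate, given the current regressor.
    """
    thetas = [np.atleast_1d(t) for t in theta_init]
    deltas = [simulate_discrepancy(t) for t in thetas]
    for _ in range(n_acquisitions):
        gp = fit_gp(np.array(thetas), np.array(deltas))  # regress the discrepancy
        theta_next = np.atleast_1d(acquisition(gp))      # propose the next simulation
        thetas.append(theta_next)
        deltas.append(simulate_discrepancy(theta_next))
    gp = fit_gp(np.array(thetas), np.array(deltas))      # final regression
    return gp, np.array(thetas), np.array(deltas)
\\end{verbatim}
The predictive mean and variance of the final regressor then yield the expectation and variance of the {\\textsc{bolfi}} posterior, as made explicit in section \\ref{ssec:Expressions for the approximate posterior}.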
Bayesian optimisation allows active construction of the training data set $\\lbrace (\\boldsymbol{\\uptheta}^{(i)} , \\Delta_{\\boldsymbol{\\uptheta}}^{(i)}) \\rbrace$, updating the proposal of new points using the regressed $\\widehat{J}^{(\\mathrm{t})}(\\boldsymbol{\\uptheta})$ (thus addressing issue 3 with likelihood-free rejection sampling, see section \\ref{ssec:Computational difficulties with likelihood-free rejection sampling}). Further, since we are mostly interested in the regions of the parameter space where the variance of the approximate posterior is large (due to its stochasticity), the acquisition rules can prioritise these regions, so as to obtain a better approximation of $J(\\boldsymbol{\\uptheta})$ there (thus addressing issue 4).\n\nBayesian optimisation is a decision-making framework under uncertainty for the automatic learning of unknown functions. It aims at gathering training data in such a manner as to evaluate the expensive target function as few times as possible, while revealing as much information as possible about it and, in particular, about the location of the optimum or optima. The method proceeds by iteratively picking predictors to be probed (i.e. simulations to be run) in a manner that trades off \\textit{exploration} (parameters for which the outcome is most uncertain) and \\textit{exploitation} (parameters which are expected to have a good outcome for the targeted application). In many contexts, Bayesian optimisation has been shown to obtain better results with fewer simulations than grid search or random search, due to its ability to reason about the interest of simulations before they are run \\citep[see][for a review]{Brochu2010}. Figure \\ref{fig:BO_illustration} illustrates Bayesian optimisation in combination with Gaussian process regression, applied to finding the minimum of the test function of figure \\ref{fig:GP_illustration}.\n\nIn the following, we give a brief overview of the elements of Bayesian optimisation used in this paper. In order to add a new point to the training data set $(\\boldsymbol{\\Theta}, \\textbf{f}) \\equiv \\lbrace (\\boldsymbol{\\uptheta}^{(i)}, f^{(i)} = f(\\boldsymbol{\\uptheta}^{(i)})) \\rbrace$, Bayesian optimisation uses an acquisition function $\\mathcal{A}(\\boldsymbol{\\uptheta})$ that estimates how useful the evaluation of the simulator at $\\boldsymbol{\\uptheta}$ will be in order to learn the target function. The acquisition function is constructed from the posterior predictive distribution of $f$ given the training set $(\\boldsymbol{\\Theta}, \\textbf{f})$, i.e. from the mean prediction $\\mu(\\boldsymbol{\\uptheta})$ and the uncertainty $\\sigma(\\boldsymbol{\\uptheta})$ of the regression analysis (equations \\eqref{eq:GP_mean} and \\eqref{eq:GP_variance}). The optimum of the acquisition function in parameter space determines the next point $\\boldsymbol{\\uptheta}_\\star \\equiv \\mathrm{argopt}_{\\boldsymbol{\\uptheta}} \\mathcal{A}(\\boldsymbol{\\uptheta})$ to be evaluated by the simulator ($\\mathrm{argopt} = \\mathrm{argmax}$ or $\\mathrm{argmin}$ depending on how the acquisition function is defined), so that the training set can be augmented with $(\\boldsymbol{\\uptheta}_\\star, f(\\boldsymbol{\\uptheta}_\\star))$. The acquisition function is a scalar function whose evaluation should be computationally cheap, so that its optimum can be found by simple search methods such as gradient descent. \n\nThe algorithm needs to be initialised with an initial training set. 
In numerical experiments, we found that building this initial set by drawing from the prior (as would typically be done in likelihood-free rejection sampling) can result in difficulties with the first iterations of Gaussian process regression. Uniformly-distributed points within the boundaries of the GP are also a poor choice, as they will result in an uneven initial sampling of the parameter space. To circumvent this issue, we build the initial training set using a low-discrepancy quasi-random Sobol sequence \\citep{Sobol1967}, which covers the parameter space more evenly.\n\n\\subsection{Expressions for the approximate posterior}\n\\label{ssec:Expressions for the approximate posterior}\n\nAs discussed in section \\ref{ssec:Regression of the discrepancy}, using $\\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}$ as the regressed quantity directly gives an estimate of $J(\\boldsymbol{\\uptheta})$ in equation \\eqref{eq:def_J}. The response variable is thus $f(\\boldsymbol{\\uptheta}) \\equiv \\Delta_{\\boldsymbol{\\uptheta}}^{\\textbf{C}_{\\boldsymbol{\\uptheta}}}$ and the regression then gives\n\\begin{equation}\n\\widehat{J}^{(\\mathrm{t})}(\\boldsymbol{\\uptheta}) = \\mu(\\boldsymbol{\\uptheta}).\n\\label{eq:J_t_equals_mu}\n\\end{equation}\n\nIn the parametric approach to likelihood approximation, this is equivalent to an approximation of $-2\\tilde{\\ell}(\\boldsymbol{\\uptheta}) = -2\\log \\widetilde{L}(\\boldsymbol{\\uptheta})$ (see equation \\eqref{eq:l_J_proposition_1}). The expectation of the (unnormalised) approximate posterior is therefore directly given as (see equation \\eqref{eq:approx_problem_Bayes})\n\\begin{equation}\n\\mathrm{E}^{(\\mathrm{t})} \\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] \\equiv \\mathpzc{P}(\\boldsymbol{\\uptheta}) \\exp\\left( -\\frac{1}{2} \\mu(\\boldsymbol{\\uptheta}) \\right),\n\\label{eq:approximate_posterior_expectation}\n\\end{equation}\nwhere $\\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\approx Z_{\\boldsymbol{\\Phi}} \\times \\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi})_{|\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}}$.\n\nThe estimate of the variance of $f(\\boldsymbol{\\uptheta})$ can also be propagated to the approximate posterior, giving\n\\begin{equation}\n\\mathrm{V}^{(\\mathrm{t})} \\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] \\equiv \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})^2}{4} \\exp\\left[ -\\mu(\\boldsymbol{\\uptheta}) \\right] \\sigma^2(\\boldsymbol{\\uptheta}) .\n\\label{eq:approximate_posterior_variance}\n\\end{equation}\nDetails of the computations can be found in appendix \\ref{sapx:Expressions for the approximate posterior}.\n\nExpressions for the {\\textsc{bolfi}} posterior in the non-parametric approach with the uniform kernel can also be derived \\citep[][lemma 3.1]{Jaervenpaeae2017}. 
As this paper focuses on the parametric approach, we refer to the literature for the former case.\n\n\\subsection{Acquisition rules}\n\\label{ssec:Acquisition rules}\n\n\\subsubsection{Expected improvement}\n\\label{sssec:Expected improvement}\n\nStandard Bayesian optimisation uses acquisition functions that estimate how useful the next evaluation of the simulator will be in order to find the minimum or minima of the target function. While several other choices are possible \\citep[see e.g.][]{Brochu2010}, in this work we discuss the acquisition function known as \\textit{expected improvement} (EI). The \\textit{improvement} is defined by $I(\\boldsymbol{\\uptheta}_\\star) = \\max\\left[\\min(\\textbf{f}) - f(\\boldsymbol{\\uptheta}_\\star), 0\\right]$, and the expected improvement is $\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star) \\equiv \\mathrm{E}^{(\\mathrm{t})}\\left[ I(\\boldsymbol{\\uptheta}_\\star) \\right]$, where the expectation is taken with respect to the random observation assuming decision $\\boldsymbol{\\uptheta}_\\star$. For a Gaussian process regressor, this evaluates to \\citep[see][section 2.3]{Brochu2010}\n\\begin{equation}\n\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star) \\equiv \\sigma(\\boldsymbol{\\uptheta}_\\star) \\left[ z\\Phi(z) + \\phi(z) \\right], \\, \\mathrm{with}~z \\equiv \\frac{\\min(\\textbf{f}) - \\mu(\\boldsymbol{\\uptheta}_\\star)}{\\sigma(\\boldsymbol{\\uptheta}_\\star)},\n\\label{eq:EI}\n\\end{equation}\nor $\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star) \\equiv 0$ if $\\sigma(\\boldsymbol{\\uptheta}_\\star)=0$, where $\\phi$ and $\\Phi$ denote respectively the pdf and the cumulative distribution function (cdf) of the unit-variance zero-mean Gaussian. The decision rule is to select the location $\\boldsymbol{\\uptheta}_\\star$ that maximises $\\mathrm{EI}(\\boldsymbol{\\uptheta}_\\star)$.\n\nThe EI criterion can be interpreted as follows: since the goal is to find the minimum of $f$, a reward equal to the improvement $\\min(\\textbf{f}) - f(\\boldsymbol{\\uptheta}_\\star)$ is received if $f(\\boldsymbol{\\uptheta}_\\star)$ is smaller than all the values observed so far, otherwise no reward is received. The first term appearing in equation \\eqref{eq:EI} is maximised when evaluating at points with high uncertainty (exploration); and, at fixed variance, the second term is maximised by evaluating at points with low mean (exploitation). The expected improvement therefore automatically captures the exploration-exploitation trade-off as a result of the Bayesian decision-theoretic treatment.\n\n\\subsubsection{Expected integrated variance}\n\\label{sssec:Expected integrated variance}\n\nAs pointed out by \\citet{Jaervenpaeae2017}, in Bayesian optimisation for approximate Bayesian computation, the goal should not be to find the minimum of $J(\\boldsymbol{\\uptheta})$, but rather to minimise the expected uncertainty in the estimate of the approximate posterior over the future evaluation of the simulator at $\\boldsymbol{\\uptheta}_\\star$. Consequently, they propose an acquisition function, known as the \\textit{expected integrated variance} (ExpIntVar or EIV in the following) that selects the next evaluation location to minimise the expected variance of the future posterior density $\\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ over the parameter space. The framework used is Bayesian decision theory. 
Formally, the loss due to our uncertain knowledge of the approximate posterior density can be defined as\n\\begin{equation}\n\\mathpzc{L}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] = \\int \\mathrm{V}^{(\\mathrm{t})}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}) \\right] \\, \\mathrm{d}\\boldsymbol{\\uptheta},\n\\end{equation}\nand the acquisition rule is to select the location $\\boldsymbol{\\uptheta}_\\star$ that minimises\n\\begin{equation}\n\\begin{split}\n& \\mathrm{EIV}(\\boldsymbol{\\uptheta}_\\star) \\equiv \\mathrm{E}^{(\\mathrm{t})} \\left[ \\mathpzc{L}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}, f_\\star, \\boldsymbol{\\uptheta}_\\star) \\right] \\right] \\\\\n& = \\int \\mathpzc{L}\\left[ \\mathpzc{P}_{\\textsc{bolfi}}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi}_\\mathrm{O}, \\textbf{f}, \\boldsymbol{\\Theta}, f_\\star, \\boldsymbol{\\uptheta}_\\star) \\right] \\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star) \\, \\mathrm{d}f_\\star\n\\end{split}\n\\end{equation}\nwith respect to $\\boldsymbol{\\uptheta}_\\star$, where we have to marginalise over the unknown simulator output $f_\\star$ using the probabilistic model $\\mathpzc{P}(f_\\star|\\textbf{f}, \\boldsymbol{\\Theta}, \\boldsymbol{\\uptheta}_\\star)$ (equations \\eqref{eq:GP_posterior_predictive_distribution}--\\eqref{eq:GP_variance}).\n\n\\citet[][proposition 3.2]{Jaervenpaeae2017} derive the expressions for the expected integrated variance for a GP model in the non-parametric approach. In appendix \\ref{apx:Derivations of the mathematical results}, we extend this work and derive the ExpIntVar acquisition function and its gradient in the parametric approach. The result is the following: under the GP model, the expected integrated variance after running the simulation model with parameter $\\boldsymbol{\\uptheta}_\\star$ is given by\n\\begin{equation}\n\\mathrm{EIV}(\\boldsymbol{\\uptheta}_\\star) = \\int \\frac{\\mathpzc{P}(\\boldsymbol{\\uptheta})^2}{4} \\exp\\left[ -\\mu(\\boldsymbol{\\uptheta}) \\right] \\left[ \\sigma^2(\\boldsymbol{\\uptheta}) - \\tau^2(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) \\right] \\, \\mathrm{d}\\boldsymbol{\\uptheta},\n\\label{eq:EIV}\n\\end{equation}\nwith\n\\begin{equation}\n\\tau^2(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) \\equiv \\dfrac{\\mathrm{cov}^2(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star)}{\\sigma^2(\\boldsymbol{\\uptheta}_\\star)},\n\\label{eq:def_tau}\n\\end{equation}\nwhere $\\mathrm{cov}(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) \\equiv \\kappa(\\boldsymbol{\\uptheta},\\boldsymbol{\\uptheta}_\\star) - \\uline{\\textbf{K}}^\\intercal \\uuline{\\textbf{K}}^{-1}\\vspace{-4pt} \\uline{\\textbf{K}}_\\star$ is the GP posterior predicted covariance between the evaluation point $\\boldsymbol{\\uptheta}$ in the integral and the candidate location for the next evaluation $\\boldsymbol{\\uptheta}_\\star$. 
Note that in addition to the notations given by equations \\eqref{eq:GP_notation_def_1}--\\eqref{eq:GP_notation_def_4}, we have introduced the vector\n\\begin{equation}\n\\uline{\\textbf{K}} \\equiv \\left(\\kappa(\\boldsymbol{\\uptheta}, \\boldsymbol{\\uptheta}^{(i)})\\right)^\\intercal \\quad \\mathrm{for}~\\boldsymbol{\\uptheta}^{(i)} \\in \\boldsymbol{\\Theta}.\n\\label{eq:GP_notation_def_5}\n\\end{equation}\n\nIt is of interest to examine when the integrand in equation \\eqref{eq:EIV} is small. As for the EI (equation \\eqref{eq:EI}), optimal values are found when the mean of the discrepancy $\\mu(\\boldsymbol{\\uptheta})$ is small or the variance $\\sigma^2(\\boldsymbol{\\uptheta})$ is large. This effect is what yields the trade-off between exploitation and exploration for the ExpIntVar acquisition rule. However, unlike in standard Bayesian optimisation strategies such as the EI, the trade-off is a non-local process (due to the integration over the parameter space), and also depends on the prior, so as to minimise the uncertainty in the posterior (and not likelihood) approximation.\n\nComputing the expected integrated variance requires integration over the parameter space. In this work, the integration is performed on a regular grid of $50$ points per dimension within the GP boundaries. In high dimension, the integral can become prohibitively expensive to compute on a grid. As discussed by \\citet{Jaervenpaeae2017}, it can then be evaluated with Monte Carlo or quasi-Monte Carlo methods such as importance sampling.\n\nIn numerical experiments, we have found that the ExpIntVar criterion (like any acquisition function for Bayesian optimisation) has some sensitivity to the initial training set. In particular, the initial set (built from a Sobol sequence or otherwise) should sample the GP domain sufficiently well, and the GP domain should encompass the prior. This ensures that the prior volume is never wider than the training data. Under this condition, like \\citet{Jaervenpaeae2017}, we have found that ExpIntVar is stable, in the sense that it produces consistent {\\textsc{bolfi}} posteriors over different realisations of the initial training data set and simulator outputs.\n\n\\subsubsection{Stochastic versus deterministic acquisition rules}\n\\label{sssec:Stochastic versus deterministic acquisition rules}\n\nThe above rules do not guarantee that the selected $\\boldsymbol{\\uptheta}_\\star$ is different from a previously acquired $\\boldsymbol{\\uptheta}^{(i)}$. \\citet[][see in particular appendix C]{GutmannCorander2016} found that this can result in a poor exploration of the parameter space, and proposed to add a stochastic element to the decision rule in order to avoid getting stuck at one point. In some experiments, we followed this prescription by adding an ``acquisition noise'' of strength $\\sigma_\\mathrm{a}^p$ to each component of the optimiser of the acquisition function. More precisely, $\\boldsymbol{\\uptheta}_\\star$ is sampled from the Gaussian distribution $\\mathpzc{G}(\\boldsymbol{\\uptheta}_\\mathrm{opt}, \\textbf{D})$, where $\\boldsymbol{\\uptheta}_\\mathrm{opt} \\equiv \\mathrm{argopt}_{\\boldsymbol{\\uptheta}} \\mathcal{A}(\\boldsymbol{\\uptheta})$ and $\\textbf{D}$ is the diagonal covariance matrix of components $(\\sigma_\\mathrm{a}^p)^2$. 
The $\\sigma_\\mathrm{a}^p$ are chosen to be of order $\\lambda_p\/10$.\n\nFor a more extensive discussion and comparison of various stochastic and deterministic acquisition rules, the reader is referred to \\citet{Jaervenpaeae2017}.\n\n\\section{Applications}\n\\label{sec:Applications}\n\nIn this section, we show the application of {\\textsc{bolfi}} to several case studies. In particular, we discuss the simulator and the computable approximation of the likelihood to be used, and compare {\\textsc{bolfi}} to likelihood-free rejection sampling in terms of computational efficiency. In all cases, we show that {\\textsc{bolfi}} reduces the number of required simulations by several orders of magnitude.\n\nIn section \\ref{ssec:Summarising Gaussian signals}, we discuss the toy problem of summarising Gaussian signals (i.e. inferring the unknown mean and\/or variance of Gaussian-distributed data). In section \\ref{ssec:Supernova cosmology}, we show the first application of {\\textsc{bolfi}} to a real cosmological problem using actual observational data: the inference of cosmological parameters from supernovae data. For each test case, we refer to the corresponding section in the appendices for the details of the data model and inference assumptions.\n\n\\subsection{Summarising Gaussian signals}\n\\label{ssec:Summarising Gaussian signals}\n\nA simple toy model can be constructed from the general problem of summarising Gaussian signals with unknown mean, or with unknown mean and variance. This example allows for the comparison of {\\textsc{bolfi}} and likelihood-free rejection sampling to the true posterior conditional on the full data, which is known analytically. All the details of this model are given in appendix \\ref{apx:Summarising Gaussian signals}.\n\n\\subsubsection{Unknown mean, known variance}\n\\label{sssec:Unknown mean, known variance}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{Gaussian_mean_illustration.pdf} \n\\caption{Illustration of {\\textsc{bolfi}} for a one-dimensional problem, the inference of the unknown mean $\\mu$ of a Gaussian. \\textit{Lower panel}. The discrepancy $\\Delta_\\mu$ (i.e. twice the negative log-likelihood) is a stochastic process due to the limited computational resources. Its mean and the $2\\sigma$ credible interval are shown in red. The dashed red line shows one realisation of the stochastic process as a function of $\\mu$. Simulations at different $\\mu$ are shown as black dots. {\\textsc{bolfi}} builds a probabilistic model for the discrepancy, the mean and $2\\sigma$ credible interval of which are shown in blue. \\textit{Upper panel}. The expectation of the (rescaled) {\\textsc{bolfi}} posterior and its $2\\sigma$ credible interval are shown in comparison to the exact posterior for the problem. The dashed red line shows the posterior obtained from the corresponding realisation of the stochastic process of the lower panel. \\label{fig:Gaussian_mean_illustration}}\n\\end{center}\n\\end{figure}\n\nWe first consider the problem, already discussed by \\citet{GutmannCorander2016}, where the data $\\textbf{d}$ are a vector of $n$ components drawn from a Gaussian with unknown mean $\\mu$ and known variance $\\sigma^2_\\mathrm{true}$. The empirical mean $\\Phi^1$ is a sufficient summary statistic for the problem of inferring $\\mu$. The distribution of simulated $\\Phi^1_\\mu$ takes a simple form, $\\Phi^1_\\mu \\sim \\mathpzc{G}\\left( \\mu, \\sigma^2_\\mathrm{true}\/n \\right)$. 
Using here the true variance, the discrepancy and synthetic likelihood are\n\\begin{equation}\n\\Delta^1_\\mu = -2 \\hat{\\ell}^N_1(\\mu) = \\log \\left(\\frac{2\\pi \\sigma^2_\\mathrm{true}}{n} \\right) + n\\frac{(\\Phi^1_\\mathrm{O}-\\hat{\\mu}^1_\\mu)^2}{\\sigma^2_\\mathrm{true}},\n\\end{equation}\nwhere $\\hat{\\mu}^1_\\mu$ is an average of $N$ realisations of $\\Phi^1_\\mu$. In figure \\ref{fig:Gaussian_mean_illustration} (lower panel), the black dots show simulations of $\\Delta^1_\\mu$ for different values of $\\mu$. We have $\\hat{\\mu}^1_\\mu \\sim \\mathpzc{G}\\left( \\mu, \\sigma^2_\\mathrm{true}\/(Nn) \\right)$, therefore the stochastic process defining the discrepancy can be written\n\\begin{equation}\n\\Delta^1_\\mu = \\log \\left(\\frac{2\\pi \\sigma^2_\\mathrm{true}}{n} \\right) + n\\frac{(\\Phi^1_\\mathrm{O}- \\mu -g )^2}{\\sigma^2_\\mathrm{true}}, \\quad g \\sim \\mathpzc{G}\\left(0, \\sigma^2_g \\right),\n\\end{equation}\nwhere $\\sigma^2_g \\equiv \\sigma^2_\\mathrm{true}\/(Nn)$. Each realisation of $g$ gives a different mapping $\\mu \\mapsto \\Delta^1_\\mu$. In figure \\ref{fig:Gaussian_mean_illustration}, we show one such realisation in the lower panel, and the corresponding approximate posterior in the upper panel. Using the percent point function (inverse of the cdf) of the Gaussian $\\mathpzc{G}\\left(0, \\sigma^2_g \\right)$, we also show in red the mean and $2\\sigma$ credible interval of the true stochastic process.\n\nThe GP regression using the simulations shown as the training set is represented in blue in the lower panel of figure \\ref{fig:Gaussian_mean_illustration}. The corresponding {\\textsc{bolfi}} posterior and its variance, defined by equations \\eqref{eq:approximate_posterior_expectation} and \\eqref{eq:approximate_posterior_variance}, are shown in purple in the upper panel. The uncertainty in the estimate of the posterior (shaded purple region) is due to the limited number of available simulations (and not to the noisiness of individual training points). It is the expectation of this uncertainty under the next evaluation of the simulator which is minimised in parameter space by the ExpIntVar acquisition rule.\n\n\\subsubsection{Unknown mean and variance}\n\\label{sssec:Unknown mean and variance}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Gaussian_mean_variance.pdf} \n\\caption{Prior and posterior for the joint inference of the mean and variance of Gaussian signals. The prior and exact posterior (from the analytic solution) are Gaussian-inverse-Gamma distributed and shown in blue and orange, respectively. In the left panel, the approximate rejection-sampling posterior, based on $5,000$ samples accepted out of $\\sim 350,000$ simulations, is shown in green. It loosely encloses the exact posterior. In the right panel, the approximate {\\textsc{bolfi}} posterior, based on $2,500$ simulations only, is shown in red. It is a much finer approximation of the exact posterior. For all distributions, the $1\\sigma$, $2\\sigma$ and $3\\sigma$ contours are shown.\\label{fig:Gaussian_mean_variance}}\n\\end{center}\n\\end{figure*}\n\nWe now consider the problem where the full data set $\\textbf{d}$ is a vector of $n$ components drawn from a Gaussian with unknown mean $\\mu$ and unknown variance $\\sigma^2$. The aim is the two-dimensional inference of $\\boldsymbol{\\uptheta} \\equiv (\\mu, \\sigma^2)$. 
Evidently, the true likelihood $\\mathcal{L}(\\mu, \\sigma^2)$ for this problem is the Gaussian characterised by $(\\mu, \\sigma^2)$. The Gaussian-inverse-Gamma distribution is the conjugate prior for this likelihood. It is described by four parameters. Adopting a Gaussian-inverse-Gamma prior characterised by $(\\alpha, \\beta, \\eta, \\lambda)$ yields a Gaussian-inverse-Gamma posterior characterised by $(\\alpha', \\beta', \\eta', \\lambda')$ given by equations \\eqref{eq:Gaussian_analytic_solution_alpha}--\\eqref{eq:Gaussian_analytic_solution_lambda}. This is the analytic solution to which we compare our approximate results.\n\nFor the numerical approach, we forward model the problem using a simulator that draws from the prior, simulates $N = 10$ realisations of the Gaussian signal, and compresses them to two summary statistics, the empirical mean and variance, respectively $\\Phi^1$ and $\\Phi^2$. The graphical probabilistic model is given in figure \\ref{fig:BHM_Gaussian_model}. It is a noise-free simulator without latent variables (of the type given by figure \\ref{fig:BHM_exact}, right) completed by a deterministic compression of the full data. Note that the vector $\\boldsymbol{\\Phi} \\equiv (\\Phi^1 , \\Phi^2)$ is a sufficient statistic for the inference of $(\\mu, \\sigma^2)$. To perform likelihood-free inference, we also need a computable approximation $\\widehat{L}^N(\\mu, \\sigma^2)$ of the true likelihood. We derive such an approximation in section \\ref{sapx:Derivation of the Gaussian-Gamma synthetic likelihood for likelihood-free inference} using a parametric approach, under the assumptions (exactly verified in this example) that $\\Phi^1$ is Gaussian-distributed and $\\Phi^2$ is Gamma-distributed. We name it the Gaussian-Gamma synthetic likelihood.\n\nThe posterior obtained from likelihood-free rejection sampling is shown in green in figure \\ref{fig:Gaussian_mean_variance} (left) in comparison to the prior (in blue) and the analytic posterior (in orange). It was obtained from $5,000$ accepted samples using a threshold of $\\varepsilon = 4$ on $-2\\hat{\\ell}^N$. The entire run required $\\sim 350,000$ forward simulations in total, the vast majority of which have been rejected. The rejection-sampling posterior is a fair approximation to the true posterior, unbiased but broader, as expected from a rejection-sampling method. \n\nFor comparison, the posterior obtained via {\\textsc{bolfi}} is shown in red in figure \\ref{fig:Gaussian_mean_variance} (right). {\\textsc{bolfi}} was initialised using a Sobol sequence of $20$ members to compute the original surrogate surface, and Bayesian optimisation with the ExpIntVar acquisition function and acquisition noise was run to acquire $230$ more samples. As can be observed, {\\textsc{bolfi}} allows very precise likelihood-free inference; in particular, the $1\\sigma$, $2\\sigma$ and $3\\sigma$ contours (the latter corresponding to the $0.27\\%$ least likely events) of the analytic posterior are reconstructed almost perfectly. The overall cost to get these results is only $2,500$ simulations with {\\textsc{bolfi}} versus $\\sim 350,000$ with rejection sampling (for a poorer approximation of the analytic posterior), which corresponds to a reduction by $2$ orders of magnitude. 
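\n\nTo make the workflow of this example concrete, the following minimal sketch (for a one-dimensional parameter, unlike the two-dimensional case above) illustrates how a GP surrogate fitted to discrepancies can be combined with a grid evaluation of the ExpIntVar criterion of equation \\eqref{eq:EIV}. It is not the implementation used in this work; all names, kernel choices and the placeholder discrepancies are purely illustrative.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import RBF, WhiteKernel\n\ndef expintvar(gp, prior_pdf, theta_grid, theta_star):\n    # expected integrated variance after a hypothetical acquisition at\n    # theta_star, following equations (eq:EIV) and (eq:def_tau)\n    X = np.append(theta_grid, theta_star).reshape(-1, 1)\n    mean, cov = gp.predict(X, return_cov=True)   # joint GP posterior\n    mu, var = mean[:-1], np.diag(cov)[:-1]\n    tau2 = cov[:-1, -1]**2 \/ cov[-1, -1]\n    integrand = 0.25 * prior_pdf(theta_grid)**2 * np.exp(-mu) * (var - tau2)\n    return np.trapz(integrand, theta_grid)\n\nrng = np.random.default_rng(0)\ntheta_tr = np.linspace(-4.0, 4.0, 20)                     # initial design\ndelta_tr = (theta_tr - 1.0)**2 + rng.normal(0, 0.3, 20)   # placeholder discrepancies\ngp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1))\ngp.fit(theta_tr.reshape(-1, 1), delta_tr)\ngrid = np.linspace(-4.0, 4.0, 200)\nprior_pdf = lambda t: np.exp(-0.125 * t**2) * 0.5 \/ np.sqrt(2.0 * np.pi)\nacq = [expintvar(gp, prior_pdf, grid, g) for g in grid]\ntheta_next = grid[np.argmin(acq)]                         # next simulation location\n\\end{verbatim}\nIn a full loop, the simulator would be run at \\texttt{theta\\_next}, the new discrepancy appended to the training set, and the GP refitted before the next acquisition.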
\n\n\\subsection{Supernova cosmology}\n\\label{ssec:Supernova cosmology}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Supernovae.pdf} \n\\caption{Prior and posterior distributions for the joint inference of the matter density of the Universe, $\\Omega_\\mathrm{m}$, and the dark energy equation of state, $w$, from the JLA supernovae data set. The prior and exact posterior distribution (obtained from a long MCMC run requiring $\\sim 6 \\times 10^6$ data model evaluations) are shown in blue and orange, respectively. In the left panel, the approximate rejection-sampling posterior, based on $5,000$ samples accepted out of $\\sim 450,000$ simulations, is shown in green. In the right panel, the approximate {\\textsc{bolfi}} posterior, based on $6,000$ simulations only, is shown in red. For all distributions, the $1\\sigma$, $2\\sigma$ and $3\\sigma$ contours are shown. \\label{fig:Supernovae}}\n\\end{center}\n\\end{figure*}\n\nIn this section, we present the first application of {\\textsc{bolfi}} to a cosmological inference problem. Specifically, we perform an analysis of the Joint Lightcurve Analysis (JLA) data set, consisting of the B-band peak apparent magnitudes $m_\\mathrm{B}$ of $740$ type Ia supernovae (SN Ia) with redshift $z$ between $0.01$ and $1.3$ \\citep{Betoule2014}: $\\textbf{d}_\\mathrm{O} \\equiv \\left( m_{\\mathrm{B},\\mathrm{O}}^k \\right)$ for $k \\in \\llbracket 1,740 \\rrbracket$. The details of the data model and inference assumptions are given in appendix \\ref{apx:Supernova cosmology}. For the purpose of validating {\\textsc{bolfi}}, we assume a Gaussian synthetic likelihood (see section \\ref{sapx:Discrepancy}), allowing us to demonstrate the fidelity of the {\\textsc{bolfi}} posterior against the exact likelihood-based solution obtained via Markov Chain Monte Carlo (MCMC). This analysis can also be compared to the proof of concept for another likelihood-free method, {\\textsc{delfi}} \\citep[Density Estimation for Likelihood-Free Inference,][]{Papamakarios2016,Alsing2018}, as the assumptions are very similar.\n\nAs described in appendix \\ref{apx:Supernova cosmology}, the full problem is six dimensional; however, in this work, we focus on the inference of the two physically relevant quantities, namely $\\Omega_\\mathrm{m}$ (the matter density of the Universe) and $w$ (the equation of state of dark energy, assumed constant), and marginalise over the other four (nuisance) parameters ($\\alpha$, $\\beta$, $M_\\mathrm{B}$, $\\delta\\hspace{-0.1em}M$). We assume a Gaussian prior,\n\\begin{equation}\n\\begin{pmatrix}\n\\Omega_\\mathrm{m} \\\\\nw\n\\end{pmatrix} \\sim\n\\mathpzc{G}\\left[\n\\begin{pmatrix}\n0.3 \\\\\n-0.75\n\\end{pmatrix},\n\\begin{pmatrix}\n0.4^2 & -0.24 \\\\\n-0.24 & 0.75^2\n\\end{pmatrix}\n\\right],\n\\label{eq:SNe_prior_Omegam_w}\n\\end{equation}\nwhich is roughly aligned with the direction of the well-known $\\Omega_\\mathrm{m}-w$ degeneracy. 
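For reference, the off-diagonal element of this prior covariance corresponds to a correlation coefficient of $-0.24\/(0.4 \\times 0.75) = -0.8$ between $\\Omega_\\mathrm{m}$ and $w$. Drawing from this prior is straightforward, e.g.\n\\begin{verbatim}\nimport numpy as np\nmean = np.array([0.3, -0.75])\ncov = np.array([[0.4**2, -0.24], [-0.24, 0.75**2]])\nomega_m, w = np.random.default_rng().multivariate_normal(mean, cov)\n\\end{verbatim}\n(a minimal illustration only; it is not part of the actual analysis pipeline). 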
We generated $10^6$ samples (out of $\\sim 6\\times 10^6$ data model evaluations) of the posterior for the exact six-dimensional Bayesian problem via MCMC \\citep[performed using the \\textsc{emcee} code,][]{Foreman-Mackey2013}, ensuring sufficient convergence to characterise the $3\\sigma$ contours of the distribution.\\footnote{The final Gelman-Rubin statistic \\citep{Gelman1992} was $R -1 \\leq 5 \\times 10^{-4}$ for each of the six parameters.} The prior and the exact posterior are shown in blue and orange, respectively, in figure \\ref{fig:Supernovae}.\n\nFor likelihood-free inference, the simulator takes as input $\\Omega_\\mathrm{m}$ and $w$ and simulates $N$ realisations of the magnitudes $m_\\mathrm{B}$ of the 740 supernovae at their redshifts. Consistent with the Gaussian likelihood used in the MCMC analysis, we assume a Gaussian synthetic likelihood with a fixed covariance matrix $\\textbf{C}$. The observed data $\\textbf{d}_\\mathrm{O}$ and the covariance matrix $\\textbf{C}$ are shown in figure \\ref{fig:JLA_Hubble_correlation}. \n\nThe approximate posterior obtained from likelihood-free rejection sampling is shown in green in figure \\ref{fig:Supernovae}. It was obtained from $5,000$ accepted samples using a (conservative) threshold of $\\varepsilon = 650$ on $\\Delta_{(\\Omega_\\mathrm{m},w)}$, chosen so that the acceptance ratio was not below $0.01$. The entire run required $\\sim 450,000$ simulations in total. The approximate posterior obtained via {\\textsc{bolfi}} is shown in red in figure \\ref{fig:Supernovae}. {\\textsc{bolfi}} was initialised with a Sobol sequence of $20$ samples, and $100$ acquisitions were performed according to the ExpIntVar criterion, without acquisition noise. The {\\textsc{bolfi}} posterior is a much finer approximation to the true posterior than the one obtained from likelihood-free rejection sampling. It is remarkable that only $100$ acquisitions are enough to learn the non-trivial banana shape of the posterior. Only the $3\\sigma$ contour \\citep[which is usually not shown in cosmology papers, e.g.][]{Betoule2014} notably deviates from the MCMC posterior. This is due to the fact that we used one realisation of the stochastic process defining $\\Delta_{(\\Omega_\\mathrm{m},w)}$ and only $N=50$ realisations per $(\\Omega_\\mathrm{m},w)$; the marginalisation over the four nuisance parameters is therefore partial, yielding slightly smaller credible contours. However, a better approximation could be obtained straightforwardly, if desired, by investing more computational resources (increasing $N$), without requiring more acquisitions. \n\nAs we used $N=50$, the total cost for {\\textsc{bolfi}} is $6,000$ simulations. This is a reduction by $\\sim 2$ orders of magnitude with respect to likelihood-free rejection sampling ($\\sim 450,000$ simulations) and $3$ orders of magnitude with respect to MCMC sampling of the exact posterior ($6 \\times 10^6$ simulations). It is also interesting to note that our {\\textsc{bolfi}} analysis required a factor of $\\sim 3$ fewer simulations than the recently introduced \\textsc{delfi}\\ procedure \\citep{Alsing2018}, which used $20,000$ simulations drawn from the prior for the analysis of the JLA.\\footnote{A notable difference is that {\\textsc{delfi}} allowed the authors to perform the joint inference of the six parameters of the problem, whereas we only get the distribution of $\\Omega_\\mathrm{m}$ and $w$. 
However, since these are the only two physically interesting parameters, inference of the nuisance parameters is not deemed crucial for this example.}\n\n\\section{Discussion}\n\\label{sec:Discussion}\n\n\\subsection{Benefits and limitations of the proposed approach for cosmological inferences}\n\\label{ssec:Benefits and limitations of the proposed approach for cosmological inferences}\n\nAs noted in the introduction, likelihood-free rejection sampling, when at all viable, is extremely costly in terms of the number of required simulations. In contrast, the {\\textsc{bolfi}} approach relies on a GP probabilistic model for the discrepancy, and therefore allows the incorporation of a smoothness assumption about the approximate likelihood $L(\\boldsymbol{\\uptheta})$. The smoothness assumption allows simulations in the training set to ``share'' information about their value of $\\Delta_{\\boldsymbol{\\uptheta}}$ in the neighbourhood of $\\boldsymbol{\\uptheta}$, which suggests that fewer simulations are needed to reach a certain level of accuracy. Indeed, the number of simulations required is typically reduced by $2$ to $3$ orders of magnitude, for a better final approximation of the posterior, as demonstrated by our tests in section \\ref{sec:Applications} and in the statistical literature \\citep[see][]{GutmannCorander2016}. \n\nA second benefit of {\\textsc{bolfi}} is that it actively acquires training data through Bayesian optimisation. The trade-off between computational cost and statistical performance is still present, but in a modified form: the trade-off parameter is the size of the training set used in the regression. Within the training set, the user is free to choose which areas of the parameter space should be prioritised, so as to approximate the regression function more accurately there. In contrast, in ABC strategies that rely on drawing from a fixed proposal distribution (often the prior), or variants such as \\textsc{pmc}-\\textsc{abc}, a fixed computational cost needs to be paid per value of $\\boldsymbol{\\uptheta}$ regardless of the value of $\\Delta_{\\boldsymbol{\\uptheta}}$. \n\nFinally, by focusing on parametric approximations to the exact likelihood, the approach proposed in this work is totally ``$\\varepsilon$-free'', meaning that no threshold (which is often regarded as an unappealing \\textit{ad hoc} element) is required. Like likelihood-based techniques, the parametric version of {\\textsc{bolfi}} has the drawback that assuming a wrong form for the synthetic likelihood or miscalculating values of its parameters (such as the covariance matrix) can potentially bias the approximate posterior and\/or lead to an underestimation of credible regions. Nevertheless, massive data compression procedures can make the assumptions going into the choice of a Gaussian synthetic likelihood (almost) true by construction (see section \\ref{sssec:Data compression}).\n\nOf course, regressing the discrepancy and optimising the acquisition function are not free of computational cost. However, the run-time for realistic cosmological simulation models can be hours or days. In comparison, the computational overhead introduced by {\\textsc{bolfi}} is negligible.\n\nLikelihood-free inference should also be compared to existing likelihood-based techniques for cosmology such as Gibbs sampling or Hamiltonian Monte Carlo (e.g. 
\\citealp{Wandelt2004,Eriksen2004} for the cosmic microwave background; \\citealp{Jasche2010b,Jasche2015,Jasche2015BORGSDSS} for galaxy clustering; \\citealp{Alsing2016} for weak lensing). The principal difference between these techniques and {\\textsc{bolfi}} lies in its likelihood-free nature. Likelihood-free inference has particular appeal for cosmological data analysis, since encoding complex physical phenomena and realistic observational effects into forward simulations is much easier than designing an approximate likelihood which incorporates these effects and solving the inverse problem. While the numerical complexity of likelihood-based techniques typically requires approximating complex data models in order to access the required products (conditionals or gradients of the pdfs) and to allow for sufficiently fast execution speeds, {\\textsc{bolfi}} performs inference from full-scale black-box data models. In the future, such an approach is expected to allow previously infeasible analyses, relying on a much more precise modelling of cosmological data, including in particular the complicated systematics they experience. However, while the physics and instruments will be more accurately modelled, the statistical approximation introduced with respect to likelihood-based techniques should be kept in mind.\n\nOther key aspects of {\\textsc{bolfi}} for cosmological data analysis are the arbitrary choice of the statistical summaries and the easy joint treatment of different data sets. Indeed, as the data compression from $\\textbf{d}$ to $\\boldsymbol{\\Phi}$ is included in the simulator (see section \\ref{ssec:Approximate Bayesian computation}), summary statistics do not need to be quantities that can be physically modelled (such as the power spectrum) and can be chosen to be robust to model misspecification. For example, for the microwave sky, the summaries could be the cross-spectra between different frequency maps; and for imaging surveys, the cross-correlation between different bands. Furthermore, joint analyses of correlated data sets, which are usually challenging in likelihood-based approaches (as they require a good model for the joint likelihood), can be performed straightforwardly in a likelihood-free approach. \n\nImportantly, as a general inference technique, {\\textsc{bolfi}} can be embedded into larger probabilistic schemes such as Gibbs or Hamiltonian-within-Gibbs samplers. Indeed, as posterior predictive distributions for conditionals and gradients of GPs are analytically tractable, it is easy to obtain samples of the {\\textsc{bolfi}} approximate posterior for use in larger models. {\\textsc{bolfi}} can therefore allow parts of a larger Bayesian hierarchical model to be treated as black boxes, without compromising the tractability of the entire model. \n\n\\subsection{Possible extensions}\n\\label{ssec:Possible extensions}\n\n\\subsubsection{High-dimensional inference}\n\\label{sssec:High-dimensional inference}\n\nIn this proof-of-concept paper, we focused on two-dimensional problems. Likelihood-free inference is in general very difficult when the dimensionality of the parameter space is large, due to the curse of dimensionality, which makes the volume to be explored grow exponentially with $\\mathrm{dim}~\\boldsymbol{\\uptheta}$. In {\\textsc{bolfi}}, this difficulty manifests itself in the form of a hard regression problem which needs to be solved. 
The areas in the parameter space where the discrepancy is small tend to be narrow in high dimension; therefore, discovering these areas becomes more challenging as the dimension increases. The optimisation of GP kernel parameters, which control the shapes of allowed features, also becomes more difficult. Furthermore, finding the global optimum of the acquisition function becomes more demanding (especially with the ones designed for ABC such as ExpIntVar, which have a high degree of structure -- see figure \\ref{fig:Supernovae_acquisition}, bottom right panel).\n\nNevertheless, \\citet{Jaervenpaeae2017} showed on a toy simulation model (a Gaussian) that up to ten-dimensional inference is possible with {\\textsc{bolfi}}. As typical cosmological models do not include more than ten free physical parameters, we do not expect this limitation to be a hindrance. Any additional nuisance parameter or latent variable used internally by the simulator (such as $\\alpha$, $\\beta$, $M_\\mathrm{B}$, $\\delta\\hspace{-0.1em}M$ in supernova cosmology, see section \\ref{ssec:Supernova cosmology}) can be automatically marginalised over, by using $N$ realisations per $\\boldsymbol{\\uptheta}$. Recent advances in high-dimensional implementation of the synthetic likelihood \\citep{Ong2017} and high-dimensional Bayesian optimisation \\citep[e.g.][]{Wang2013:BOH:2540128.2540383,Kandasamy2015} could also be exploited. In future work, we will address the problem of high-dimensional likelihood-free inference in a cosmological context.\n\n\\subsubsection{Scalability with the number of acquisitions and probabilistic model for the discrepancy}\n\\label{sssec:Scalability with the number of acquisitions and probabilistic model for the discrepancy}\n\nIn addition to the fundamental issues with high-dimensional likelihood-free inference described in the previous section, practical difficulties can be encountered.\n\nGaussian process regression requires the inversion of a matrix $\\uuline{\\textbf{K}}\\vspace{-4pt}$ of size $t \\times t$, where $t$ is the size of the training set. The complexity is $\\mathcal{O}(t^3)$, which limits the size of the training set to a few thousand. Improving GPs with respect to this inversion is still a subject of active research \\citep[see][chapter 8]{RasmussenWilliams2006}. For example, ``sparse'' Gaussian process regression reduces the complexity by introducing auxiliary ``inducing variables''. Techniques inspired by the solution to the Wiener filtering problem in cosmology, such as preconditioned conjugate gradient or messenger field algorithms, could also be used \\citep{Elsner2013,KodiRamanah2017,Papez2018}. Another strategy would be to divide the regression problem spatially into several patches with a lower number of training points \\citep{Park2017}. Such approaches are possible extensions of the presented method.\n\nIn the GP probabilistic model employed to model the discrepancy, the variance depends only on the training locations, not on the obtained values (see equation \\eqref{eq:GP_variance}). Furthermore, a stationary kernel is assumed. However, depending on the simulator, the discrepancy can show heteroscedasticity (i.e. its variance can depend on $\\boldsymbol{\\uptheta}$ -- see e.g. figure \\ref{fig:Gaussian_mean_illustration}, bottom panel). 
Such cases could be handled by non-stationary GP kernels or different probabilistic models for the discrepancy, allowing a heteroscedastic regression.\n\n\\subsubsection{Acquisition rules}\n\\label{sssec:Acquisition rules}\n\nAs shown in our examples, attention should be given to the selection of an efficient acquisition rule. Although standard Bayesian optimisation strategies such as the EI are reasonably effective, they are usually too greedy, focusing nearly all the sampling effort near the estimated minimum of the discrepancy and gathering too little information about other regions in the domain (see figure \\ref{fig:Supernovae_acquisition}, bottom left panel). This implies that, unless the acquisition noise is high, the tails of the posterior will not be as well approximated as the modal areas. In contrast, the ExpIntVar acquisition rule, derived in this work for the parametric approach, addresses the inefficient use of resources in likelihood-free rejection sampling by directly targeting the regions of the parameter space where improvement in the estimation accuracy of the approximate posterior is needed most. In our experiments, ExpIntVar seems to correct -- at least partially -- for the well-known effect in Bayesian optimisation of overexploration of the domain boundaries, which becomes more problematic in high dimension.\n\nAcquisition strategies examined so far in the literature \\citep[see][for a comparative study]{Jaervenpaeae2017} have focused on single acquisitions and are all ``myopic'', in the sense that they reason only about the expected utility of the next acquisition, and the number of simulations left in a limited budget is not taken into account. Improvements of acquisition rules enabling batch acquisitions and non-myopic reasoning are left to future extensions of {\\textsc{bolfi}}.\n\n\\subsubsection{Data compression}\n\\label{sssec:Data compression}\n\nIn addition to the problem of the curse of dimensionality in parameter space, discussed in section \\ref{sssec:High-dimensional inference}, likelihood-free inference usually suffers from difficulties in measuring the (mis)match between simulations and observations if the data space also has high dimension. As discussed in section \\ref{ssec:Approximate Bayesian computation}, simulator-based models include a data compression step. The comparison in data space can be made more easily if $\\mathrm{dim}~\\boldsymbol{\\Phi}$ is reduced. In future work, we will therefore aim at combining {\\textsc{bolfi}} with massive and (close to) optimal data compression strategies. These include \\textsc{moped} \\citep{Heavens2000}, the score function \\citep{AlsingWandelt2018}, or information-maximising neural networks \\citep{Charnock2018}. Using such efficient data compression techniques, the number of simulations required for inference with {\\textsc{bolfi}} will be reduced even more, and the number of parameters treated could be increased.\n\nParametric approximations to the exact likelihood depend on quantities that have to be estimated using the simulator (typically for the Gaussian synthetic likelihood, the inverse covariance matrix of the summaries). Unlike in supernova cosmology, where the covariance matrix is easily obtained, in many cases it is prohibitively expensive to run enough simulations to estimate the required quantities, especially when they vary with the model parameters. 
In this context, massive data compression offers a way forward, enormously reducing the number of required simulations and making the analysis feasible when otherwise it might be essentially impossible \\citep{Heavens2017,Gualdi2018}.\n\nAn additional advantage of several data compression strategies is that they support the choice of a Gaussian synthetic likelihood. Indeed, the central limit theorem (for \\textsc{moped}) or the form of the network's reward function (for information-maximising neural networks) assists in giving the compressed data a near-Gaussian distribution. Furthermore, testing the Gaussian assumption for the synthetic likelihood will be far easier in a smaller number of dimensions than in the original high-dimensional data space.\n\n\\subsection{Parallelisation and computational efficiency}\n\\label{ssec:Parallelisation and computational efficiency}\n\nWhile MCMC sampling has to be done sequentially, {\\textsc{bolfi}} lends itself to more parallelisation. In an efficient strategy, a master process performs the regression and decides on acquisition locations, then dispatches simulations to be run by different workers. In this way, many simulations can be run simultaneously in parallel, or even on different machines. This allows fast application of the method and makes it particularly suitable for grid computing. Extensions of the probabilistic model and of the acquisition rules, discussed in sections \\ref{sssec:Scalability with the number of acquisitions and probabilistic model for the discrepancy} and \\ref{sssec:Acquisition rules}, would open the possibility of doing asynchronous acquisitions. Different workers would then work completely independently and decide on their acquisitions locally, while just sharing a pool of simulations to update their beliefs given all the evidence available.\n\nWhile the construction of the training set depends on the observed data $\\boldsymbol{\\Phi}_\\mathrm{O}$ (through the acquisition function), simulations can nevertheless be reused as long as summaries $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ are saved. This means that if one acquires new data $\\boldsymbol{\\Phi}_\\mathrm{O}'$, the existing $\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}$ (or a subset of them) can be used to compute the new discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}(\\boldsymbol{\\Phi}_{\\boldsymbol{\\uptheta}}, \\boldsymbol{\\Phi}_\\mathrm{O}')$. Building an initial training set in this fashion can massively speed up the inference of $\\mathpzc{P}(\\boldsymbol{\\uptheta}|\\boldsymbol{\\Phi})_{\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}'}$, whereas likelihood-based techniques would require a new MCMC.\n\n\\subsection{Comparison to previous work}\n\\label{ssec:Comparison to previous work}\n\nAs discussed in the introduction, likelihood-free rejection sampling is not a viable strategy for various problems that {\\textsc{bolfi}} can tackle. In recent work, another algorithm for scalable likelihood-free inference in cosmology \\citep[{\\textsc{delfi}},][]{Papamakarios2016,Alsing2018} was introduced. The approach relies on estimating the joint probability $\\mathpzc{P}(\\boldsymbol{\\uptheta},\\boldsymbol{\\Phi})$ via density estimation. This idea also relates to the work of \\citet{Hahn2018}, who fit the sampling distribution of summaries $\\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta})$ using Gaussian mixture density estimation or independent component analysis, before using it for parameter estimation. 
This section discusses the principal similarities and differences.\n\nThe main difference between {\\textsc{bolfi}} and {\\textsc{delfi}} is the data acquisition. Training data are actively acquired in {\\textsc{bolfi}}, contrary to {\\textsc{delfi}} which, in the simplest scheme, draws from the prior. The reduction in the number of simulations for the inference of cosmological parameters (see section \\ref{ssec:Supernova cosmology}) can be interpreted as the effect of the Bayesian optimisation procedure in combination with the ExpIntVar acquisition function. Using a purposefully constructed surrogate surface instead of a fixed proposal distribution, {\\textsc{bolfi}} focuses the simulation effort to reveal as much information as possible about the target posterior. In particular, its ability to reason about the quality of simulations before they are run is an essential element. Acquisition via Bayesian optimisation almost certainly remains more efficient than even the \\textsc{pmc} version of {\\textsc{delfi}}, which learns a better proposal distribution but still chooses parameters randomly. In future cosmological applications with simulators that are expensive and\/or have a large latent space, an active data acquisition procedure could be crucial in order to provide a good model for the noisy approximate likelihood in the interesting regions of parameter space, and to reduce the computational cost. This comes at the expense of a reduction of the parallelisation potential: with a fixed proposal distribution (like in {\\textsc{delfi}} and unlike in {\\textsc{bolfi}}), the entire set of simulations can be run at the same time.\n\nThe second difference is related to the dimensionality of the problems that can be addressed. Like {\\textsc{delfi}}, {\\textsc{bolfi}} relies on a probabilistic model to make ABC more efficient. However, the quantities employed differ, since in {\\textsc{delfi}} the relation between the parameters $\\boldsymbol{\\uptheta}$ and the summary statistics $\\boldsymbol{\\Phi}$ is modelled (via density estimation), while {\\textsc{bolfi}} focuses on the relation between the parameters $\\boldsymbol{\\uptheta}$ and the discrepancy $\\Delta_{\\boldsymbol{\\uptheta}}$ (via regression). Summary statistics are multi-dimensional while the discrepancy is a univariate scalar quantity. Thus, {\\textsc{delfi}} requires solving a density estimation problem in $\\mathrm{dim}~\\boldsymbol{\\uptheta} + \\mathrm{dim}~\\boldsymbol{\\Phi}$ (which equals $2 \\times \\mathrm{dim}~\\boldsymbol{\\uptheta}$ if the compression from \\citealp{AlsingWandelt2018} is used), while {\\textsc{bolfi}} requires solving a regression problem in $\\mathrm{dim}~\\boldsymbol{\\uptheta}$. Both tasks are expected to become more difficult as $\\mathrm{dim}~\\boldsymbol{\\uptheta}$ increases (a symptom of the curse of dimensionality, see section \\ref{sssec:High-dimensional inference}), but the upper limits on $\\mathrm{dim}~\\boldsymbol{\\uptheta}$ for practical applications may differ. Further investigations are required to compare the respective maximal dimensions of problems that can be addressed by {\\textsc{bolfi}} and {\\textsc{delfi}}.\n\nFinally, as argued by \\citet{Alsing2018}, {\\textsc{delfi}} readily provides an estimate of the approximate evidence. 
In contrast, as in likelihood-based techniques, integration over parameter space is required with {\\textsc{bolfi}} to get\n\\begin{equation}\nZ_{\\boldsymbol{\\Phi}} = \\left(\\int \\mathpzc{P}(\\boldsymbol{\\Phi}|\\boldsymbol{\\uptheta}) \\, \\mathpzc{P}(\\boldsymbol{\\uptheta}) \\, \\mathrm{d}\\boldsymbol{\\uptheta} \\right)_{\\boldsymbol{\\Phi}=\\boldsymbol{\\Phi}_\\mathrm{O}}.\n\\end{equation}\nHowever, due to the GP model, the integral can be more easily computed, using the same strategies as for the integral appearing in ExpIntVar (see section \\ref{sssec:Expected integrated variance}): only the GP predicted values are required at discrete locations on a grid (in low dimension) or at the positions of importance samples. A potential caveat is that {\\textsc{delfi}} has only been demonstrated to work in combination with the score function \\citep{AlsingWandelt2018}, which is necessary to reduce the dimensionality of $\\boldsymbol{\\Phi}$ before estimating the density.\\footnote{In contrast, section \\ref{ssec:Supernova cosmology} showed, for the same supernovae problem, that {\\textsc{bolfi}} can still operate if the comparison is done in the full $740$-dimensional data space.} The score function produces summaries that are only sufficient up to linear order in the log-likelihood. However, in ABC, care is required to perform model selection if the summary statistics are insufficient. Indeed, \\citet[][equation 1]{Robert2011} show that, in such a case, the approximate Bayes factor can be arbitrarily biased and that the approximation error is unrelated to the computational effort invested in running the ABC algorithm. Moreover, sufficiency for model $\\mathcal{M}_1$ or $\\mathcal{M}_2$ alone, or even for both of them -- even if approximately realised via Alsing \\& Wandelt's procedure -- does not guarantee sufficiency to compare the two different models $\\mathcal{M}_1$ and $\\mathcal{M}_2$ \\citep{Didelot2011}. As the assumptions behind {\\textsc{bolfi}} do not necessarily require reducing $\\mathrm{dim}~\\boldsymbol{\\Phi}$ ($\\Delta_{\\boldsymbol{\\uptheta}}$ is always a univariate scalar quantity, see above), these difficulties could be alleviated with {\\textsc{bolfi}} by carefully designing sufficient summary statistics for model comparison within the black-box simulator, if they exist.\n\n\\section{Conclusion}\n\\label{sec:Conclusion}\n\nLikelihood-free inference methods allow Bayesian inference of the parameters of simulator-based statistical models with no reference to the likelihood function. This is of particular interest for data analysis in cosmology, where complex physical and observational processes can usually be simulated forward but not handled in the inverse problem. \n\nIn this paper, we considered the demanding problem of performing Bayesian inference when simulating data from the model is extremely costly. We have seen that likelihood-free rejection sampling suffers from a vanishingly small acceptance rate when the threshold $\\varepsilon$ goes to zero, leading to the need for a prohibitively large number of simulations. This high cost is largely due to the lack of knowledge about the functional relation between the model parameters and the discrepancy. As a response, we have described a new approach to likelihood-free inference, {\\textsc{bolfi}}, that uses regression to infer this relation, and optimisation to actively build the training data set. 
A crucial ingredient is the acquisition function derived in this work, with which training data are acquired such that the expected uncertainty in the final estimate of the posterior is minimised.\n\nIn case studies, we have shown that {\\textsc{bolfi}} is able to precisely recover the true posterior, even far in its tails, with as few as $6,000$ simulations, in contrast to likelihood-free rejection sampling or likelihood-based MCMC techniques which require orders of magnitude more simulations. The reduction in the number of required simulations accelerated the inference massively.\n\nThis study opens up a wide range of possible extensions, discussed in section \\ref{ssec:Possible extensions}. It also allows for novel analyses of cosmological data from fully non-linear simulator-based models, as required e.g. for the cosmic web \\citep[see the discussions in][]{Leclercq2015ST,Leclercq2016CIT,Leclercq2017DMSHEET}. Other applications may include the cosmic microwave background, weak gravitational lensing or intensity mapping experiments. We therefore anticipate that {\\textsc{bolfi}} will be a major ingredient in principled, simulator-based inference for the coming era of massive cosmological data.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStrong infrared behavior is a characteristic feature of the inflationary physics \\cite{w1} (for reviews see e.g. \\cite{r1,r2}). The modes are continuously pushed from subhorizon to superhorizon regimes, which enlarges short distance quantum effects on cosmologically interesting scales. Most of the time, the loop corrections contain infrared infinities that must properly be handled by viable physical reasoning. The so called infrared logarithms\\footnote{In the presence of the entropy perturbations the infrared loop divergences are power law rather than logarithmic, see \\cite{pl}.} show up in loops as a reminiscent of the peculiar infrared behavior (see e.g. \\cite{il1,il2,il3,il4,il5,il6,il7,il8,il9,il10,il11}). \n\nIn this paper, we apply cosmological perturbation theory to the standard scalar slow-roll inflationary model in the minisuperspace approximation. The minisuperspace theory is expected to capture the dynamics of the zero modes of the full theory. Besides, it is free from loop infinities and renormalization issues, which are intricate in the presence of gravity, and it still contains special features related to the gauge invariance and nonlinearities. Therefore the results obtained in the minisuperspace approximation must shed light on some crucial questions in the full theory and our aim here is to examine the appearance of the infrared logarithms. \n\nWe first consider the single scalar field model and obtain the {\\it complete} gauge fixed action for the curvature perturbation $\\zeta$. The corresponding Hamiltonian can be expanded in powers of the momentum conjugate to $\\zeta$, which becomes an expansion in the inverse powers of the background scale factor of the universe $a_B(t)$. Asymptotically at late times, the quadratic momentum term, which is still nonlinear in $\\zeta$, dominates the dynamics. In this case, $\\zeta$ is conserved and no $\\ln a_B$ behavior appears, which is consistent with \\cite{zc1,zc2}. We then add a self-interacting spectator scalar to the system. This time there arises a specific asymptotically dominant interaction term, which yields an $(\\ln a_B)^n$ correction to $\\zeta$ in the $n$'th order perturbation theory. 
The emergence of this infrared logarithm is similar to what has been observed in field theory calculations. For models having a large number of e-folds such a correction may invalidate the perturbation theory. On the other hand, in the minisuperspace theory a nonperturbative argument shows that asymptotically $\\zeta$ has actually a slowly evolving $\\ln a_B$ correction to the constant mode, which indicates that other infrared logarithms involving higher powers of $\\ln a_B$ might be artifacts of the perturbation theory. \n\n\\section{Single Scalar Slow-roll Inflation}\n\nWe start from the following minisuperspace action:\n\\begin{equation}\\label{1}\nS=L^3\\int dt\\, a^3 \\, \\left\\{\\frac{1}{N} \\left[-6\\frac{\\dot{a}^2}{a^2}+\\fr12 \\dot{\\Phi}^2\\right]-NV(\\Phi)\\right\\},\n\\end{equation}\nwhere the dot denotes the time derivative, $N$ is the lapse function, $a(t)$ is the scale factor of the universe, $\\Phi$ is the inflaton and $V(\\Phi)$ is the inflaton potential (we set the reduced Planck mass $M_p=1$; the proper $M_p$ factors can easily be reinstated by dimensional analysis, as we will do below). The minisuperspace action \\eq{1} can be obtained from the usual Einstein-Hilbert action by setting $N^i=0$, $h_{ij}=a^2\\delta_{ij}$, where $N$, $N^i$ and $h_{ij}$ refer to the standard ADM decomposition of the metric, and by assuming that the variables $N$, $a$ and $\\Phi$ depend only on time. The parameter $L$ denotes the size of the comoving spatial coordinates and the factor $L^3$ in \\eq{1} arises from their integration reducing the field theory to a quantum mechanical system. \n\nThe action \\eq{1} is invariant under a local time transformation with the parameter $k^0$:\n\\begin{equation}\\label{2}\n\\delta N=k^0\\dot{N}+N\\dot{k}^0,\\hs{10}\\delta\\Phi=k^0\\dot{\\Phi},\\hs{10}\\delta a=k^0\\dot{a}.\n\\end{equation}\nTo fix the gauge invariance one may define the background field variables $a_B$ and $\\Phi_B$ obeying\n\\begin{equation}\\label{3}\n6H_B^2=\\fr12\\dot{\\Phi}_B^2+V(\\Phi_B),\\hs{10}\\dot{H}_B=-\\fr14\\dot{\\Phi}_B^2,\n\\end{equation}\nwhere $H_B=\\dot{a}_B\/a_B$. Next, one may introduce the fluctuation fields $\\zeta$ and $\\phi$ as\n\\begin{equation}\\label{4}\na=a_Be^{\\zeta},\\hs{10}\\Phi=\\Phi_B+\\phi,\n\\end{equation}\nand impose the gauge $\\phi=0$. After algebraically solving the lapse $N$ from its own equation of motion one may obtain \n\\begin{equation}\\label{5}\nS=-2L^3\\int dt\\,a_B^3V_B\\,e^{3\\zeta}\\,\\left[1+\\frac{12H_B}{V_B}\\dot{\\zeta}+\\frac{6}{V_B}\\dot{\\zeta}^2\\right]^{1\/2},\n\\end{equation}\nwhere $V_B=V(\\Phi_B)$. We take the background \\eq{3} to be a slow-roll inflationary solution with $\\dot{\\Phi}_B<0$. \n\nAssuming that the physical size of the pre-inflationary patch is determined by the initial Hubble parameter $H_i$, one has\n\\begin{equation}\\label{6}\na_B(t_i)L=\\frac{1}{H_i},\n\\end{equation}\nwhere $t_i$ is the initial time of inflation. Normalizing the scale factor as \n\\begin{equation}\\label{7}\na_B(t_i)=\\frac{1}{H_i}\n\\end{equation}\ncorresponds to setting $L=1$, which eliminates the unphysical comoving scale from the equations. 
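\n\nFor completeness, the equation of motion of the lapse obtained from \\eq{1} is algebraic and gives\n\\begin{equation}\nN^2=\\frac{1}{V(\\Phi)}\\left[6\\frac{\\dot{a}^2}{a^2}-\\fr12\\dot{\\Phi}^2\\right].\n\\end{equation}\nSubstituting this solution back into \\eq{1} and imposing the gauge $\\phi=0$, so that $a=a_Be^{\\zeta}$ and $\\Phi=\\Phi_B$, reproduces \\eq{5}, once the background constraint in \\eq{3} is used to simplify $6(H_B+\\dot{\\zeta})^2-\\fr12\\dot{\\Phi}_B^2=V_B+12H_B\\dot{\\zeta}+6\\dot{\\zeta}^2$.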
\n\nIt is instructive to repeat the gauge fixing procedure in the Hamiltonian formulation where the minisuperspace action \\eq{1} can be written as \n\\begin{eqnarray}\n&&S=\\int dt\\left[\\hat{P}_\\zeta \\dot{\\hat{\\zeta}}+P_\\Phi\\dot{\\Phi}-NH\\right],\\nonumber\\\\\n&&H=-\\frac{1}{24}e^{-3\\hat{\\zeta}}\\hat{P}_\\zeta^2+\\fr12e^{-3\\hat{\\zeta}}P_\\Phi^2+e^{3\\hat{\\zeta}}V(\\Phi)\\label{8},\n\\end{eqnarray}\nand $\\hat{\\zeta}=\\ln a$. Expanding the variables around their background values\n\\begin{eqnarray}\n&&\\hat{\\zeta}=\\ln a_B+\\zeta,\\hs{10}\\Phi=\\Phi_B+\\phi,\\hs{10}N=1+n,\\nonumber\\\\\n&&\\hat{P}_\\zeta=-12a_B^3H_B+P_\\zeta,\\hs{10}P_\\Phi=a_B^3\\dot{\\Phi}_B+P_\\phi,\\label{9}\n\\end{eqnarray} \nthe action becomes\n\\begin{equation}\nS=\\int dt \\left[P_\\zeta\\dot{\\zeta}+P_\\phi\\dot{\\phi}-H_F-nC\\right],\\label{10}\n\\end{equation}\nwhere $H_F$ is the fluctuation Hamiltonian involving all variables but $n$ and $C$ is the constraint given by\n\\begin{equation}\\label{11}\nC=-\\frac{1}{24a_B^3}e^{-3\\zeta}\\left(-12a_B^3H_B+P_\\zeta\\right)^2+\\frac{1}{2a_B^3}e^{-3\\zeta}\\left(a_B^3\\dot{\\Phi}_B+P_\\phi\\right)^2+a_B^3e^{3\\zeta}V(\\Phi_B+\\phi).\n\\end{equation}\nAfter imposing the gauge $\\phi=0$, one may solve\\footnote{In solving $P_\\phi$ one should keep in mind that $a_B^3\\dot{\\Phi}_B+P_\\phi<0$ since we are making an expansion around a background solution with $\\dot{\\Phi}_B<0$.} the constraint $C=0$ for $P_\\phi$, which would give the reduced action for $\\zeta$ and $P_\\zeta$ \n\\begin{equation}\\label{12}\nS=\\int\\, dt\\, P_\\zeta\\dot{\\zeta}-\\dot{\\Phi}_B\\left[\\frac{1}{12}\\left(P_\\zeta-12a_B^3H_B\\right)^2-2a_B^6V_Be^{6\\zeta}\\right]^{1\/2}+H_BP_\\zeta+6a_B^3V_B\\zeta.\n\\end{equation}\nIn the phase space path integral quantization, this procedure corresponds to the Faddeev-Popov gauge fixing\n\\begin{equation}\\label{13}\n\\delta(\\phi)\\delta(C) \\det\\left\\{\\phi,C\\right\\}=\\delta(\\phi)\\delta(C)\\frac{\\partial C}{\\partial\\phi}=\\delta(\\phi)\\delta(P_\\phi-P_\\phi^*),\n\\end{equation}\nwhere $P_\\phi^*$ is the solution of $C=0$. One can check that the two actions \\eq{5} and \\eq{12} are related by the Legendre transformation exchanging the Lagrangian and the Hamiltonian. \n\nOne may see that a constant $\\zeta$ solves the equations of motion that follow from \\eq{5} provided that the background equations \\eq{3} are satisfied. At first, this is not obvious from \\eq{5} since it contains a pure $\\zeta$ term with no time derivatives when the square root in \\eq{5} is expanded. In any case, it is possible to add to \\eq{5} a total derivative term so that\n\\begin{eqnarray}\nS&=&-2\\int dt\\,a_B^3V_B\\,e^{3\\zeta}\\,\\left[1+\\frac{12H_B}{V_B}\\dot{\\zeta}+\\frac{6}{V_B}\\dot{\\zeta}^2\\right]^{1\/2}+2\\int dt \\,a_B^3\\,e^{3\\zeta}\\left[V_B+6H_B\\dot{\\zeta}\\right],\\label{14}\\\\\n&=&\\int dt\\, a_B^3\\,e^{3\\zeta}\\left[\\frac{3\\dot{\\Phi}_B^2}{V_B}\\dot{\\zeta}^2-\\frac{18H_B\\dot{\\Phi}_B^2}{V_B^2}\\dot{\\zeta}^3+. . . \\right].\\label{15}\n\\end{eqnarray}\nIt is now clear from \\eq{15} that the equation of motion involves only the time derivatives of $\\zeta$ and a constant mode is a trivial solution. Note that by normalization \\eq{7}, the scale factor $a_B$ has mass dimension $-1$. In the Hamiltonian language the extra surface term added in \\eq{14} corresponds to a canonical transformation $P_\\zeta\\to P_\\zeta+12a_B^3H_B-12 a_B^3H_B e^{3\\zeta}$ as compared to \\eq{12}. 
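The added piece is in fact a total derivative on the background solution: using the background equations \\eq{3}, one may check that\n\\begin{equation}\n2a_B^3\\,e^{3\\zeta}\\left[V_B+6H_B\\dot{\\zeta}\\right]=\\frac{d}{dt}\\left[4H_B\\,a_B^3\\,e^{3\\zeta}\\right],\n\\end{equation}\nso that \\eq{14} and \\eq{5} differ only by a boundary term. 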
The Hamiltonian of \\eq{14} can be found as \n\\begin{eqnarray}\nH&=&-a_B^3\\dot{\\Phi}_B^2e^{3\\zeta}\\left[1-\\frac{2H_B}{a_B^3\\dot{\\Phi}_B^2}e^{-3\\zeta}P_\\zeta+\\frac{1}{12a_B^6\\dot{\\Phi}_B^2}e^{-6\\zeta}P_\\zeta^2\\right]^{1\/2}-H_B P_\\zeta+a_B^3\\dot{\\Phi}_B^2e^{3\\zeta},\\label{16}\\\\\n&=&\\frac{V_B}{12a_B^3\\dot{\\Phi}_B^2}e^{-3\\zeta}P_\\zeta^2+\\frac{H_BV_B}{12a_B^6\\dot{\\Phi}_B^4}e^{-6\\zeta}P_\\zeta^3+. . . \\label{17}\n\\end{eqnarray}\nwhere the dotted terms are suppressed by more powers of the background scale factor. Evidently, the first term in \\eq{17} dominates the dynamics at late times in inflation. \n\nThe interaction picture operators are governed by the free Hamiltonian\n\\begin{equation}\\label{18}\nH_0=\\frac{V_B}{12a_B^3\\dot{\\Phi}_B^2}P_\\zeta^2.\n\\end{equation}\nTheir time evolution can be found as\n\\begin{eqnarray}\n&&\\zeta_I(t)=\\zeta_i+\\fr16\\int_{t_i}^t dt'\\frac{V_B(t')}{a_B(t')^3\\dot{\\Phi}_B(t')^2}\\,P_i,\\nonumber\\\\\n&&P_{\\zeta I}=P_i,\\label{19}\n\\end{eqnarray}\nwhere $\\zeta_i$ and $P_i$ are the initial time independent (Schr\\\"{o}dinger) operators obeying $[\\zeta_i,P_i]=i$. \n\nAn operator in the Heisenberg picture $O_H$ can be related to the corresponding interaction picture operator $O_I$ by\n\\begin{equation}\nO_H=U_I^\\dagger O_I U_I,\\label{20}\n\\end{equation}\nwhere $i\\dot{U}_I=H_IU_I$, $U_I(t_i)=I$ and $H_I$ is the interaction Hamiltonian in the interaction picture. As shown by Weinberg \\cite{il1}, \\eq{20} can be expanded as\n\\begin{equation}\\label{21}\nO_H(t)=O_I(t)-i\\int_{t_i}^t dt'\\,[O_I(t),H_I(t')]-\\int_{t_i}^t dt''\\int_{t''}^t dt'\\,[[O_I(t),H_I(t')],H_I(t'')]+. . . \n\\end{equation}\nwhere the dotted terms contain more nested commutators of $O_I$ with $H_I$. Eq. \\eq{21} can be used as the basis for the in-in perturbation theory. From \\eq{17} the interaction Hamiltonian can be determined as\n\\begin{equation}\\label{22}\nH_I=\\frac{V_B}{24a_B^3\\dot{\\Phi}_B^2}\\left\\{\\left(e^{-3\\zeta_I}-1\\right),P_{\\zeta I}^2\\right\\}+. . .\n\\end{equation}\nwhere we apply symmetric ordering to make $H_I$ Hermitian. \n\nOne may approximate the time integrals during slow-roll inflation by taking (note the normalization \\eq{7}) \n\\begin{equation}\na_B\\simeq \\frac{1}{H_B}e^{H_B(t-t_i)}\\label{23}\n\\end{equation}\nand by treating the slowly changing variables $H_B$, $V_B$ and $\\dot{\\Phi}_B$ as constants. Using \\eq{22} in \\eq{21} for $\\zeta$, one finds that at the end of inflation after $N$ e-folds \n\\begin{equation}\\label{24}\n\\zeta_H=\\zeta_i+\\frac{H_B^2}{12M_p^2\\epsilon}P_i+. . .+O\\left(e^{-3N}\\right),\n\\end{equation}\nwhere the slow-roll parameter is defined as\n\\begin{equation}\n\\epsilon=\\frac{3\\dot{\\Phi}_B^2}{V_B}\\simeq-\\frac{2\\dot{H}_B}{H_B^2},\\label{25}\n\\end{equation}\nand dots denote time independent but nonlinear terms in $\\zeta_i$ and $P_i$ coming from the lower limits of the time integrals in \\eq{21} at $t_i$. Consequently, one sees that at late times $\\zeta_H$ exponentially asymptotes to a constant operator and no infrared logarithms appear.\n\nWe observe that neglecting all but the first term in \\eq{17}, the others being exponentially suppressed at late times, gives an explicitly integrable system. 
Namely, the (classical) equations corresponding to the Hamiltonian\n\\begin{equation}\nH=\\frac{V_B}{12a_B^3\\dot{\\Phi}_B^2}e^{-3\\zeta}P_\\zeta^2,\\label{26}\n\\end{equation}\ncan be integrated to get\n\\begin{eqnarray}\n&&\\zeta(t)=\\zeta_i+\\fr23 \\ln\\left[1+P_i e^{-3\\zeta_i}\\int_{t_i}^t dt'\\frac{V_B}{4a_B^3\\dot{\\Phi}_B^2}\\right],\\nonumber\\\\\n&&P_\\zeta(t)=P_i+P_i^2e^{-3\\zeta_i}\\int_{t_i}^t dt'\\frac{V_B}{4a_B^3\\dot{\\Phi}_B^2}.\\label{27}\n\\end{eqnarray}\nIn the quantum theory \\eq{27} should be true for Heisenberg operators provided that operator orderings are resolved in a suitable way. Eq. \\eq{27} shows that the asymptotic change of $\\zeta$ compared to its initial value is determined by the dimensionless parameter $H^2\/(M_p^2\\epsilon)$. \n\nSo far in our discussion we have focused on the evolution of the Heisenberg operator $\\zeta_H$. As for the initial state it is natural to take a minimum uncertainty Gaussian wave function $\\psi(\\zeta_i)$, which has zero mean $<\\zeta_i>=0$ and the deviations $<\\zeta_i^2>=\\sigma^2$, $<P_i^2>=1\/(4\\sigma^2)$. Although this choice can be motivated from the field theory side, the value of the deviation $\\sigma$ cannot be directly deduced from the field theory, which has a continuous spectrum of wave numbers and the zero mode is not isolated (unless the space is compact). On the other hand, one must also note that the validity of the perturbation theory actually depends on the initial state. Choosing $\\sigma$ to be extremely small would yield a large momentum that may invalidate the perturbative expansion in $P_\\zeta$, at least at early times during inflation when the exponential suppression is not effective yet. \n\nIn showing the constancy of $\\zeta$ in single scalar inflationary models, the consistency condition in the squeezed limit \\cite{c1,c2}, hence the choice of the Bunch-Davies vacuum, plays an important role, see \\cite{c3}. In the minisuperspace model, no such property is needed since the time independence of $\\zeta$ becomes an operator statement, i.e. the Heisenberg picture $\\zeta_H$ exponentially approaches a constant operator as in \\eq{24}. We anticipate this should also be the case in field theory since at late times the semi-classical approximation becomes excellent \\cite{cl} and $\\zeta$ is conserved in the classical theory.\\footnote{Indeed, one naturally expects that some form of minisuperspace description of superhorizon modes, which is similar to the one considered here, must be valid at late times. However, such an approximation, if it exists at all, is only possible in a suitable gauge that allows a smooth soft limit, which is not the case for the standard $\\zeta$-gauge because the shift $N^i$ is non-local.} Therefore, the constancy of $\\zeta$ in cosmological perturbation theory must hold not just for the Bunch-Davies vacuum but for a wider range of states. \n\n\\section{Adding a Spectator}\n\nWe have seen in the previous section that the $\\zeta$ self-interactions cannot yield infrared logarithms in the minisuperspace perturbation theory. From \\eq{19} and \\eq{23} one sees that $[\\zeta_I(t),\\zeta_I(t')]\\propto 1\/a_B^3$, thus the time integrals in the perturbative series in \\eq{21} can produce an infrared logarithm provided that $H_I\\propto a_B^3$. To produce such an interaction term one may add a self-interacting {\\it massless} spectator scalar $\\varphi$ which has the potential $V(\\varphi)$. 
It is easy to repeat the gauge fixing in the presence of the spectator to get the following gauge fixed action: \n\\begin{equation}\nS=-2\\int dt\\,a_B^3V_B\\,e^{3\\zeta}\\,\\left[1+\\frac{V(\\varphi)}{V_B}\\right]^{1\/2}\\left[1+\\frac{12H_B}{V_B}\\dot{\\zeta}+\\frac{6}{V_B}\\dot{\\zeta}^2-\\frac{1}{2V_B}\\dot{\\varphi}^2\\right]^{1\/2}.\\label{28}\n\\end{equation}\nBy expanding the square roots one may obtain the free Lagrangian and various interactions, where the quadratic spectator action is given by\n\\begin{equation}\\label{29}\nS=\\fr12\\,\\int \\,dt\\, a_B^3\\,\\dot{\\varphi}^2.\n\\end{equation}\nHence, the interaction picture spectator operators evolve like \n\\begin{eqnarray}\n&&\\varphi_I=\\varphi_i+\\int_{t_i}^t \\frac{dt'}{a_B(t')^3}P_{\\varphi i},\\nonumber\\\\\n&&P_{\\varphi I}=P_{\\varphi i},\\label{30}\n\\end{eqnarray}\nwhere $P_{\\varphi I}$ is the momentum conjugate to $\\varphi_I$, and $\\varphi_i$ and $P_{\\varphi i}$ are time independent initial operators obeying $[\\varphi_i,P_{\\varphi i}]=i$. \n\nAmong the interactions that follow from \\eq{28} we focus on the following one\n\\begin{equation}\nH_I=a_B^3V(\\varphi_I)\\left(e^{3\\zeta_I}-1\\right),\\label{31}\n\\end{equation}\nwhich would potentially yield infrared logarithms in the perturbation theory as noted above. Indeed, from the first order correction in \\eq{21} one may find\n\\begin{equation}\\label{32}\n\\zeta_H=\\zeta_I-\\fr12 \\int_{t_i}^t dt'a_B(t')^3\\,V(\\varphi_I(t'))e^{3\\zeta_I(t')}\\int_{t'}^t\\,dt''\\frac{V_B}{a_B^3\\dot{\\Phi}_B^2}.\n\\end{equation}\nUsing \\eq{23} one can get a late time expansion of the interaction picture operators so that at the end of inflation after $N$ e-folds one has \n\\begin{eqnarray}\n&&\\zeta_I=\\zeta_i+\\frac{H_B^2}{12M_p^2\\epsilon}P_{i}+O\\left(e^{-3N}\\right),\\nonumber\\\\\n&& \\varphi_I=\\varphi_i+\\frac{H_B^2}{3}P_{\\varphi i}+O\\left(e^{-3N}\\right).\\label{33}\n\\end{eqnarray}\nUtilizing this expansion in \\eq{32} gives\n\\begin{equation}\n\\zeta_H(t)=\\zeta_c+\\frac{1}{12H_B^2M_p^2\\epsilon}\\left[1-3\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right]\\,V(\\varphi_c)\\,e^{3\\zeta_c}+O\\left(e^{-3N}\\right),\\label{34}\n\\end{equation}\nwhere the constant operators $\\zeta_c$ and $\\varphi_c$ are defined from \\eq{33} by\n\\begin{eqnarray}\n&&\\zeta_c=\\zeta_i+\\frac{H_B^2}{12M_p^2\\epsilon}P_{i},\\nonumber\\\\\n&&\\varphi_c=\\varphi_i+\\frac{H_B^2}{3}P_{\\varphi i}.\\label{344}\n\\end{eqnarray}\nAs it is anticipated, the interaction \\eq{31} yields an infrared logarithm in the first order perturbation theory. \n\nThe above calculation hints how one should handle the higher order perturbative corrections. Namely, one should first evaluate the commutators in \\eq{21} using\n\\begin{equation}\n[\\zeta_I(t),\\zeta_I(t')]=\\frac{i}{6}\\int_t^{t'}dt''\\frac{V_B}{a_B^3\\dot{\\Phi}_B^2},\\hs{10}[\\varphi_I(t),\\varphi_I(t')]=i\\int_t^{t'}\\frac{dt''}{a_B^3}.\\label{35}\n\\end{equation}\nOne can then apply the late time expansion of the interaction picture operators given in \\eq{33} and calculate the time integrals of the leading order terms. 
Using this strategy one may obtain the following second order correction to $\\zeta_H$:\n\\begin{equation}\n\\left[\\frac{V(\\varphi_c)^2}{16M_p^4H_B^2\\epsilon^2}e^{6\\zeta_c}+\\frac{V'(\\varphi_c)^2}{36M_p^2H_B^2\\epsilon}\\left(e^{6\\zeta_c}-e^{3\\zeta_c}\\right)\\right]\\left[1-2\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)+\\fr32 \\ln^2\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right],\\label{36}\n\\end{equation}\nwhere $V'(\\varphi)=dV\/d\\varphi$. Note that \\eq{36} contains a different type of infrared logarithm, i.e. a log square. \n\nIt is possible to argue that the interaction \\eq{31} yields the factor $\\ln^n(a_B(t)\/a_B(t_i))$ in the $n$'th order perturbation theory. In the $n$'th term of \\eq{21} there are $n$ factors of $a_B^3$ coming from the interaction Hamiltonian and there are $n$ factors of $a_B^{-3}$ coming from the $n$ commutators. At late times, these cancel each other and the interaction picture operators asymptote to constant operators as in \\eq{33}. Hence, to leading order one ends up with an $n$-dimensional time integral of a constant operator giving $(t-t_i)^n=H_B^{-n}\\ln^n(a_B(t)\/a_B(t_i))$. \n\nThese findings are consistent with what has been observed in the field theory calculations \\cite{il1,il2,il3}. In the minisuperspace approximation one can further make a nonperturbative estimate as follows: Using the asymptotic form \\eq{33}, the interaction Hamiltonian converges to\n\\begin{equation}\\label{37}\nH_I=a_B^3V(\\varphi_c)\\left(e^{3\\zeta_c}-1\\right)\\left\\{1+O\\left(e^{-3N}\\right)\\right\\}.\n\\end{equation}\nOne can check that at late times the commutator $[H_I(t),H_I(t')]$ is suppressed by a huge factor related to the number of e-folds as compared to the product $H_I(t)H_I(t')$. Therefore, to a very good approximation $H_I(t)$ becomes a self-commuting operator of its argument after some time $t_m$ corresponding to, say, 10 e-folds (the fact that $\\zeta$ has a similar property has been used to argue the classicality of the cosmological perturbations \\cite{cl}). The unitary interaction picture evolution operator can be decomposed as\n\\begin{equation}\nU_I(t,t_i)=U_2(t,t_m)U_1(t_m,t_i).\\label{38}\n\\end{equation}\nSince $H_I(t)$ can be treated as a self-commuting operator when $t>t_m$, one may approximate\n\\begin{equation}\nU_2(t,t_m)=Te^{-i\\int_{t_m}^t dt'H_I(t')}\\simeq e^{-i\\int_{t_m}^t dt'H_I(t')}.\\label{39}\n\\end{equation}\nFurthermore one has\n\\begin{equation}\n\\zeta_H=U_I^\\dagger\\zeta_IU_I=U_1^\\dagger U_2^\\dagger\\zeta_I U_2U_1.\\label{40}\n\\end{equation}\nTo proceed we note \n\\begin{equation}\n[\\zeta_I(t),H_I(t')]=3[\\zeta_I(t),\\zeta_I(t')]H_I(t'),\\label{41}\n\\end{equation}\nthus using \\eq{39} one may find\n\\begin{eqnarray}\n[\\zeta_I(t),U_2]&&\\simeq\\fr12 \\int_{t_m}^tdt'a_B(t')^3V(\\varphi_I(t'))e^{3\\zeta_I(t')}\\int_t^{t'}dt''\\frac{V_B(t'')}{a_B(t'')^3\\dot{\\Phi}_B(t'')^2}\\,U_2\\nonumber\\\\\n&&\\simeq \\frac{1}{12H_B^2M_p^2\\epsilon}\\left[1-3\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right]V(\\varphi_c)e^{3\\zeta_c}\\,U_2.\\label{42}\n\\end{eqnarray}\nSo \\eq{40} becomes\n\\begin{equation}\n\\zeta_H\\simeq U_1^\\dagger\\zeta_cU_1+\\frac{1}{12H_B^2M_p^2\\epsilon}\\left[1-3\\ln\\left(\\frac{a_B(t)}{a_B(t_i)}\\right)\\right]U_1^\\dagger V(\\varphi_c)e^{3\\zeta_c}U_1.\\label{43}\n\\end{equation}\nThe unitary operator $U_1$ only mixes the operators up to time $t_m$ and its action merely produces constant operators since these do not depend on the final time. 
As a result, in \\eq{43} we are able to extract the leading order time dependence of $\\zeta$, which is a single infrared logarithm of the form $\\ln a_B$, and other corrections are exponentially suppressed. On dimensional grounds one may estimate that (in expectation values) $V(\\varphi_c)\\propto H_B^4$, therefore the infrared logarithm correction is suppressed by the factor $H^2\/(M_p^2\\epsilon)$, which is generically small in realistic models. \n\nThe above nonperturbative argument shows that the $(\\ln a_B)^n$ behavior for $n>1$ that arises in the $n$'th order perturbation theory may be an artifact of that approximation. For $t3H_B\/2$. Using \\eq{30m} in \\eq{32}, one may then see that the integrand in \\eq{32} does {\\it not} approach to a time independent operator yielding the infrared logarithm as in the case of a massless spectator, but instead it becomes an exponentially decreasing function of $t'$ whose integral gives a smaller correction for larger mass. \n\n\\section{Conclusions}\n\nIn this paper we investigate the appearance of the infrared logarithms in the cosmological perturbation theory by studying the scalar slow-roll inflationary model in the minisuperspace approximation, which simplifies the field theoretical system involving gravity to a quantum mechanical one. The minisuperspace theory is still highly nontrivial because of the nonlinearities and the local gauge invariance related to the time reparametrizations. We obtain the complete gauge fixed action for the curvature perturbation $\\zeta$, both in the single scalar case and when a self-interacting spectator is added. The full action can be expanded around the inflationary background yielding an infinite number of interaction terms. \n\nIn our analysis we focus on the time evolution of the Heisenberg operators, which can be calculated using in-in perturbation theory. Thus, our findings are state independent provided that the expectation values do not break down the series expansion. We verify that in the single scalar case no infrared logarithms appear and $\\zeta$ exponentially asymptotes to a constant operator. In the presence of a spectator we find that the $n$'th order perturbation theory gives an infrared logarithm of the form $(\\ln a_B)^n$. Note that supposing the existence of a spectator is not unnatural for inflation; in any model where Higgs is not the inflaton, it actually becomes a self-interacting spectator scalar.\n\nIn the minisuperspace approximation it is possible to examine the time evolution of the Heisenberg operators nonperturbatively. Following some time after the beginning of inflation, the interaction picture operators including the interaction Hamiltonian become nearly self-commuting at different times. This allows one to extract the leading order time evolution of $\\zeta$ where all other corrections are exponentially suppressed. In the presence of a spectator, this leading order correction turns out to be a single infrared logarithm $\\ln a_B$. It would be interesting to generalize this argument to field theory to understand the structure of the infrared logarithms in the cosmological perturbation theory. \n\nOne usually attributes the emergence of infrared logarithms to loop effects. This is natural since in field theory calculations they normally appear in loop corrections. However, loops do not exist in the minisuperspace approach yet we still encounter infrared logarithms implied by the Heisenberg picture equations of motion. 
This indicates that neither the loops nor the modes running in them may be the primary reason for the existence of infrared logarithms. Indeed, consider as an example the three point function $\\left< \\phi(k_1)\\phi(k_2)\\phi(k_3)\\right>$ of a self-interacting {\\it massless} test scalar field. It is not difficult to see that this correlation function is time dependent {\\it at tree level} because of a cubic interaction term $H_I=g\\, a^3 \\int d^3 x\\,\\phi^3$, even at late times when $k_1$, $k_2$ and $k_3$ become superhorizon. Choosing the vacuum, and correspondingly the mode functions, determines the precise form of this time dependence; e.g. for the Bunch-Davies vacuum one gets an infrared logarithm. The crucial point is that the cubic interaction induces nontrivial superhorizon evolution which is {\\it not} exponentially suppressed. In single field inflation, one may see that $\\zeta$ self interactions cannot produce such effects mainly because of the shift symmetry (as shown above, pure $\\zeta$ interactions containing no derivatives, which are potentially dangerous, actually disappear after integration by parts). Nevertheless, in the presence of a spectator field there are interactions which yield non-negligible superhorizon evolution in both the classical and quantum theories. We expect these conclusions to hold irrespective of the gauge conditions or possible explicit non-localities present in the action. \n\nHence, the emergence of infrared logarithms or some other form of nontrivial superhorizon motion has a dynamical origin related to suitable interactions. Recall that in the minisuperspace model, solving the unitary evolution nonperturbatively gives only a single infrared logarithm, as opposed to perturbation theory, which shows the importance of determining the dynamics correctly. On the other hand, the initial state chosen is also crucial in fixing the exact form of the superhorizon time dependence. Of course, loops also arise from such interactions sandwiched between states. As discussed in \\cite{proj}, some corrections related to superhorizon modes are projection effects that define the mapping between physical scales of inflation and the post-inflationary universe, and these disappear in observable quantities. One way of understanding projection effects is to keep in mind that the decomposition of the metric into background and fluctuation parts is introduced for computational convenience. Strictly, the {\\it full} metric must be used in relating the comoving and physical scales. In the minisuperspace theory any constant piece of $\\zeta$ will disappear as a projection artifact but a time dependent part represents a real physical effect, modifying for instance the Hubble expansion rate. \n\n\n\\begin{acknowledgments}\nThis work, which has been done at C.\\.{I}.K. Ferizli, Sakarya, Turkey without any possibility of using references, is dedicated to my friends at rooms C-1 and E-10 who made my stay bearable at hell for 440 days between 7.10.2016 and 20.12.2017. I am also indebted to the colleagues who show support in these difficult times. \n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Introduction}\n\n\\emph{INTEGRAL} (INTErnational Gamma Ray Astrophysics Laboratory, \\citet{Winetal03}) was launched in 2002 and since then has performed high-quality observations in the energy band from 3 keV up to $\\sim 10$ MeV. 
The \\emph{INTEGRAL} payload consists of two main soft gamma-ray instruments (the imager IBIS \\citep{Ubeetal03} and the spectrometer SPI \\citep{Vedetal03}) and two monitors (JEM-X in X-rays \\citep{Lunetal03} and OMC in the optical \\citep{Masetal03}). The wide field-of-view of the imager IBIS provides an ideal opportunity to survey the sky in hard X-rays.\\\\\n\nDuring its first 6 years in orbit, \\emph{INTEGRAL} has covered nearly the whole sky. The observational data have been mainly used to study the soft gamma-ray emission from the Galactic plane (GP) \\citep{Bouetal05, Krietal06} through the Galactic plane scans and the Galactic centre (GC) \\citep{Beletal04, Revetal04, Beletal06, Krietal06} through the Galactic centre deep exposure programme. A number of papers have already presented general surveys \\citep{Bazetal06, Biretal06} of the sky as well as of specific regions \\citep{Gotetal05, Haretal06, Moletal04} and population types \\citep{Baretal06, Basetal06, Sazetal07, Becetal06, Becetal09}.\\\\\n\nThe majority of the classified sources detected by \\emph{INTEGRAL} are either low and high mass X-ray binaries (LMXBs and HMXBs) or AGNs \\citep{Bodetal07}. However, a significant fraction of the detected sources remain unidentified. A special approach to population classification is required for the GC region to resolve the population types because of the high density of sources. Fortunately, the physics of the sources may help us to unveil their type. Indeed, the bulk of the \\emph{INTEGRAL} sources are accreting systems that are expected to be intrinsically variable on multiple timescales depending on the source type and the nature of the variability. For instance, X-ray binaries (XRBs) may exhibit variability on timescales that range from milliseconds (supporting the idea that emission originates close to the compact object in the inner accretion radius) to hours and days, indicating that the variability can originate throughout the accretion flow at multiple radii and propagate inwards to modulate the central X-ray emission \\citep{AreUtt05}. This idea is supported by the known correlation between millisecond\/second and hour\/day scale variability in XRBs \\citep{Utt03}. LMXBs may exhibit flaring behavior with an increase in both emission intensity and hardness over a period of a few hundred to a few thousand seconds. X-ray bursts with rise times of a few seconds and decay times of hundreds of seconds or even several hours \\citep{Baretal02} are also common to these objects. On the other hand, HMXBs are known to exhibit variability on timescales ranging from a fraction of a day up to several days, generated by the clumpiness of the stellar wind accreting onto the compact object \\citep{Ducetal09}. Hour-long outbursts caused by variable accretion rates are observed in supergiant fast X-ray transients, a sub-class of HMXBs discovered by INTEGRAL \\citep{Rometal2009}. Owing to their larger size, AGNs of different types exhibit day-to-month(s) variability depending on the black hole mass \\citep{IshCou09}. Gamma-ray loud blazars have variability timescales in the range from $10^{1.6}$ to $10^{5.6}$~s \\citep{LiaLiu03}. 
Therefore, a list of \\emph{INTEGRAL} sources with quantitative measurements of their variability would be an important help to classifying the unidentified sources and more detailed studies of their physics.\\\\\n\nThe variability of \\emph{INTEGRAL} sources was addressed in the latest 4th IBIS\/ISGRI survey catalog paper \\citep{Biretal09} when the authors performed the so-called \\textit{bursticity} analysis intended to facilitate the detection of variable sources.\\\\\n\nHere we present a catalog of \\emph{INTEGRAL} variable sources identified in a large fraction of the archival public data. In addition to standard maps produced by the standard data analysis software, we compiled a $\\chi^2$ all-sky map and applied the newly developed method to measure the fractional variability of the sources detected by the IBIS\/ISGRI instrument onboard \\emph{INTEGRAL}. The method is sensitive to variability on timescales longer than those of single ScW exposures ($\\approx 2000$ seconds), i.e., to variability on timescales of hour(s)-day(s)-month(s). The catalog is compiled from the sources detected in the variability map. In addition, we implemented an online service providing the community with all-sky maps in the 20-40, 40-100, and 100-200 keV energy bands generated during the course of this research.\\\\\n\nIn the following, we describe the data selection procedure and the implemented data analysis pipeline (Sect.~\\ref{sec:datana}). In Sect.~\\ref{sec:method}, we outline our systematic approach to the detection of variability in \\emph{INTEGRAL} sources and describe our detection procedure in Sect.~\\ref{sec:detect}. We compile the variability catalog in Sect.~\\ref{sec:catvar}. In Sect.~\\ref{sec:skyview}, we briefly describe the implemented all-sky map online service. We make some concluding remarks in Sect.~\\ref{sec:conclu}.\n\n\\section{Data and analysis}\n\\label{sec:datana}\n\n\\subsection{Data selection and filtering}\n\nSince its launch, \\emph{INTEGRAL} has performed over 800 revolutions each lasting for three days. We utilized the ISDC Data Centre for Astrophysics \\citep{Couetal03} archive\\footnote{http:\/\/isdc.unige.ch} to obtain all public data available up to June 2009 and the Offline Scientific Analysis (OSA) v. 7.0 to process the data. \\\\\n\n\\emph{INTEGRAL} data are organized into science windows (ScWs), each being an individual observation that can be either of pointing or slew type. Each observation (pointing type) lasts 1 -- 3 ksecs. For our analysis, we chose all pointing ScWs with an exposure time of at least 1 ksec. We filtered out revolutions up to and including 0025 belonging to the early performance verification phase, observations taken in staring mode, and ScWs marked as bad time intervals in instrument characteristics data including ScWs taken during solar flares and radiation belt passages. Finally, after the reconstruction of sky images we applied the following statistical filtering. We calculated the standard deviation of the pixel distribution for each ScW and found the mean value of standard deviations for the whole data set. We then rejected all the ScWs in which the standard deviation exceeded the mean for the whole data set by more than 3$\\sigma$. We assumed the distribution of standard deviations of individual and independent ScWs to be normal. While calculating standard deviations in individual ScWs, image pixels were assumed to be independent. Thus, the filtering procedure allowed us to remove all ScWs affected by a high background level. 
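\n\nA minimal sketch of this $3\\sigma$ clipping step is given below (our own illustration; it assumes the reconstructed ScW images are available as 2-D arrays, and the function and variable names are ours rather than part of OSA):\n\\begin{verbatim}\nimport numpy as np\n\ndef filter_high_background_scws(scw_images):\n    # standard deviation of the pixel distribution of each ScW image\n    stds = np.array([np.nanstd(img) for img in scw_images])\n    mu, sig = stds.mean(), stds.std()\n    # keep ScWs whose standard deviation does not exceed the\n    # mean for the whole data set by more than 3 sigma\n    keep = stds <= mu + 3.0 * sig\n    return [img for img, ok in zip(scw_images, keep) if ok]\n\\end{verbatim}\n\n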
In the end, 43~724 unique pointing-type ScWs were selected for the analysis, giving us a total exposure time of 80.0~Msec and a more than 95 percent sky coverage.\n\n\\subsection{Instrument and background}\n\nIn the present study we use only the low-energy detector layer of the IBIS coded-mask instrument, called ISGRI (\\emph{INTEGRAL} Soft Gamma Ray Imager, \\citet{Lebetal03}), which consists of 16~384 independent CdTe pixels. It features an angular resolution of $12^{\\prime}$ (FWHM) and a source location accuracy of $\\sim$1 arcmin, depending on the signal significance \\citep{Groetal03}. Its field of view (FOV) is $29^{\\circ} \\times 29^{\\circ}$. The fully-coded part of the FOV (FCFOV), i.e., the area of the sky where the detector is fully illuminated by the hard X-ray sources, is $9^{\\circ} \\times 9^{\\circ}$. It operates in the energy range between 15 keV and 1 MeV.\\\\\n\nOver short timescales, the variability of the background of the instrument is assumed to be smaller than the statistical uncertainties. However, this is not the case for mosaic images constructed from long exposures. In general, it is assumed that the mean ISGRI background in each individual pixel changes very little with time, and therefore the standard OSA software provides only one background map for the entire mission. During the construction of the all-sky map, we noted that the quality of the mosaics of the extragalactic sky region depends on the time period over which the data were taken. We therefore, concluded that the long-term variation in the background of the instrument \\citep{Lebetal05} significantly affects the extragalactic sky mosaic. On the other hand, in the GC and inner GP regions (l$\\left\\lbrace-90;90\\right\\rbrace$, b$\\left\\lbrace -20;20\\right\\rbrace$) the standard background maps provided by OSA provide better results (noise distributions are narrower). This might be because of the large number of bright sources and the Galactic ridge emission \\citep{Krietal06}, although we leave this question open for the future research.\\\\\n\nTo produce time-dependent background maps, we extracted raw detector images for each \\emph{INTEGRAL} revolution (3 days) and calculated the mean count rate in each individual pixel during the corresponding time period. To remove the influence of the bright sources on the neighboring background, we fitted and removed these sources from the raw detector images, i.e., in each ScW we constructed a model of the source pattern on the detector (pixel illumination fraction, PIF) and fitted the raw detector images using the model\n\\begin{equation}\nS_{k,l}=\\sum_{i=1}^M f_i \\times PIF_{k,l}+B,\n\\end{equation}\nwhere $S_{k,l}$ are the detector count rate, $PIF_{k,l}$ are the respective pattern model of source $i$ in the detector pixel with coordinates $(k,l)$, $f_i, i=1..M$ is the flux of source $i$ in the given ScW, and $B$ is the mean background level. This procedure was applied to all the detected sources in the FOV. The stability of the fitting procedure was tested using a large set ($>1~000$) of simulated ScWs with variable source fluxes. The results of the fit were normally distributed around the expected source flux, and therefore we can conclude that our procedure is sufficiently accurate to remove the point sources\nfrom the construction of the background maps. 
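\n\nA minimal sketch of this fitting step for a single ScW is given below (our own illustration; it ignores noisy and dead pixels and any per-pixel weighting that the real ISGRI analysis would require, and the function name is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef fit_sources_and_background(det_image, pifs):\n    # det_image: 2-D detector count-rate map; pifs: array of shape\n    # (M, ny, nx) holding one PIF model per catalogue source\n    y = det_image.ravel()\n    A = np.column_stack([p.ravel() for p in pifs] + [np.ones(y.size)])\n    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)\n    fluxes, bkg = coeffs[:-1], coeffs[-1]   # f_i and mean background B\n    return fluxes, bkg\n\\end{verbatim}\n\n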
The results of the fitting procedure were then used to create a transformed detector image, $\\hat S_{k,l}$, defined as\n\\begin{equation}\n\\hat S_{k,l}=S_{k,l}-\\sum_{i=1}^M f_i \\times PIF_{k,l}.\n\\end{equation}\nBackground maps were then constructed by averaging the transformed detector images of a given data set.\n\nFrom our time-dependent background maps, we found that the shape of the ISGRI background varies with time, in particular after each solar flare. A long-term change in the background was noticed as well. This result agrees with the findings of \\citet{Lebetal05}. To take these variations into account, we generated background maps for each spacecraft revolution and in the image reconstruction step applied them to the extragalactic sky region.\n\nBesides the real physical background of the sky, there is also artificial component, because IBIS\/ISGRI is a coded-mask instrument with a periodic mask pattern. Therefore, the deconvolution of ISGRI images creates structures of fake sources that usually appear around bright sources. Apart from the periodicity of the mask, insufficient knowledge of the response function leads to residuals in the deconvolved sky images. The orientation of the spacecraft changes from one observation of the real source to another, so fake sources and structures around the real source contribute to the noise level of the local background. To reduce this contribution, we used a method described in Sect.~\\ref{subsec:imarec}.\\\\\n\n\\subsection{Image reconstruction}\n\\label{subsec:imarec}\nAfter producing the background maps as described in the previous subsection, we started the analysis of the data using the standard Offline Scientific Analysis (OSA) package, version 7.0, distributed by ISDC \\citep{Couetal03}. For image reconstruction, we used a modified version of the method described in \\citet{Ecketal08}. It is known that screws and glue strips attaching the IBIS mask to the supporting structure can introduce systematic effects in the presence of very bright sources \\citep{Nevetal09}. To remove these effects, we identified the mask areas where screws and glue absorb incoming photons, and we disregarded the pixels illuminated by these mask areas for the 11 brightest sources in the hard X-ray band. No more than 1\\% of the detector area was disregarded for each of the brightest sources. For weaker sources, the level of systematic errors produced by the standard OSA software was found to be consistent with the noise, so the modified method was not required. Finally, we summed all the processed images weighting by variances to create the all-sky mosaic. For this work, we produced mosaics in 3 energy bands (20-40, 40-100, and 100-200 keV). Both our all-sky map images and corresponding exposure maps are available online and we direct the reader to our online web service\\footnote{http:\/\/skyview.virgo.org.ua}. As an example, we provide here the image of the inner part (36$^{\\circ}$ by 12$^{\\circ}$) of the Galaxy in the 20-40 keV energy band (see Fig.~\\ref{fig:gc}).\n\n\\begin{figure*}[!t]\n\\begin{center}\n\\includegraphics[width=0.40\\textwidth]{fig1a.eps}\n\\includegraphics[width=0.14\\textwidth]{fig1b.eps}\n\\includegraphics[width=0.40\\textwidth]{fig1c.eps}\n\\caption{Lightcurves and variability map of HMXB 4U~1700-377 and LMXB GX~349+2. 
The solid line indicates the mean flux of the sources during the observation time, the dotted line shows the mean flux minus $S_{int}$, the dashed line shows the mean flux plus $S_{int}$.}\n\\label{fig:lc}\n\\end{center}\n\\end{figure*}\n\n\\section{Method of variability detection}\n\\label{sec:method}\nThe variability of \\emph{INTEGRAL} sources can be analyzed in a standard way by studying the inconsistency of the detected signal with that expected from a constant source by performing the $\\chi^2$ test. Here we consider introducing a variability measurement for the \\emph{INTEGRAL} sources and show how to apply it to the specific case of the coded-mask instrument. For an alternative approach based on the maximum likelihood function for the determination of intrinsic variability of X-ray sources the reader is referred to \\citet{Almetal00} and \\citet{Becetal07}.\n\nThe \\emph{INTEGRAL} data are naturally organized by pointings (ScW) with average duration of $\\sim 1-3$~ksec. Therefore, the simplest way to detect the variability of a source on ksec and longer timescales is to analyse the evolution of the flux from the source on a ScW-by-ScW basis. We define $F_i$ and $\\sigma_i^2$ to be the flux and the variance of a given source, respectively, in the $i$-th ScW. The weighted mean flux from the source is then given by\n\\begin{equation}\n\\langle F \\rangle=\\frac {\\sum_{i=1}^N \\frac {F_i}{\\sigma_i^2}} {\\sum_{i=1}^N \\frac{1}{\\sigma_i^2}},\n\\end{equation}\nwhere $N$ is the total number of ScWs. The variance of the source's flux, which is the mean squared deviation of the flux from its mean value during the observation time, is given by\n\\begin{equation}\nS_{tot}^2 = \\frac {\\sum_{i=1}^N \\frac{(F_i - \\langle F \\rangle)^2}{\\sigma_i^2}} {\\sum_{i=1}^N \\frac{1}{\\sigma_i^2}} = \\chi^2\\sigma^2,\n\\label{SV}\n\\end{equation}\nwhere $\\chi^2 = \\sum_{i=1}^N \\frac{(F_i - \\langle F \\rangle)^2}{\\sigma_i^2}$ and $\\sigma^2 = \\left(\\sum_{i=1}^N \\frac{1}{\\sigma_i^2}\\right)^{-1}$ is the variance of the weighted mean flux.\n\nHowever, in addition to intrinsic variance of the source, this value includes the uncertainty in the flux measurements during individual ScWs, i.e., the contribution of the noise. If the source variance is caused only by the noise, i.e., $F_i = \\langle F \\rangle \\pm \\sigma_i$, Eq.~(\\ref{SV}) is given by $S_{noise}^2 = N \\sigma^2$. To eliminate the noise contribution, we can subtract the noise term of the variance from the source variance and derive the \\emph{intrinsic variance} of the source\n\n\\begin{equation}\nS_{int}^2 = \\chi^2\\sigma^2 - N\\sigma^2.\n\\label{intvar}\n\\end{equation}\nWhen all measurement errors are equal ($\\sigma_i = \\sigma_0$, $\\sigma^2 = \\sigma_{0}^2\/N$), our case reduces to the method used by \\cite{Nanetal97}\n\n\\begin{equation}\nS_{int}^2 = \\frac {1}{N} \\sum_{i=1}^N (F_i - \\overline{F})^2 - \\sigma_{0}^2,\n\\end{equation}\nwhere $\\overline{F}$ is the unweighted mean flux and $S_{int}^2$ is called the \\emph{excess variance}. 
In the absence of measurement errors, our case reduces to the standard definition of the variance\n\n\\begin{equation}\nS_{int}^2 = \\frac {1}{N} \\sum_{i=1}^N (F_i - \\overline{F})^2.\n\\end{equation}\n\nGiven that different sources have different fluxes, the variability of sources can be quantified by using the normalized measure of variability, which we call here the \\emph{fractional variability}\n\\begin{equation}\nV = \\frac {S_{int}}{\\langle F \\rangle}.\n\\label{simplefracvar}\n\\end{equation}\nHowever, in reality, if one were to apply the above method to detect the variable sources in a crowded field (i.e., containing many sources) of a coded-mask instrument such as IBIS, one would infer {\\it all} the detected sources to be highly variable. This is because in coded-mask instruments, each source casts a shadow of the mask on the detector plane. If there are several sources in the field of view, each of them produces a shadow that is spread over the whole detector plane. Some detector pixels are illuminated by more than one source. If the signal in a detector pixel is variable, one can tell, only with a certain probability, which of the sources illuminating this pixel is responsible for the variable signal. Thus, in a coded-mask instrument, the presence of bright variable sources in the field of view introduces an ``artificial'' variability for all the other sources illuminating the same pixels. Since the overlap between the PIF of the bright variable source and the sources at different positions on the sky varies with the position on the sky, one is also unable to determine in advance the level of this ``artificial'' variability in a given region of the deconvolved sky image.\n\nTo overcome this difficulty, one has to measure the variability of the flux not only directly in the sky pixels at the position of the source of interest, but also in the background pixels around the source. Obviously, the ``artificial'' variability introduced by the nearby bright sources is similar in the adjacent background pixels to that in the pixel(s) at the source position. Therefore, one can produce the variability map for the whole sky and compare the values of variability at the position of the source of interest to the mean values of variability in the adjacent background pixels. The variable sources should be visible as local excesses in the variability map of the region of interest. If a source can be localized in the variability image, then the true fractional variability of the source is calculated as\n\n\\begin{equation}\nV_r = \\frac {\\sqrt{S_{int,s}^2 - S_{int,b}^2}} {\\langle F_s \\rangle - \\langle F_b \\rangle},\n\\label{fracvar}\n\\end{equation}\nwhere the subscript $b$ represents the values of the background in the area adjacent to the source and the subscript $s$ the values taken from the source position.\\\\\n\nTo illustrate the method, we present the lightcurves (Fig.~\\ref{fig:lc}) of two objects that are typical bright \\emph{INTEGRAL} sources: the HMXB 4U~1700-377, which is a very bright and very variable source ($V_{r} \\simeq 104$~\\%), and the LMXB GX~349+2, which is a moderately bright and variable source ($V_{r} \\simeq 45$~\\%). The solid line indicates the mean flux of the sources, $\\langle F \\rangle$. We can see that the mean flux deviation (dotted lines), calculated as the square root of the intrinsic variance, $S_{int}^2$, measures the average flux variation of the sources during the corresponding time. 
However, we note that the calculations in the present example are based solely on a lightcurve. If one computes a fractional variability by simply dividing the mean flux deviation by the mean flux, one obtains the $V$ value, but not $V_{r}$, i.e., the contribution of bright variable neighbor sources is not treated properly. It is impossible to extract the variability of the background, $S_{int,b}$, and the mean background flux $\\left\\langle F_b \\right\\rangle$ using the source lightcurve only. A number of lightcurves of the neighboring pixels should also be compiled to estimate $S_{int,b}$ and $\\left\\langle F_b \\right\\rangle$. This number should be sufficiently high to obtain good estimates. Therefore, an all-sky approach is justified. In the current example, there are no much brighter sources in the vicinity of the ones considered that could strongly affect them, so the difference between $V$ and the catalog $V_r$ value is around 3\\%, but for weak sources in the vicinity of bright ones the difference would be higher.\n\n\\begin{figure*}[!th]\n\\begin{center}\n\\includegraphics[width=0.95\\textwidth]{fig2a.eps}\n\\includegraphics[width=0.95\\textwidth]{fig2b.eps}\n\\caption{Inner parts (36$^{\\circ}$ by 12$^{\\circ}$) of the INTEGRAL\/ISGRI all-sky maps in Galactic coordinates, Aitoff projection. The significance image (top) in the 20-40 keV energy band has square root scaling. The bottom image shows the corresponding intrinsic variance map, and also has square root scaling. The circle shows the inner 4$^{\\circ}$ for which the variability background extraction was made from the box region.}\n\\label{fig:gc}\n\\end{center}\n\\end{figure*}\n\nLooking at Eq.~\\ref{fracvar}, one can indeed see that the effect of ``artificial'' fractional variability is strong for moderate and faint sources in the vicinity of the bright variable sources, while for bright sources the effect is small. The ``artificial'' variability introduced by the bright sources in their vicinity ($S_{int,b}^2$ for the surrounding sources) is in the range from a fraction of a percent up to a few percent of their own variability (dictated by the PIF accuracy). When we consider a persistent source in the vicinity of a bright variable source, $S_{int,s}^2$ is defined by $S_{int,b}^2$ only (i.e., by the variability introduced by the bright variable source). For moderate or faint sources, $S_{int,b}$ may well be comparable to their own flux, and if we apply Eq.~\\ref{simplefracvar} directly we will infer substantial fractional variability, which may well be between ten and fifty percent, or even higher. The bright sources are less sensitive to this effect because $S_{int,b}$ will be only a small fraction of their flux. We checked these conclusions by performing simulations of a moderate persistent source ($F = 1$~ct\/s) in the vicinity of a bright variable source ($\\langle F \\rangle = 20$~ct\/s). By applying Eq.~\\ref{simplefracvar} directly to measure the fractional variability of a moderate source, we found that $V \\simeq 25$\\% while Eq.~\\ref{fracvar} inferred that the source was constant, i.e., $V_r \\simeq 0$\\%. 
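\n\nFor reference, the quantities entering Eqs.~(\\ref{intvar}) and~(\\ref{fracvar}) can be computed from a ScW-by-ScW lightcurve as in the following minimal sketch (our own illustration with our own function names; \\texttt{flux} and \\texttt{err} are arrays of the per-ScW fluxes $F_i$ and uncertainties $\\sigma_i$):\n\\begin{verbatim}\nimport numpy as np\n\ndef intrinsic_variance(flux, err):\n    w = 1.0 \/ err**2\n    var_mean = 1.0 \/ w.sum()                # variance of the weighted mean\n    f_mean = (w * flux).sum() * var_mean    # weighted mean flux <F>\n    chi2 = (w * (flux - f_mean)**2).sum()\n    s_int2 = (chi2 - flux.size) * var_mean  # intrinsic variance S_int^2\n    return f_mean, s_int2\n\ndef fractional_variability(f_s, s_int2_s, f_b, s_int2_b):\n    # background-corrected fractional variability V_r; assumes\n    # s_int2_s >= s_int2_b and f_s > f_b\n    return np.sqrt(s_int2_s - s_int2_b) \/ (f_s - f_b)\n\\end{verbatim}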
\n\n\\begin{figure*}[!th]\n\\begin{center}\n\\includegraphics[width=0.61\\textwidth]{fig3a.eps}\n\\includegraphics[width=0.37\\textwidth]{fig3b.eps}\n\\includegraphics[width=0.61\\textwidth]{fig3c.eps}\n\\includegraphics[width=0.37\\textwidth]{fig3d.eps}\n\\caption{Top: the distribution of the intrinsic variance in pixels of the $3.5^{\\circ} \\times 3.5^{\\circ}$ area (shown nearby) centered on IGR~J18450-0435. In green is the $B_1$ distribution in the range $(min, 2m-min)$, here representing the local intrinsic variance background; in red is the contribution of the sources; f(x) is the Gaussian fit. Bottom: the same for the $3.5^{\\circ} \\times 3.5^{\\circ}$ area (shown nearby) centered on the GC source 1A~1743-288. In green is the $B_1$ distribution in the range $(min, 2m-min)$; in blue is the $B_2$ distribution from the empty region in the GC, here representing the local intrinsic variance background; the red distribution and f(x) are the same as above.}\n\\label{fig:sintbhist}\n\\end{center}\n\\end{figure*}\n\nIn the course of simulations, we also checked the behavior of the ``artificial'' variability in the case of a moderate persistent source situated at the ghost position of a bright variable source. We considered two situations: a) when the mosaic image consisted mostly of ScWs in a configuration determined almost entirely by the spacecraft orientation, which remained constant (i.e., the sky region of the Crab), and b) when the mosaic image contained only a chance fraction of ScWs in that specific configuration because of different spacecraft orientations (i.e., the sky region of Cyg X-1). The simulations showed that the flux and therefore the variability measurement of the mosaic deviated from the input source parameters in case a) only, while in case b), there were no significant deviations. As expected in case a), the moderate source was affected. This was caused by the coincidence of the sources' shadowgrams. The deconvolution procedure was unable to distinguish the sources correctly on the ScW level, therefore the mosaic results were affected. We detected an incorrect flux and variability for the moderate source. In reality, this particular simulated situation is very rare (see Sect.~\\ref{sec:catvar} for a discussion of this situation in a real case). Even if a constant orientation is kept for some observation, the different observation patterns applied during the observation will significantly reduce the undesirable effect considered in situation a). \n\\\\\n\n\\section{The detection of variable sources}\n\\label{sec:detect}\n\nFor our study, we used the latest distributed \\emph{INTEGRAL} reference catalog\\footnote{http:\/\/isdc.unige.ch\/?Data+catalogs} \\citep{Ebietal03} version 30 and selected the sources with ISGRI\\_FLAG == 1, i.e., all the sources ever detected by IBIS\/ISGRI.\\\\\n\nBased on the aforementioned method, we compiled the intrinsic variance maps ($S_{int}^2$) of the \\emph{INTEGRAL} sky in three energy bands (20-40, 40-100, and 100-200 keV). This was accomplished by performing pixel operations following Eq.~(\\ref{intvar}) on the constructed all-sky mosaic maps of $\\chi^2$, $\\sigma^2$ (variance), and $N$, which is the map showing the number of ScWs used in a given pixel for the all-sky mosaic. 
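\n\nA minimal sketch of this per-pixel operation (our own illustration; the mosaics are represented as plain, co-aligned 2-D arrays, e.g. as read from the all-sky FITS maps):\n\\begin{verbatim}\nimport numpy as np\n\ndef intrinsic_variance_map(chi2_map, var_map, n_map):\n    # S_int^2 = chi^2*sigma^2 - N*sigma^2, evaluated pixel by pixel;\n    # pixels with no exposure are masked\n    s_int2 = (chi2_map - n_map) * var_map\n    return np.where(n_map > 0, s_int2, np.nan)\n\\end{verbatim}\n\n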
As an example, the intrinsic variance image of the inner region of our Galaxy is given in Fig.~\\ref{fig:gc} (see our online service for all-sky maps).\\\\\n\nAfter compiling an intrinsic variance map in each band, we calculated the local (or background) intrinsic variance, $S_{int,b}^2$, and its scatter, $\\Sigma$, in the region around each catalog source. This was performed by the following scheme. First, the values of the mean $m$, the minimum $min$, and their difference were calculated in a square of $3.5^{\\circ} \\times 3.5^{\\circ}$ centered on the source position. We then assumed that the pixel values in the corresponding area are distributed normally and that there is an always-positive contribution from the sources in the field. Since the sources occupy a small fraction of the considered sky region, the initial mean value, $m$, is almost unaffected by their presence because of their small contribution. To reject the source contribution and to obtain the parameters of the normal component, we calculated the mean and the standard deviation in the range $(min, 2m-min)$. The newly found mean value is $S_{int,b}^2$ in Eq.~\\ref{fracvar} and the standard deviation shows its scatter $\\Sigma$. The detectability of the sources in the intrinsic variance map is then defined by the condition that $S_{int}^2 \\geq S_{int,b}^2 + 3 \\Sigma$. As an illustration (see the top of Fig.~\\ref{fig:sintbhist}), we present a region around the INTEGRAL source IGR~J18450-0435 with the respective distributions. The green solid area is the distribution in the range $(min, 2m-min)$ with the mean value representing $S_{int,b}^2$ for the current source, and in red the always-positive contribution from the sources in the field is given. The distribution in the range $(min, 2m-min)$ is well fitted by the Gaussian shown with a dashed line. The top of Fig.~\\ref{fig:sintbhist} shows that the approximate rejection of the source contribution to the overall pixel value distribution in the field works well. Applying this rejection procedure allows us to obtain the true scatter in the background rather than the scatter in the overall distribution (including sources), which is obviously higher.\n\nThe detection of variable sources in the innermost region of our Galaxy (circle of 4$^\\circ$ from the GC, see Fig.~\\ref{fig:gc}) was performed differently because it is a crowded field and therefore a large number of sources contribute to the intrinsic variance background of each other. In contrast to the individual source case, the sources in the inner GC region occupy a significant fraction of the region around the source of interest and therefore influence the $m$ value significantly. This results in an improper estimation of the background distribution if one applies the rejection procedure based on the $(min, 2m-min)$ range ($B_1$ distribution at the bottom of Fig.~\\ref{fig:sintbhist}). Instead of calculating the intrinsic variance background and its scatter for each GC source individually, these values were therefore estimated from an empty field near the GC (the box in the bottom image of Fig.~\\ref{fig:gc}) and assumed to be equal for all the sources in the GC region. 
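\n\nThe scheme described above can be summarized by the following minimal sketch (our own illustration; \\texttt{box} is the $3.5^{\\circ} \\times 3.5^{\\circ}$ cut-out of the intrinsic variance map around the source, and the function names are ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef local_variance_background(box):\n    pix = box[np.isfinite(box)]\n    m, mn = pix.mean(), pix.min()\n    # reject the always-positive source contribution:\n    # keep only pixels in the range (min, 2m - min)\n    core = pix[(pix > mn) & (pix < 2.0 * m - mn)]\n    return core.mean(), core.std()   # S_int,b^2 and its scatter Sigma\n\ndef is_detected_as_variable(s_int2_src, s_int2_bkg, sigma_bkg):\n    return s_int2_src >= s_int2_bkg + 3.0 * sigma_bkg\n\\end{verbatim}\n\n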
The $B_2$ distribution shown at Fig.~\\ref{fig:sintbhist} (bottom) is accurately determined and well fitted by the Gaussian shown by the dashed line.\\\\\n\n\\section{The catalog of variable sources}\n\\label{sec:catvar}\n\nThe search for variable sources from the reference catalog was performed on the maps generated from ScWs divided into three equal subsequent time periods (maps 1,2, and 3, approximately 2 years each). This was done to detect transient sources that would be difficult (or even impossible) to detect on the map integrated over the whole time period (map T). The search was performed separately on each map (1,2,3) and in each energy band. The results of the search were put into one list from which the sources were selected. Finally, the search was performed on the total map (T) to find sources that were detected only on the map integrated over the whole time period and identify the sources that were active only during specific time periods. The resulting catalog of variable sources detected by \\emph{INTEGRAL} can be found in Table~\\ref{varcat}.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig4.eps}\n\\caption{Number of significantly variable sources detected in different energy bands classified by types.}\n\\label{chart:poptypes}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{chart:poptypes} shows the number of significantly variable sources from our catalog for each source type (HMXB, LMXB, AGN, unidentified, and miscellaneous). The majority of the variable sources ($\\sim$76\\%) in all energy bands are Galactic X-ray binaries. We see that in the 100-200 keV band there are four times more LMXBs than HMXBs. The remaining variable sources are AGNs ($\\sim$10\\%), unidentified ($\\sim$7.5\\%), and other ($\\sim$6.5\\%) source types (cataclysmic variables, gamma-ray bursts, and pulsars). The number of significantly variable sources decreases with energy for each population type, which only reflects the sensitivity of the instruments.\n\nThe distribution of the variability of sources from Table~\\ref{varcat} is presented in Fig.~\\ref{chart:histo}. The variability distribution is approximately normal with one evident outlier, the gamma-ray burst IGR J00245+6251 \\citep{Vesetal05}. However, this is mainly caused by the upper limit to the mean flux of this source being too low. The gamma-ray burst IGR J00245+6251 is detected in all three energy bands. Figure~\\ref{fig:V1-V2} shows the fractional variability in the 40-100 keV band versus the variability in the 20-40 keV band. The majority of the sources that are found to be variable in both the 20-40 keV and 40-100 keV energy bands exhibit nearly equal variability in both bands. \n\nTo show the detection threshold for the variability of a source, we compiled a diagram (see Fig.~\\ref{fig:logV-logF}) where we plot the fractional variability versus flux for all detected variable sources along with the upper limit to the fractional variability versus flux for all the reference catalog sources detected in our significance map in 20-40 keV energy band. The upper limit was determined by substituting $S_{int,s}^2$ with $S_{int,b}^2 + 3 \\Sigma$ in Eq.~(\\ref{fracvar}). Although we chose all the sources detected in our significance map, we used the mean flux of the source because, unlike significance, it is an exposure-independent value. 
According to our definition, the fractional variability is also an exposure-independent value, so we plot it versus the exposure-independent mean flux to clearly see the detection threshold. We can see that starting from a limiting flux ($6.2\\times10^{-11}$~ergs\/cm$^2$s or $\\sim10$~mCrab) nearly all catalog sources are found to be variable. The majority of the bright sources are binary systems, which explains why nearly all of them are found to be variable.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig5.eps}\n\\caption{Number of sources versus fractional variability of the sources detected in different energy bands.}\n\\label{chart:histo}\n\\end{center}\n\\end{figure}\n\n\\begin{table*}[!h]\n\\begin{center}\n\\caption{The transient sources detected at the intrinsic variance map and not detected at the significance map. }\n\\begin{tabular}{|l|l|l|l|l|l|}\n\\hline\nName & Class & $V_{r,notnorm} \\pm Err$ (c\/s) & Exposure (ksec) & Map & Band\\\\\n\\hline\n\\input{varnstab.tex}\n\\hline\n\\end{tabular}\n\\label{transou}\n\\end{center}\n\\end{table*}\n\nWe also found a number of variable sources that have no counterparts in the significance map, which means that we were unable to measure the mean flux of these sources as it is compatible with the background mean flux. Hence, we are not able to give their fractional variability value, which is a normalized value and therefore tends to infinity with infinite errors. For these sources, we provide a 3-$\\sigma$ lower limit to their fractional variability by taking a 3-$\\sigma$ upper limit on their mean flux. Most of the sources are transient, and sometimes (in a specific observation period in a given energy band) the source is not detected in the significance map because the sensitivity of the instrument decreases with energy. Therefore, we can see that the variability map provides a tool to detect transient or faint but variable sources that would be missed in mosaics averaged over long timescales. To illustrate the detectability of the sources in the variability map, we provide a list (see Table~\\ref{transou}) of the sources that are smeared out in the significance map because of their high exposures. The values given in the table are not normalized variability values, $V_{r,notnorm} = \\sqrt{S_{int,s}^2 - S_{int,b}^2}$ along with their 3-$\\sigma$ errors, $Err$. To verify that the sources that are absent in the significance maps but detected in the variability maps are not the result of the low detection threshold, we ran the same detection procedures on the mock catalog of 2500 false sources distributed randomly and uniformly over the sky. The test detected 11 of 2500 false sources seen in variability maps and absent from the significance maps, compared to 8 of 576 in the case of the real catalog. This means that our detection criteria are rather strict.\n\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig6.eps}\n\\caption{Fractional variability of sources in the 20-40 keV band versus fractional variability in the 40-100 keV band.}\n\\label{fig:V1-V2}\n\\end{center}\n\\end{figure}\n\nWe comment on the inclusion of Crab in our catalog. It is known to be a constant source that is commonly used as a ``standard candle'' in high-energy astrophysics. There are two reasons why it appears in the catalog. The first is the deterioration of the detector electronics onboard the spacecraft. The loss of detector gain is around 10\\% over the entire mission. 
Although this loss is partially compensated by the software, it introduces around 3-5\\% variability in our method. The remaining variability can be ascribed to systematic errors in the spacecraft alignment \\citep{Waletal03}, which for OSA 7.0 are of the order of 7 arcsec \\citep{Ecketal09}, hence slightly different positions of the Crab are found in each observation. Since Crab is a very bright source, its Gaussian profile is very steep. When the peak is slightly offset, we measure a sharp decrease in the flux at the catalog position of Crab. The combination of the two effects leads to an artificial variability of Crab in all energy bands. A similar effect occurs in the pixels adjacent to the catalog position of Crab.\n\\begin{figure}[!t]\n\\begin{center}\n\\includegraphics[width=0.99\\columnwidth]{fig7.eps}\n\\caption{Fractional variability (V) versus flux (F) for all significantly variable sources (red crosses) from Table~\\ref{varcat}. The energy band is 20-40 keV. For comparison, the green crosses show the fractional variability detection threshold (3-$\\sigma$) versus flux for all reference catalog sources detected in the significance map. The sources not detected in the variability map are indicated by blue stars and are coincident with their green cross counterparts. Pink squares indicate 3-$\\sigma$ lower limits to the fractional variability of the sources that are not detected in the significance map.}\n\\label{fig:logV-logF}\n\\end{center}\n\\end{figure}\nWhen the peak of the PSF is found at a slightly displaced position, we find a sharp increase in the flux in this pixel. Our interpretation is confirmed by the image of Crab in the variability map, where the closest pixels to the source are found to be very variable, creating a ring-like structure. Moreover, it has been found that the observed position of Crab does not coincide with the position of the pulsar \\citep{Ecketal09}, which thus explains such an effect at the expected source position. To determine the influence of this effect on other sources, we inspected the 11 brightest sources in the 20-40 keV band and looked for a similar situation. In the case of Cyg~X-1 and Sco~X-1, the same effect, albeit weaker, was also seen. However, the derived value of their variability was not found to be affected by this effect. This effect contributes mainly to the variability of the pixels around the catalog position of these sources. Cyg X-1 and Sco X-1 are intrinsically very variable so the value extracted from the source position is much higher than for surrounding pixels, which is the opposite of the situation found in the Crab. 
For all the other sources, the influence of the misalignment was found to be negligible.\n\n\\begin{table*}[!h]\n\\begin{center}\n\\caption{Sources with additional error induced by the ``ghosts''.}\n\\begin{scriptsize}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|}\n\n\\hline\n\\multicolumn{1}{|l|}{Name} & \\multicolumn{5}{c|}{20-40 keV} &\\multicolumn{5}{c|}{40-100 keV} &\\multicolumn{5}{c|}{100-200 keV} \\\\\n\\cline{2-16}\n\\multicolumn{1}{|l|}{} & $V_r$ & $V_{r,err}$ & $G_{err}$ & Gexp & Exp & $V_r$ & $V_{r,err}$ & $G_{err}$ & Gexp & Exp & $V_r$ & $V_{r,err}$ & $G_{err}$ & Gexp & Exp \\\\\n\n\\hline\n\\input{ghostab.tex}\n\\hline\n\\end{tabular}\n\\label{ghostab}\n\\end{scriptsize}\n\\end{center}\n\\hspace{-6mm} Here, $V_r$ is the true fractional variability of the source; $V_{r,err}$ is the fractional variability error, the same as in the catalog; $G_{err}$ is the additional error induced by the ``ghost''; Gexp, in ksec, is the exposure time during which the source was affected by the ``ghost''; Exp, in ksec, is the total exposure time.\n\\end{table*}\n\nFinally, we performed a test to find cases in which a source is coincident with the ghost of another source within one ScW (a case described in Sect.~\\ref{sec:method}). We considered all the reference catalog sources and searched for ``ghost-source'' pairs in individual ScWs from the list used for our all-sky maps. If one of the sources in a given pair was present in our variability catalog, the pair was selected for further analysis. According to our simulations, if two sources are in the ghosts of each other, the fainter one loses up to half of its flux to the flux of the brighter one. If one of the sources is substantially brighter than the other, the relative distortion of the flux of the bright source is minor. Therefore, the flux of the source is significantly distorted if its ``ghost companion'' is brighter or comparable in brightness to the source itself. In the latter case, the ``ghost companion'' is also affected. Thus, we assume that if the source in the ghost is more than ten times fainter than its ``ghost companion'', its contaminating influence is negligible, whereas its own flux is contaminated significantly. After adopting this condition, we obtained a list of the sources affected by such position coincidences and the exposure times for each of them during which they were affected. For nearly all of the sources from the list, the fraction of exposures with distorted flux is less than 1\\%, which implies a relative uncertainty in the average flux and fractional variability of the same order. This is much smaller than the error set on variability in our catalog and, as can be seen from Fig.~\\ref{fig:logV-logF}, is smaller than the variability detection threshold even for the brightest sources. Nonetheless, a number of sources have larger than 1\\% errors induced by the ghosts, which we indicate with a $^{g}$ superscript in the catalog, and we provide Table~\\ref{ghostab} where both errors are given. As can be seen, in all cases the ghost-induced error is much smaller than the catalog variability error.\n\n\n\\section{The All-Sky online}\n\\label{sec:skyview}\n\nTo make our results available to the community, we decided to take advantage of the SkyView interface \\citep{Mcgetal98} (i.e., a Virtual Observatory service available online\\footnote{http:\/\/skyview.gsfc.nasa.gov} developed at and hosted by the High Energy Astrophysics Science Archive Research Centre (HEASARC)). 
SkyView provides a straightforward interface where users can retrieve images of the sky at all wavelengths from radio to gamma-rays \\citep{Mcgetal98}. SkyView uses NED and SIMBAD name resolvers to translate names into positions and is connected to the HEASARC catalog services. The user can retrieve images in various coordinate systems, projections, scalings, and sizes as ordinary FITS as well as standard JPEG and GIF file formats. The output image is centered on the queried object or sky position. SkyView is also available as a standalone Java application \\citep{Mcgetal97}. The ease-of-use of this system allowed us to incorporate INTEGRAL\/ISGRI variability and significance all-sky maps into the SkyView interface. We developed a simple web interface for the SkyView Java application and have made all-sky mosaics available online\n\n\n\\section{Concluding remarks}\n\\label{sec:conclu}\n\nOur study of variability of the \\emph{INTEGRAL} sky has found that 202 sources exhibit significantly variable hard X-ray emission. To compile the catalog of variable sources, we have developed and implemented a method to detect variable sources, and compiled all-sky variability maps. A search for variable sources from the \\emph{INTEGRAL} reference catalog was carried out in three energy bands: 20-40, 40-100, and 100-200 keV. The variable sources were detected in all studied energy bands, although their number sharply decreases with increasing energy. A number of sources were detected only during specific time periods and not detected on the map integrated over the whole time period. These sources are most likely transient. On the other hand, some sources were found to be variable only on the total variability map. This means that they might be persistent and not extremely variable sources.\n\nWe found that around 76\\% of all variable sources of our catalog are binary systems and nearly 24\\% of variable sources are either AGNs, unidentified, or other source types. The variability measurements of our catalog sources have rather normal distributions in all energy bands. We found that in the majority of cases the variability of the source in the first band correlates with its variability in the second band. We derived the limits to the fractional variability value to be detected as a function of the source flux (Fig.~\\ref{fig:logV-logF}). We also found that variability map can be a tool to detecting transient or faint but variable sources that would be missed in mosaics averaged over long timescales. In a forthcoming paper, we will discuss in more detail the properties of the variable sources detected during this study in order to gain some physical insights into the population of hard X-ray sources.\\\\\n\nFinally, we emphasize that the sky maps generated during this study represent 6 years of \\emph{INTEGRAL} operation in orbit. In addition to the variability maps, we have compiled significance maps in three energy bands (20-40, 40-100, and 100-200 keV). All our maps are available as an online service to the community using the SkyView engine. 
\n\n\\section*{Acknowledgements}\n\nBased on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with participation of Russia and the USA.\n\nThis work was supported by the Swiss National Science Foundation and the Swiss Agency for Development and Cooperation in the framework of the programme SCOPES - Scientific co-operation between Eastern Europe and Switzerland. The computational part of the work was done at VIRGO.UA\\footnote{http:\/\/virgo.org.ua} and BITP\\footnote{http:\/\/bitp.kiev.ua} computing resources.\n\nWe are grateful to the anonymous referee for the critical remarks which helped us improve the paper.\n\nIT acknowledges the support from the INTAS YSF grant No.~06-1000014-6348. \n\n\\section*{Appendix: Error Calulation on $V_r$}\n\nWe use the standard error propagation formula to find the error, $\\sigma_{V_r}$, of the function $V_r = f(S_{int,s}^2=a, S_{int,b}^2=b, F_{s}=c, F_{b}=d)$ so :\n\\begin{equation}\n\\sigma_{V_r} = \\sigma_{f} = \\sqrt{\\left(\\frac{\\partial f}{\\partial a}\\sigma_a\\right)^2 + \\left(\\frac{\\partial f}{\\partial b}\\sigma_b\\right)^2 + \\left(\\frac{\\partial f}{\\partial c}\\sigma_c\\right)^2 + \\left(\\frac{\\partial f}{\\partial d}\\sigma_d\\right)^2},\n\\label{VERR}\n\\end{equation}\nwhere $\\sigma_a = \\sigma_b = \\Sigma$ and $\\sigma_c = \\sigma_d = \\sigma$ for a given source. By substituting the appropriate values in Eq.~\\ref{VERR} and by taking derivatives we find that:\n\\begin{equation}\n\\sigma_{V_r} = \\sqrt{\\frac{\\Sigma^2}{2\\left(F_s - F_b\\right)^2 \\left(S_{int,s}^2 - S_{int,b}^2 \\right)} + \\frac{2 \\sigma^2 \\left(S_{int,s}^2 - S_{int,b}^2 \\right) }{\\left(F_s - F_b\\right)^4}}.\n\\end{equation}\n\n\n\\bibliographystyle{aa}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Band Structure Calculations}\nTo support these claims, we performed extensive ab-initio numerical simulations of adatom-decorated stanene. For all of our simulations, we used the high-buckled, free-standing structure of stanene shown in Fig.~\\ref{fig:struc_BC}, with buckling height $\\delta$=0.859\\AA~, and in-plane lattice parameter $a=4.676$\\AA,~as found by relaxing the structure of free-standing stanene using DFT. We decorated stanene with Zn adatoms at one of each of the four structural sites for the buckled honeycomb lattice: hollow (H), bridge (B), valley (V), and top (T)~\\cite{naqvi2017exploring}, and relaxed the height of the adatoms. Because the phenomena in which we are interested require sublattice site symmetry breaking, we primarily focus on the V and T adatom sites for the remainder of this work.\n\nTo determine the stability of Zn atoms in the V and T positions, we used density functional theory to calculate the adsorption energy of Zn at each site using the definition $E_{ads} = E_{Zn+stanene} - E_{stanene} - E_{Zn}$. We found the adsorption energies to be $E_{ads}^{V} = -0.404$ eV and $E_{ads}^{T} = -0.545$ eV. To explore the possibility that the adatoms would migrate from the V and T positions, we also performed both a nudged elastic band calculation~\\cite{henkelman2000climbing} to determine the most favorable pathway for adatom transport away from the V or T sites, and a dimer method calculation~\\cite{henkelman1999dimer} to precisely determine the activation barriers for Zn migration. 
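Before turning to the migration results, the adsorption-energy bookkeeping defined above can be made explicit with a minimal sketch. The total energies below are placeholders chosen only so that the quoted $E_{ads}$ values are recovered; they are not the DFT totals of this work.
\\begin{verbatim}
# Minimal bookkeeping for E_ads = E(Zn+stanene) - E(stanene) - E(Zn).
# Negative values mean the adatom is bound; the more negative site is preferred.
def adsorption_energy(e_combined, e_substrate, e_adatom):
    return e_combined - e_substrate - e_adatom

# placeholder totals (eV) for a Zn adatom on a stanene supercell
e_slab, e_zn = -100.000, -0.200
for site, e_tot in {"V": -100.604, "T": -100.745}.items():
    print(site, round(adsorption_energy(e_tot, e_slab, e_zn), 3))
# reproduces E_ads(V) = -0.404 eV and E_ads(T) = -0.545 eV by construction
\\end{verbatim}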
These calculations indicated that the Zn atoms on V and T sites are most likely to move to the H site, with diffusion barriers $E^{V}_{barrier}$ = 0.008 eV and $E^{T}_{barrier}$ = 0.011 eV. This is similar to other reports of barrier heights for the migration of adatoms on stanene for the V\/T $\\rightarrow$ H processes~\\cite{mortazavi2016staneneAdatomNaLi} and indicates that the adatoms will be slow to diffuse at temperatures below $\\sim$100K. These migration barriers, while limiting stability at higher temperature, would also allow the adatoms to be manipulated by STM techniques more easily. However, for cases where a higher operating temperature is desirable, we identify other candidate adatoms with higher diffusion barriers in the supplemental material.\n \nNext we calculated the band structure for bare stanene and stanene decorated with Zn adatoms at the T or V sites, shown in Fig~\\ref{fig:struc_BC}a-c. Bare stanene has massive Dirac cones with negative band gaps of magnitude $E_g^{bare} = 0.073$ eV at the $\\v{K}$ and $\\v{K}'$ points. When decorated with Zn adatoms in either position, the degenerate Dirac cones are spin-split, resulting in a smaller $E_g^{V}=0.095$ eV gap for V decoration and a larger $E_g^{T}=0.398$ eV gap for T decoration. Crucially, we find that the decoration leaves the bands away from the Fermi level qualitatively unchanged, ensuring that the significant physical changes in the material can be captured by the topological indices near the Fermi-level. \n\nWe repeated this analysis for stanene decorated at the T and V sites with each element in rows 2 through 5 of the periodic table. We find that nearly all elements produce bands that differ significantly from bare stanene. Additionally, many elements that do not qualitatively change the band structure of stanene end up doping the system to result in a metallic character. We provide more details of the viability of these adatom species and how they compare to Zn in the supplementary material.\n\n\n\n\\section{QSH and QVH Indicators}\nUsing the results of the ab-initio calculations, we generated tight-binding parameters via the maximally-localized Wannier function procedure. From the resulting Hamiltonians we calculate the Berry curvature for bare and decorated stanene~\\cite{marzari2012maximally}. As shown by the coloration of the band structures in Fig.~\\ref{fig:struc_BC}b and c, with red and blue representing positive and negative Berry curvature, both decorations produce equal and opposite Berry curvature concentrations at the $\\v{K}$ and $\\v{K}'$ points, indicating that one set of bands at each of these points was inverted by the Zn decoration. \n\nThe origin and consequences of these band inversions are best understood via a low-energy effective model for the massive Dirac cones at the $\\v{K}$ and $\\v{K}'$ points in stanene~\\cite{yao_spin-orbit_2007, Liu11, molle_buckled_2017}:\n\\begin{equation}\n \\begin{split}\n H=\\hbar v_F(\\eta k_x\\tau^x+ k_y\\tau^y)+\\eta\\tau^z\\sigma^z\\lambda_{SO}+\\Delta\\tau^z,\n \\end{split}\n \\label{eqn:ham_tb}\n\\end{equation}\nwhere $\\tau^\\alpha$ and $\\sigma^\\alpha$ are Pauli matrices for the sublattice and spin degrees of freedom respectively, $v_F$ is the Fermi velocity, $\\eta=+1$ for $\\v{K}$ and $\\eta=-1$ for $\\v{K}'$, and $\\lambda_{SO}$ is the spin-orbit coupling strength. 
The final term describes a staggered potential of strength $\\Delta$ between the sublattice sites generated by the adatom decoration.\n\nIn the absence of the staggered potential $\\Delta$, this model describes the QSH phase realized by bare stanene. Because the Berry curvature distribution is concentrated around the $\\v{K}$ and $\\v{K}'$ points, and the $z$-component of the spin is conserved, we can define spin-valley resolved Chern numbers, which are protected by time-reversal symmetry and spin-conservation, by integrating the Berry curvature of a particular spin around $\\v{K}$ or $\\v{K}'$~\\cite{ezawa_monolayer_2015}. We obtain $C_{K\\uparrow}=C_{K'\\uparrow}=\\pm1\/2$ and $C_{K\\downarrow}=C_{K'\\downarrow}=\\mp1\/2$, with the signs dependent on the sign of $\\lambda_{SO}$. In terms of these spin-valley resolved indices, the total Chern number $C \\in \\mathbb{Z}$ and the spin Chern number $C_s \\in \\mathbb{Z}_2$ are\n\\begin{equation}\n \\begin{aligned}\n C &= C_{K\\uparrow} + C_{K\\downarrow} + C_{K'\\uparrow} + C_{K'\\downarrow} = 0 \\\\\n C_s &= \\frac{1}{2}(C_{K\\uparrow} - C_{K\\downarrow} + C_{K'\\uparrow} - C_{K'\\downarrow}) = 1 \\text{ mod } 2.\n \\end{aligned}\n\\end{equation}\nAccording to the bulk-boundary correspondence, we expect that interfaces between regions with different spin Chern numbers host gapless helical modes that carry a spin current\\cite{kanemele05, Bernevig06, hasan_colloquium_2010, qi_topological_2011}. Pairs of helical modes are not protected by time-reversal symmetry, so the spin Chern number is defined modulo $2$, $C_s\\in\\mathbbm{Z}_2$, and interfaces either have zero or one pair of stable helical modes.\n\nNow when we consider the adatom decoration we find that a sufficiently large positive (negative) sublattice potential $\\Delta$ induces a spin-valley resolved band inversion and changes the signs of $C_{K,\\uparrow}$ and $C_{K',\\downarrow}$ ($C_{K,\\downarrow}$ and $C_{K',\\uparrow}$)~\\cite{Ni12, ezawa_monolayer_2015}. The resulting Chern and spin Chern numbers both vanish, but we can instead assign a translation symmetry protected \\emph{momentum} vector charge to each valley.\nThis charge is equal to the vector describing the position of the valley in momentum space and defines the valley vector index: $\\vec{C}_{v}=\\hbar\\vec{K}(C_{K}-C_{K'})=2\\hbar\\vec{K}$.\nSystems with a non-vanishing $\\vec{C}_v$ are called quantum valley Hall (QVH) insulators~\\cite{xiao_valley-contrasting_2007, ren_topological_2016}. \n\nInterestingly, a change in the QVH index across an interface is accompanied by a translation symmetry protected current~\\cite{Xiao07} carrying \\emph{lattice momentum} along the interface. \nThe amount of momentum transported along the edge is characterized by a scalar quantity $C_v$, equal to the projection of $\\vec{C}_v$ onto the unit vector $\\vec{\\tau}$ tangential to the interface.\nIn particular, a straight open edge satisfying $\\vec{\\tau} \\cdot \\vec{C}_v = 0$ projects the valleys on top of each other, resulting in a trivial edge as indicated by the vanishing scalar index $C_v=0$. In contrast, when $\\vec{\\tau}$ is parallel to $\\vec{C}_v$, we obtain $C_v=2\\hbar|\\vec{K}|$. It is customary to drop the factor of momentum $\\hbar|\\vec{K}|$ from the definition of $C_v$ altogether and work with the dimensionless valley index $C_{v}=C_{K}-C_{K'}=\\pm2$. 
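These statements can be checked directly on the low-energy model: a single gapped Dirac cone with mass $m$ contributes $(\\eta\/2)\\,\\mathrm{sgn}(m)$ to the Chern number, and the mass of the spin-$\\sigma$ sector at valley $\\eta$ is $\\Delta+\\eta\\sigma\\lambda_{SO}$. The sketch below evaluates the resulting indices; the numerical values of $\\lambda_{SO}$ and $\\Delta$ are placeholders, and the overall signs depend on the conventions chosen for the spin axis and for $\\lambda_{SO}$.
\\begin{verbatim}
# Spin-valley resolved Chern numbers of the massive Dirac model above, obtained
# from the sign of each Dirac mass m = Delta + eta*sigma*lambda_SO.
# Placeholder energies; overall signs are convention dependent.
def sign(x):
    return (x > 0) - (x < 0)

def sector_chern(delta, lam_so):
    # keys (eta, sigma): eta = +1/-1 for K/K', sigma = +1/-1 for spin up/down
    return {(eta, s): 0.5 * eta * sign(delta + eta * s * lam_so)
            for eta in (+1, -1) for s in (+1, -1)}

def indices(c):
    C  = sum(c.values())
    Cs = 0.5 * (c[(1, 1)] - c[(1, -1)] + c[(-1, 1)] - c[(-1, -1)])
    Cv = (c[(1, 1)] + c[(1, -1)]) - (c[(-1, 1)] + c[(-1, -1)])
    return C, Cs, Cv

lam_so = 0.04                      # placeholder spin-orbit scale (eV)
for delta in (0.0, 0.10):          # no decoration vs. strong sublattice potential
    print(delta, indices(sector_chern(delta, lam_so)))
# delta = 0         -> (C, Cs, Cv) = (0, 1, 0): the QSH phase of bare stanene
# |delta| > lam_so  -> (C, Cs, Cv) = (0, 0, +/-2): the QVH phase
\\end{verbatim}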
We list the values of the Chern numbers and valley index realized by bare and decorated stanene in Table~\\ref{tab:indices}.\n\n\\begin{table}[t]\n \\centering\n \\def1.25{1.25}\n \\begin{tabular}{| c | c c c | c |}\n \\hline\n Phase & $C$ & $C_{s}$ & $C_{v}$ & Decoration pattern \\\\\n \\hline\n \\hline\n \n \\multirow{2}{*}{QVH} & 0 & 0 & 2 & Zn at V \\\\\n \\cline{2-5}\n & 0 & 0 & $-2$ & Zn at T \\\\\n \\hline\n QSH & 0 & 1 & 0 & No adatom decoration \\\\\n \\hline\n \\end{tabular}\n \\caption{The Chern number, $C$, spin-Chern number, $C_s$, and valley-Chern number, $C_{v}$, for each topologically nontrivial phase realized by Hamiltonian (\\ref{eqn:ham_tb}), along with the corresponding adatom decoration patterns.}\n \\label{tab:indices}\n\\end{table}\n \n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{images\/figure2.pdf}\n \\caption{\n %\n \\textbf{Interfacial edge modes from the QSH-QVH structure.}\n %\n The (a, b, d, e) band structure and (c, f) representative edge state probability distributions for the QSH-QVH ribbon for zigzag (top) and armchair (bottom) orientations. In (a, b, d, e), the thin gray and thick black lines represent the bulk and edge states, respectively. (b, e) zoom in on the near-gap region at the $\\v{K}'$ point. The red dots marks a spin-up edge state propagating down the right interface and the blue dots marks a spin-up edge state propagating up the left interface. The line plots in (c) and (f) represent the probability density of the edge states indicated by the red and blue dots integrated over the plane perpendicular to the width of the ribbon. The shapes of the probability density plots for the zigzag interfaces differ because the adatom decoration makes the two interfaces asymmetric. An in-plane view of the interface structure, with colored atoms corresponding to those in Fig.~\\ref{fig:struc_BC} is shown in the bottom row of (c) and (f). The locking of the spin and valley degrees of freedom at each interface is the hallmark of the QSH-QVH edge state.\n %\n }\n \\label{fig:qsh_qvh}\n\\end{figure}\n\n\\section{First-Principles Interface Calculations}\nWith this understanding of the bulk properties of decorated and bare stanene we can now consider interfaces between different spatial domains. Three interfaces can be constructed from the three phases realized by bare and decorated stanene: two distinct QSH-QVH interfaces where the spin Chern number changes from 1 to 0 and the valley Chern number changes from 0 to $\\pm2$, and a QVH-QVH interface where the spin Chern number is zero on both sides while the valley Chern number changes from $-2$ to $2$. As discussed above, the QVH-QVH interface is sensitive to the orientation of the interface relative to the valley separation, $\\mathbf{K}-\\mathbf{K}'$, so we consider only ``zigzag'' QVH-QVH interfaces for which the edge is perpendicular to the valley separation. The QSH-QVH interfaces are insensitive to the edge orientation because the change in $C_s$ does not depend on the valley degree of freedom, so we consider both zigzag and armchair interfaces for this case. \n\nNow let us consider the possible interface states. At QVH-QVH interfaces, the valley Chern number changes from $\\mp2$ to $\\pm2$, indicating that four gapless edge modes will appear. Each valley hosts two chiral modes, with the chirality determined by the valley such that a net momentum current will flow along the interface. 
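The interface mode counting used in the following paragraphs follows from these indices alone; a few lines of bookkeeping (simply restating Table~\\ref{tab:indices}, not an independent calculation) make this explicit.
\\begin{verbatim}
# Protected interface modes from the bulk indices of Table 1.  A change of Cs
# (mod 2) protects one helical pair; a change of Cv protects |dCv| chiral modes,
# provided the interface keeps the two valleys separated (zigzag orientation).
# For a QSH|QVH interface the two counts describe the same pair of spin-valley
# locked modes, not two independent sets.
phases = {"QSH": (1, 0), "QVH_V": (0, +2), "QVH_T": (0, -2)}   # (Cs, Cv)

def protected_modes(a, b):
    cs_a, cv_a = phases[a]
    cs_b, cv_b = phases[b]
    return {"helical pairs": (cs_a - cs_b) % 2,
            "valley modes (zigzag)": abs(cv_a - cv_b)}

for pair in [("QSH", "QVH_V"), ("QSH", "QVH_T"), ("QVH_V", "QVH_T")]:
    print(pair, protected_modes(*pair))
\\end{verbatim}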
At QSH-QVH interfaces, the spin Chern number changes by one, and the valley Chern number changes by two, producing a pair of oppositely propagating spin-valley polarized modes. The modes in each valley are of opposite spin and opposite chirality, resulting in both spin and momentum currents at the interface.\n\nTo determine the characteristics of these interface modes in decorated stanene, we performed large-scale first principles electronic structure simulations of stanene nanoribbons decorated to produce QVH-QSH and QVH-QVH interfaces. The translation-invariant direction points in the $\\vec{b}$ and $\\frac{1}{2}\\vec{a} + \\vec{b}$ directions to realize zigzag and armchair interfaces, respectively. The unit vectors $\\vec{a}$ and $\\vec{b}$ point along the primitive lattice vectors, as shown in Fig.~\\ref{fig:struc_BC}. To create topological interfaces, we selectively decorated domains in the transverse direction (the x-axis in Fig.~\\ref{fig:qsh_qvh}c, \\ref{fig:qsh_qvh}f, and \\ref{fig:qvh_qvh}c). We used periodic boundary conditions in the transverse direction to eliminate spurious interfaces with the vacuum, forming two topological interfaces per ribbon. All zigzag ribbons were 145.67~\\AA ~wide and the armchair ribbon was 149.64~\\AA ~wide.\n\n To make the most of finite computational resources, the relative sizes of the decoration domains were chosen to minimize wavefunction overlap between the exponentially decaying interface states. Accordingly, the zigzag QSH-QVH ribbon is T-decorated on 10 unit cells and bare on 26 unit cells, the armchair QSH-QVH ribbon is T-decorated on 10 unit cells and bare on 22 unit cells, and the QVH-QVH ribbon is T-decorated on 10 unit cells and V-decorated on 26 unit cells. We note that the overlap of the interface wavefunctions in the bulk of the ribbon leads to undesired gaps in the interface spectrum produced by finite-size effects. We show in the supplementary material that these interface spectrum gaps vanish for sufficiently wide ribbons, and we find that the interface states decay exponentially into the insulating bulk with a decay length roughly determined by the ratio of the Fermi velocity to the bulk gap. \n\nThe resulting band structure and interface wavefunction plots are shown in Fig.~\\ref{fig:qsh_qvh} for the QSH-QVH zigzag and armchair ribbons. Each interface in the zigzag ribbon hosts a helical pair of spin-valley locked modes that produce both equilibrium spin and momentum currents on the interface. Each interface of the armchair ribbon also hosts a helical pair of modes, but since the valleys in this case are projected to the $\\Gamma$ point of the Brillouin zone the interfaces only carry spin current, not momentum current. The configurations of edge states we find agree with the $\\v{k}\\cdot\\v{p}$ model predictions of the previous section. As mentioned above, both ribbons have a small gap in the interface state spectrum, $E_g \\approx 0.03$ eV, that originates from the overlap and hybridization of the wavefunctions on the two interfaces and would vanish for a larger system. The decay lengths of the interface states in the zigzag ribbon are $\\lambda_T=5.95$ \\AA{} and $\\lambda_0=32.4$ \\AA{} in the T-decorated and bare regions, respectively. 
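A quick way to gauge the scale of these decay lengths is the estimate $\\lambda \\sim \\hbar v_F\/E_g$ mentioned above. In the sketch below the value of $\\hbar v_F$ is an assumed Dirac-velocity scale for stanene, not a number extracted in this work; the gaps are those quoted earlier for bare and decorated stanene.
\\begin{verbatim}
# Order-of-magnitude check: interface states decay into each insulating region
# over roughly hbar*v_F / E_gap.  HBAR_VF is an assumed scale, not a fitted value.
HBAR_VF = 3.0                                               # eV * Angstrom (assumed)
gaps = {"bare": 0.073, "Zn at V": 0.095, "Zn at T": 0.398}  # eV, from the text

for region, gap in gaps.items():
    print(f"{region:8s}  lambda ~ {HBAR_VF / gap:5.1f} Angstrom")
# ~40 A (bare), ~30 A (V) and ~8 A (T): the same ordering and magnitude as the
# decay lengths extracted from the ribbon wavefunctions
\\end{verbatim}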
The decay lengths in the armchair ribbon are $\\lambda_T=6.55$ \\AA{} and $\\lambda_0=35.7$ \\AA{}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1.0\\textwidth]{images\/figure3.pdf}\n \\caption{\n %\n \\textbf{Interfacial edge modes from the QVH-QVH structure.}\n %\n The (a, b) band structure and (c) representative edge state probability densities of the QVH-QVH interface with a zigzag orientation. In (a, b), the thin gray and thick black lines represent the bulk and edge states, respectively. (b) Zoom in on the near-gap region at the $\\v{K}'$ point. The red and purple dots mark states propagating along the right interface and the blue and yellow dots mark edge states propagating along the left interface. The line plots in (c) represent the probability density integrated over the plane perpendicular to the width of the ribbon. The shapes of the probability density plots for each interface differ because the adatom decoration makes the two interfaces asymmetric. An in-plane view of the structure, with colored atoms corresponding to those in Fig.~\\ref{fig:struc_BC}, is shown in the bottom row of (c). Each valley contributes two unpolarized edge modes to each interface, as the indicated by the change of the valley Chern number by four across each interface.\n %\n }\n \\label{fig:qvh_qvh}\n\\end{figure}\n\nThe results for the QVH-QVH ribbon are shown in the same format as the QSH-QVH ribbon in Fig.~\\ref{fig:qvh_qvh}. Each interface hosts two chiral modes from each valley, with the valleys contributing modes of opposite chirality. This leads to the equilibrium edge momentum current predicted above. In this case the decay lengths of the edge states are $\\lambda_T=5.68$ \\AA{} and $\\lambda_V=23.8$ \\AA{} in the T- and V-decorated regions, respectively. The gaps in the interface state spectrum resulting from wavefunction overlap are $E_g \\approx 0.02$ eV and and $0.006$ eV. We report two gaps here because there are two sets of interface states at each valley for this ribbon. For interfaces with a finite projection onto the valley separation direction, the momentum carried by the edge is reduced. In the extreme case of an armchair interface, the valleys exactly overlap, the edge carries no momentum current, and any local perturbation can gap out the interface states. \n \n\\section{Technological Applications}\nThe above calculations demonstrate that decorating stanene with Zn adatoms presents a uniquely promising platform for technological applications. The topological domains can be patterned with a high degree of control by manipulating the adatom positions with an STM tip, permitting the fabrication of many topological devices, two of which are depicted schematically in Fig.~\\ref{fig:devices}. Furthermore, the interface states residing at domain walls are localized on the scale of tens of nanometers, which permits extremely dense packing of features. One of the first proposals for an application of topological edge modes was designer interconnect networks, which are a possible solution to the ``interconnect bottleneck'', wherein scattering and parasitic capacitance in interconnects leads to signal delays that prohibit further miniaturization of semiconductor devices~\\cite{george_chiral_2012}. The minimum metal pitch, or center to center distance between interconnects, of the current semiconductor manufacturing technology node is 24 nm to 36 nm~\\cite{ITRS}. 
At this scale, grain boundary and defect scattering leads to large resistances that inhibit the performance of traditional copper interconnects~\\cite{graham_resistivity_2010}. Considering the edge state decay lengths obtained in our simulations, the minimum pitch that could be achieved with Zn-decorated stanene interconnects is also on the order of tens of nanometers. However, the topological protection of the interface modes eliminates the issue of scattering, drastically improving performance with no compromise on feature size.\n\nThe interface modes of Zn-decorated stanene also have many applications beyond the world of conventional electronics. The fields of spin- and valley-tronics attempt to process information by exploiting the spin and valley degrees of freedom, rather than the charge degree of freedom~\\cite{bader_spintronics_2010, vitale_valleytronics_2018}. Topological interface modes are useful for engineering spin- and valleytronic devices such as waveguides, splitters, valves, and filters~\\cite{li_valley_2018, ezawa_topological_2013, xu_manipulating_2017,yang_topological_2020, qiao_spin-polarized_2011, jana_robust_2021}, and STM manipulation of Zn adatoms on stanene provides an ideal platform to fabricate the precise geometries of such devices. The same is true of electron quantum optics devices, such as valley Hall beam splitters, Mach-Zehnder interferometers, and Fabry-Perot resonators~\\cite{jo_quantum_2021, rickhaus_transport_2018, wei_mach-zehnder_2017}. This approach to engineering topological interface modes is also well suited to fabricating quantum computing gates out of helical edge states decorated with magnetic impurities~\\cite{niyazov_coherent_2020, chen_quantum_2014}, as STM manipulation of adatoms can be used to both create the edge states \\emph{and} deposit magnetic impurities.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{images\/figure4}\n \\caption{\n %\n \\textbf{Schematic drawing of example devices.}\n %\n A schematic showing two possible devices constructed via adatom decoration of stanene. The blue spheres are Sn tin atoms and the red and yellow spheres are Zn adatoms located at T and V sites, respectively. The blue, red, and yellow shading is a guide to the eye, indicating the regions that are bare, T-decorated, and V-decorated, respectively. The red and yellow arrows indicate quantum spin Hall edge modes, the color determined by the decoration site of the quantum valley Hall region. The left side of the image shows two densely-packed chiral interconnects constructed by decorating a thin region of stanene with Zn adatoms. The right side of the image shows a Mach-Zehnder interferometer built out of the edge modes of two adjacent T- and V-decorated regions.\n %\n }\n \\label{fig:devices}\n\\end{figure}\n\n\\section{Conclusion}\nWe have demonstrated that sublattice-selective decoration of stanene with Zn adatoms is an excellent platform for engineering topological interface modes. Because Zn adatoms bond relatively weakly to stanene, they act as an ideal sublattice potential and induce a QSH to QVH transition in stanene. The weak nature of the bond also does not transfer significant charge to stanene and permits STM manipulation of the adatoms allowing detailed patterning of topological interfaces. Importantly, the Zn-Sn bond is also strong enough for the decoration to remain stable at liquid nitrogen temperatures. 
The combined result of these effects is a platform suitable for fabricating arbitrary networks of topological interface modes without any of the deleterious effects that plague existing proposals for topological devices. These ideal topological interface-state networks have applications in semiconductor devices, spintronics, valleytronics, quantum electron optics, and even quantum computing. Implementing this technique is possible with existing fabrication and STM technology and can lead to transformative advances in topological device engineering.\n\n\\section*{Methods}\n The investigation of all possible adatom species in rows two through five of the periodic table was completed using JDFTx~\\cite{sundararaman2017jdftx} to take advantage of GPU functionality. These calculations were carried out with ONCV pseudopotentials~\\cite{hamann2013optimized, van2018pseudodojo} using the Perdew-Burke-Enzerhof (PBE)~\\cite{pbe} exchange-correlation functional, a 1090 eV plane-wave energy cutoff, a 15x15x1 $\\Gamma$-centered k-mesh, and Methfessel-Paxton smearing of $\\sigma = 0.0272$ eV. \n \n The electronic structure of the decorated interface structures was determined via the Vienna Ab-initio Software Package (VASP)~\\cite{vasp1,vasp2,vasp3} using the PBE functional with the projector-augmented wave (PAW)~\\cite{paw_pseudopotentials} potentials provided by VASP. The calculations were performed on a 1x15x1 $\\Gamma$-centered k-mesh with a plane-wave energy cutoff of 450 eV and Methfessel-Paxton smearing of $\\sigma = 0.01$ eV.\n \n All calculations included spin-orbit coupling, 20 \\AA{} of z-axis vacuum between periodic images, and the many-body dispersion (MBD) van der Waals correction~\\cite{ambrosetti2014long}. Probability density data was visualized using the pawpyseed package~\\cite{pawpyseed}.\n\n\\nolinenumbers\n\n\\bibliographystyle{naturemag}\n\\footnotesize{","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n The collisions of heavy ions at ultra relativistic energies are performed to create and\nstudy bulk strongly interacting matter at high temperatures. \nThe data from RHIC and LHC provide strong evidences of formation of a new state\nof matter known as quark gluon\nplasma (QGP) in these collisions \\cite{Proceedings:2019drx}.\n\nThe light charged hadrons and jets \ntransverse momentum ($p_{\\rm{T}}$) spectra give insight into the particle production\nmechanism in pp collisions. The partonic energy energy loss is reflected in these\nparticles when measured in heavy ion collisions \ndue to jet-quenching \\cite{Wang:2003aw} which measures the opacity of the medium.\nA modified power law distribution \\cite{Tsallis:1987eu, Biro:2008hz, Khandai:2013gva} describes\nthe $p_{\\rm{T}}$ spectra of the hadrons in pp collisions in terms of a\npower index $n$ which determines the initial production in partonic\ncollisions. 
In Ref.\\cite{Saraswat:2017kpg}, the power law function is applied to heavy ion\ncollisions as well which includes the transverse\nflow in low $p_{\\rm{T}}$ region and the in-medium energy loss (also in terms of power law)\nin high $p_{\\rm{T}}$ region.\n\n\n\n The spectra of hadrons are measured in both pp and AA collisions and\nnuclear modification factor ($R_{\\rm{AA}}$) is obtained.\nThe energy loss of partons can be connected to horizontal shift in the scaled hadron \nspectra in AA with respect to pp spectra as done by PHENIX measurement \\cite{Adler:2006bw}.\nTheir measurements of neutral pions upto $p_{\\rm T} \\sim 10$ GeV\/$c$ are consistent with the\nscenario where the momentum shift $\\Delta p_{\\rm T}$ is proportional to $p_{\\rm T}$.\n In similar approach, the authors in Ref.~\\cite{Wang:2008se}, extracted the fractional energy loss\nfrom the measured nuclear modification factor of hadrons as a function of $p_{\\rm{T}}$ below\n10 GeV\/$c$ in AuAu collisions at $\\sqrt{s_{\\rm{NN}}}$ = 200 GeV. They also considered that \nthe energy loss increases linearly with $p_{\\rm T}$.\n In recent PHENIX work \\cite{Adare:2015cua},\nfractional energy loss was obtained in the hadron spectrum measured upto $p_{\\rm T}=20$ GeV\/$c$\nin heavy ions collisions at RHIC and LHC energy and is not found to be constant.\nThis means that a constant fractional energy loss (energy loss varying linearly with $p_{\\rm T}$)\ncan be applicable only to low $p_{\\rm T}$ RHIC measurements.\n\n There are many recent studies which use so-called shift formalism to study the energy loss.\nThe work in Ref.\\cite{Spousta:2015fca} is based on \nshift formalism and describes the transverse momentum ($p_{\\rm{T}}$), rapidity ($y$)\nand centrality dependences of the measured jet nuclear modification factor ($R_{\\rm{AA}}$)\nin PbPb collisions at LHC.\nThey assume that the energy loss is given by a power law in terms of $p_{\\rm T}$, the value of power\nindex is obtained between 0.4 to 0.8 by fitting the $R_{\\rm{AA}}$ as a function of $p_{\\rm{T}}$\nand centralities.\n They also found that the energy loss linearly increases with number of participants. \nUsing the same method they study the magnitude and the colour charge dependence of the\nenergy loss in PbPb collisions at LHC energies using the measured data of the inclusive\njet suppression~\\cite{Spousta:2016agr}.\n The authors of the Ref.\\cite{Ortiz:2017cul} work on inclusive charged particle spectra\nmeasured in the range ($5 < p_{\\rm T} < 20$ GeV\/$c$) in heavy ion collisions at RHIC and LHC.\nThey assume that the energy loss linearly increases with $p_{\\rm T}$ and pathlength. \n\n\n\n\n\n\nThere are detailed calculations of energy loss of partons in the hot medium\n[see e.g. Refs.~\\cite{Wang:1994fx,Baier:1996kr}.\n Phenomenological models tend to define simple dependence of the radiative energy\nloss of the parton on the energy of the parton inside the medium [for a discussions\nsee Ref.~\\cite{Baier:2000mf}]. The energy loss can be characterized in terms of \ncoherence length $l_{\\rm{coh}}$, which is associated with the formation time of\ngluon radiation by a group of scattering centres. 
If $l_{\\rm{coh}}$ is less then \nthe mean free path $(\\lambda)$ of the parton, the energy loss is proportional to the\nenergy of the parton.\nIf $l_{\\rm{coh}}$ is greater than $\\lambda$ but less than the path length ($L$) of the\nparton ($\\lambda < l_{\\rm{coh}} < L)$, the energy loss is proportional to the square\nroot of the energy of the parton.\nIn the complete coherent regime, $l_{\\rm{coh}} > L$, the energy loss per unit length\nis independent on energy but proportional to the parton path length implying that\n$\\Delta p_T$ is proportional to square of pathlength.\n There is a nice description of charged particle spectra at RHIC and LHC using such a\nprescription by dividing the $p_{\\rm T}$ spectra in three regions \\cite{De:2011fe, De:2011aa}.\n For low and intermediate energy partons, $\\Delta p_T$ is assumed to be linearly\ndependent on $L$ \\cite{Muller:2002fa}. The work in Ref.~\\cite{Betz:2011tu} studies \nthe energy loss of jets in terms of exponent of the number of participants.\n It should be remembered that the fragmentation changes the momentum between the partonic\nstage (at which energy is lost) and hadron formation.\n There are models which say that softening occurs at fragmentation stage \ndue to color dynamics [See e.g. Ref.~\\cite{Beraudo:2012bq}].\n \n\n In general, one can assume that the energy loss of partons in the hot medium as a function of\nparton energy is in the form of power law where the power index ranges from 0 (constant) to\n1 (linear). Guided by these considerations, in the present work, the $p_{\\rm T}$ loss has been assumed\nas power law with different power indices in three different $p_{\\rm T}$ regions.\nThe energy loss in different collisions centralities are described in terms of fractional power\nof number of participants.\n The $p_{\\rm T}$ distributions in pp collisions are fitted with a modified power law and\n$R_{\\rm AA}$ in PbPb collisions can be obtained using effective shift ($\\Delta p_{\\rm T}$) in the\n$p_{\\rm T}$ spectrum measured at different centralities.\n The power index and the boundaries of three $p_{\\rm T}$ regions are obtained by fitting the\nmeasured $R_{\\rm{AA}}$ of charged particles and jets in PbPb collisions at\n$\\sqrt{s_{\\rm NN}}$ = 2.76 and 5.02 TeV in large transverse momentum ($p_{\\rm T}$) and\ncentrality range.\nThe shift $\\Delta p_{\\rm{T}}$ includes the medium effect, mainly energy loss of parent\nquark inside the plasma.\n The shift $\\Delta p_T$ can be approximatively understood as the partonic energy loss in the\ncase of jets while in case of hadrons it is not simple due to complicated correlations.\nOften we refer to the shift $\\Delta p_{\\rm{T}}$ as the energy loss.\n\n\n\n\\section{Nuclear Modification Factor and Energy Loss}\n\nThe nuclear modification factor $R_{\\rm{AA}}$ of hadrons is defined as the ratio\nof yield of the hadrons in AA collision and the yield in pp collision with a\nsuitable normalization\n\\begin{equation}\nR_{\\rm{AA}} (p_{\\rm{T}}, b) = \\frac{1}{T_{\\rm{AA}}} {\\frac{d^2N^{AA}(p_{\\rm{T}}, b)}{dp_{\\rm{T}}dy}}\/\n{\\frac{d^2\\sigma^{pp}(p_{\\rm{T}}, b)}{dp_{\\rm{T}}dy}}~.\n\\label{raa_definition}\n\\end{equation}\nHere, $T_{\\rm{AA}}$ is the nuclear overlap function which can be calculated from the\nnuclear density distribution. 
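In practice Eq.~\\ref{raa_definition} is evaluated bin by bin; a minimal sketch with placeholder numbers (not measured spectra) reads as follows.
\\begin{verbatim}
# Bin-by-bin evaluation of R_AA = (dN_AA / dpT dy) / (T_AA * dsigma_pp / dpT dy).
# All numbers below are placeholders chosen only to illustrate the bookkeeping.
T_AA    = 26.0                                          # mb^-1, central class
pt      = [2.0, 6.0, 20.0, 60.0]                        # GeV/c bin centres
dN_AA   = [1.2e-1, 2.0e-3, 1.5e-5, 1.0e-7]              # per-event yields
dsig_pp = [9.0e-3, 1.1e-4, 7.0e-7, 4.5e-9]              # mb (GeV/c)^-1

for x, num, den in zip(pt, dN_AA, dsig_pp):
    print(x, round(num / (T_AA * den), 2))              # suppression: R_AA < 1
\\end{verbatim}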
High $p_{\\rm T}$ partons traversing the medium loose energy and\ncause the suppression of hadrons at high $p_{\\rm T}$ indicated by value of $R_{\\rm{AA}}$\nwhich is less than one.\n The transverse momentum distribution of hadrons in pp collisions\ncan be described by the Hagedorn function which is a QCD-inspired summed power\nlaw \\cite{Hagedorn:1983wk} given as\n\\begin{equation}\n\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\frac{d^2\\sigma^{\\rm{pp}}}{dp_{\\rm{T}}dy}\n= A_n~2\\pi p_{\\rm{T}} ~\\Bigg(1 + \\frac{p_{\\rm{T}}}{p_{0}}\\Bigg)^{-n}~.\n\\label{Hag}\n\\end{equation}\nwhere $n$ is the power and $A_n$ and $p_{0}$ are other parameters which are obtained\nby fitting the experimental pp spectrum.\n The yield in the AA collision can be obtained by shifting the spectrum by \n$\\Delta p_{\\rm T}$ as\n\\begin{eqnarray}\n \\,\\,\\,\\,\\, \\frac{1}{T_{\\rm{AA}}}\\frac{d^{2}N^{\\rm{AA}}}{dp_{\\rm{T}}dy}\n & = \\frac{d^{2}\\sigma^{\\rm{pp}}(p'_{\\rm{T}} = p_{\\rm{T}} + \\Delta p_{\\rm{T}})}{dp'_{\\rm{T}}dy}\n \\frac{dp'_{\\rm{T}}}{dp_{\\rm{T}}} \\nonumber \\\\\n & = \\frac{d^{2}\\sigma^{\\rm{pp}}(p'_{\\rm{T}})}{dp'_{\\rm{T}}dy}\n \\Bigg(1 + \\frac{d(\\Delta p_{\\rm{T}})}{dp_{\\rm{T}}}\\Bigg)~.\n\\label{shiftRAA}\n\\end{eqnarray}\nThe reasoning behind writing Eq.~\\ref{shiftRAA} lies in the assumption that particle yield at\na given $p_{\\rm{T}}$ in AA collisions would have been equal to the yield in pp collisions\nat $p_{\\rm{T}} + \\Delta p_{\\rm{T}}$. The shift $\\Delta p_{\\rm{T}}$ includes the medium effect,\nmainly energy loss of parent quark inside the plasma.\n\nThe nuclear modification factor $R_{\\rm{AA}}$ can be obtained as \n\\begin{eqnarray}\nR_{\\rm{AA}} = \\left(1 + { \\Delta p_{\\rm{T}} \\over p_{0}+p_{\\rm{T}} } \\right)^{-n} \\,\\,\n\\left({p_{\\rm{T}} + \\Delta p_{\\rm{T}} \\over p_{\\rm{T}}}\\right) \\, \n\\left(1 + {d(\\Delta p_{\\rm{T}}) \\over dp_{\\rm{T}}}\\right)\n\\label{nmf_raa_fitting_function}\n\\end{eqnarray}\nThe energy loss given by $p_{T}$ loss, $\\Delta p_{T}$ can be extracted by fitting the\nexperimental data on $R_{\\rm AA}$\nwith Eq.~\\ref{nmf_raa_fitting_function}.\nThe $\\Delta p_{\\rm T}$ and its derivative will go as input in the above equation\nand can be assumed to be in the form of the power law\nwith different values of power indices in three different $p_{\\rm T}$ regions as follows\n\n\n\n\\begin{eqnarray}\n\\Delta p_{\\rm T} = \\left\\{\n\\begin{array}{l}\na_1~(p_{\\rm T} - C_1)^{\\alpha_1} ~~~ {\\rm for} ~~~ p_{\\rm T} < p_{\\rm T_1}~~~, \\\\\na_2~(p_{\\rm T} - C_2)^{\\alpha_2} ~~~ {\\rm for} ~~~ p_{\\rm T_1} \\leq p_{\\rm T} < p_{\\rm T_2}~~,\\\\\na_3~(p_{\\rm T} - C_3)^{\\alpha_3} ~~~ {\\rm for} ~~~ p_{\\rm T} \\geq p_{\\rm T_2}~.\n\\end{array}\n\\right\\}\n\\label{Equation_Two}\n\\end{eqnarray}\n\n\nThe parameter $a_1$ in our work contains the pathlength dependence. The pathlength $L$\n scales as the square root of number of participants as $\\sqrt{N_{\\rm part}}$.\n For low and intermediate energy partons, $\\Delta p_T$ can be assumed to be\nlinearly dependent on $L$ \\cite{Muller:2002fa}. If the scattering happens\nin complete coherent regime where the whole medium acts as one coherent source of radiation, \nthe $\\Delta p_T$ approaches quadratic dependence on $L$.\n The work in Ref.~\\cite{Betz:2011tu} studies \nthe energy loss of jets in terms of exponent of the number of participants.\nWithout complicating the calculations we can assume that\n$a_1 = M \\, (N_{\\rm{part}}\/(2A))^\\beta$. 
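The ingredients introduced so far (the Hagedorn form, the shifted spectrum, and the piecewise shift) can be combined into a short numerical sketch. The constants of the second and third regions are fixed by the continuity conditions worked out in the equations that follow; the parameter values are those obtained below for charged particles at $\\sqrt{s_{\\rm NN}}$ = 2.76 TeV, and the $N_{\\rm part}$ used for the 0-5\\% class is an approximate value quoted purely for illustration.
\\begin{verbatim}
# Sketch of the Hagedorn form, the shifted spectrum and the piecewise pT shift
# defined above; the second- and third-region constants follow from continuity
# of the shift and of its slope at the region boundaries.
n, p0 = 7.26, 1.02                    # Hagedorn fit to the pp spectrum (Table 1)
M, beta, A = 0.75, 0.58, 208          # a1 = M * (Npart / 2A)**beta
C1, pT1, pT2 = 1.0, 5.03, 29.0        # fixed offset and region boundaries
al1, al2, al3 = 0.97, 0.22, 0.05      # power indices of the three regions

def shift(pt, npart):
    a1 = M * (npart / (2 * A)) ** beta
    C2 = pT1 - (al2 / al1) * (pT1 - C1)                # slope continuity at pT1
    a2 = a1 * (pT1 - C1) ** al1 / (pT1 - C2) ** al2    # value continuity at pT1
    C3 = pT2 - (al3 / al2) * (pT2 - C2)                # slope continuity at pT2
    a3 = a2 * (pT2 - C2) ** al2 / (pT2 - C3) ** al3    # value continuity at pT2
    if pt < pT1:
        return a1 * (pt - C1) ** al1
    if pt < pT2:
        return a2 * (pt - C2) ** al2
    return a3 * (pt - C3) ** al3

def raa(pt, npart, eps=1e-3):
    dpt = shift(pt, npart)
    slope = (shift(pt + eps, npart) - shift(pt - eps, npart)) / (2 * eps)
    return (1 + dpt / (p0 + pt)) ** (-n) * (pt + dpt) / pt * (1 + slope)

for pt in (3.0, 10.0, 50.0):
    print(pt, round(raa(pt, npart=383), 3))   # strong suppression, R_AA well below 1
\\end{verbatim}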
The exponent $\\beta$ is obtained separately\nfor each dataset. \n The parameter $M$ relies on the energy density of the medium depending on the\ncollision energy but has a same value for all centralities.\nThe boundaries of the $p_{\\rm T}$ regions $p_{{\\rm{T}}_{1}}$, $p_{{\\rm{T}}_{2}}$ and the power\nindices $\\alpha_{1}$, $\\alpha_{2}$ and $\\alpha_{3}$ in the three different regions are\nused as free parameters while fitting the $R_{\\rm{AA}}$ measured at different centralities\nsimultaneously.\n The parameter $C_{1}$ is fixed to a suitable value to choose a lower $p_{\\rm T}$ cutoff\n and the parameters $C_{2}$, $C_{3}$, $a_2$ and $a_3$ are obtained by assuming the function\n and its derivative to be continuous at boundaries.\n\nDemanding that the function in Eq.~\\ref{Equation_Two} to be continuous\nat $p_{\\rm{T}} = p_{{\\rm{T}}_{1}}$ and at $p_{\\rm{T}} = p_{{\\rm{T}}_{2}}$ we obtain\n\\begin{equation}\n a_{2} = a_{1}~ \\frac{(p_{{\\rm{T}}_{1}} - C_{1})^{\\alpha_{1}}}{(p_{{\\rm{T}}_{1}} - C_{2})^{\\alpha_{2}}}~~.\n\\label{Equation_Three}\n\\end{equation}\n\n\\begin{equation}\n a_{3} = a_{2}~ \\frac{(p_{{\\rm{T}}_{2}} - C_{2})^{\\alpha_{2}}}{(p_{{\\rm{T}}_{2}} - C_{3})^{\\alpha_{3}}}~~.\n\\label{Equation_Four}\n\\end{equation}\n Demanding that at $p_{\\rm{T}} = p_{{\\rm{T}}_{1}}$, the derivative of Eq.~\\ref{Equation_Two} is continuous.\n\\begin{equation}\n a_{1}~\\alpha_{1}~(p_{{\\rm{T}}_{1}} - C_{1})^{(\\alpha_{1}-1)} = \n a_{2}~\\alpha_{2}~(p_{{\\rm{T}}_{1}} - C_{2})^{(\\alpha_{2}-1)}~~,\n\\label{Equation_Seven}\n\\end{equation}\nUsing the value of $a_{2}$ from Eq.~\\ref{Equation_Three}\n\\begin{equation}\n\\frac{\\alpha_{1}}{(p_{{\\rm{T}}_{1}} - C_{1})} = \\frac{\\alpha_{2}}{(p_{{\\rm{T}}_{1}} - C_{2})}~~,\n\\label{Equation_Eight}\n\\end{equation}\n\n\\begin{equation}\nC_{2} = p_{{\\rm{T}}_{1}} - \\frac{\\alpha_{2}}{\\alpha_{1}}(p_{{\\rm{T}}_{1}} -C_{1})~.\n\\label{Equation_Nine}\n\\end{equation}\nSimilarly, demanding the derivative to be continuous at\n$p_{\\rm{T}} = p_{{\\rm{T}}_{2}}$, we get $C_{3}$ \n\\begin{equation}\nC_{3} = p_{{\\rm{T}}_{2}} - \\frac{\\alpha_{3}}{\\alpha_{2}}(p_{{\\rm{T}}_{2}} -C_{2})~.\n\\label{Equation_Ten}\n\\end{equation}\nIn case of jets we consider only one region as the data starts from very high $p_T$\nabove 40 GeV\/$c$.\n\n\n\\section{Results and Discussions}\n\n\nFigure~\\ref{Figure1_charged_particles_pT_ALICE_spectra_Tsallis_fit_pp_276TeV} shows the\ninvariant yields of the charged particles as a function of the transverse momentum $p_{\\rm{T}}$\nfor pp collisions at $\\sqrt{s}$ = 2.76 TeV measured by the ALICE\nexperiment \\cite{Abelev:2013ala}. The solid curve is the Hagedorn distribution fitted\nto the $p_{\\rm{T}}$ spectra with the parameters given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. \n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure1_charged_particles_pT_ALICE_spectra_Tsallis_fit_pp_276TeV.pdf}\n\\caption{The invariant yields of the charged particles as a function of transverse momentum \n$p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 2.76 TeV measured by the ALICE experiment\n \\cite{Abelev:2013ala}. 
The solid curve is the fitted Hagedorn function.}\n\\label{Figure1_charged_particles_pT_ALICE_spectra_Tsallis_fit_pp_276TeV}\n\\end{figure}\n\n\\begin{table}[ht]\n \\caption{Parameters for the Hagedorn function obtained by fitting the\ntransverse momentum spectra of charged particles and jets measured in pp \ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV.}\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{| c || c | c | c | c |} \n\\hline\nParameters & \\multicolumn{2}{c|}{Charged particles}\n & \\multicolumn{2}{c|}{Jets } \\\\ \\hline\n & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \n & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \\\\ \\hline \\hline\n $n$ & 7.26 $\\pm$ 0.08 & 6.70 $\\pm$ 0.14 & 8.21 $\\pm$ 1.55 & 7.90 $\\pm$ 0.50 \\\\\\hline\n $p_{0}$ (GeV\/$c$) & 1.02 $\\pm$ 0.04 & 0.86 $\\pm$ 0.16 & 18.23 $\\pm$ 1.69 & 19.21 $\\pm$ 3.20 \\\\\\hline\n$\\chi^{2}\/\\rm{NDF}$ & 0.15 & 0.06 & 0.23 & 0.95 \\\\\\hline \n\\end{tabular}}\n\\end{center}\n\\label{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}\n\\end{table}\n\n\n\nFigure~\\ref{Figure2_charged_particles_RAA_ALICE_spectra_com_fit_PbPb_276TeV} shows the nuclear\nmodification factor $R_{\\rm{AA}}$ of the charged particles as a function of the transverse\nmomentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$\n= 2.76 TeV measured by the ALICE experiment \\cite{Abelev:2012hxa}. The solid lines are the\nfunction given by Eq.~\\ref{nmf_raa_fitting_function}. The modeling of centrality dependence using\n$N_{\\rm part}^\\beta$ with $\\beta=0.58$ gives a very good description of the data. \n The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ \nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV are given in\nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}\nalong with value of $\\chi^{2}\/\\rm{NDF}$. It shows that the $\\Delta p_{\\rm{T}}$ increases\nalmost linearly ($p_{\\rm{T}}^{0.97}$) upto $p_{\\rm{T}} \\simeq$ 5 GeV\/$c$ in confirmation with\nearlier studies. 
After that it increases slowly with power $\\alpha=0.224$ upto\na $p_{\\rm{T}}$ value 29 GeV\/$c$ and then\nbecomes constant for higher values of $p_{\\rm{T}}$.\n\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Figure2_charged_particles_RAA_ALICE_spectra_com_fit_PbPb_276TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of the charged particles as a \nfunction of transverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb \ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV measured by the ALICE experiment \\cite{Abelev:2012hxa}.\n The solid curves are the $R_{\\rm{AA}}$ fitting function\n(Eq.~\\ref{nmf_raa_fitting_function}).} \n\\label{Figure2_charged_particles_RAA_ALICE_spectra_com_fit_PbPb_276TeV}\n\\end{figure}\n\n\n\\begin{table}[ht]\n \\caption[]{The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ obtained by fitting the charged\n particles $R_{\\rm{AA}}$\nmeasured in different centrality classes of PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV.}\n\\label{table1_charged_particles_raa_fitting_parameter_276_502_TeV}\n\\begin{center}\n\\begin{tabular}{| c || c | c |} \\hline\n ~ Parameters & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \\\\ \\hline\\hline\n~ $M$ & 0.75 $\\pm$ 0.02 & 0.80 $\\pm$ 0.038 \\\\ \\hline \n~ $p_{{\\rm{T}}_{1}}$ (GeV\/$c$) & 5.03 $\\pm$ 0.15 & 5.10 $\\pm$ 0.22 \\\\ \\hline \n~ $p_{{\\rm{T}}_{2}}$ (GeV\/$c$) & 29.0 $\\pm$ 0.1 & 22.2 $\\pm$ 4.1 \\\\ \\hline \n~ $C_{1}$ (GeV\/$c$) & 1.0 (fixed) & 1.0 \\\\ \\hline \n~ $\\alpha_{1}$ & 0.97 $\\pm$ 0.02 & 0.95 $\\pm$ 0.04 \\\\ \\hline \n~ $\\alpha_{2}$ & 0.22 $\\pm$ 0.02 & 0.22 $\\pm$ 0.03 \\\\ \\hline \n~ $\\alpha_{3}$ & 0.05 $\\pm$ 0.13 & 0.05 $\\pm$ 0.10 \\\\ \\hline \n~ $\\frac{\\chi^{2}}{\\rm{NDF}}$ & 0.35 & 0.38 \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\n Figure~\\ref{Figure3_charged_particles_com_fit_Del_pT_PbPb_276TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the charged particles as a function of the transverse momentum\n$p_{\\rm{T}}$ for different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$\n= 2.76 TeV. The $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two}\nwith parameters given in \nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}.\n The $\\Delta p_{\\rm{T}}$ increases from peripheral to the most\n central collision regions as per $N_{\\rm part}^{0.58}$.\n The figure shows that the $\\Delta p_{\\rm{T}}$ increases \nalmost linearly upto $p_{\\rm{T}} \\sim 5$ GeV\/$c$. After that it\nincreases slowly upto a $p_{\\rm{T}}$ value 29 GeV\/$c$ and then\nbecomes constant for higher values of $p_{\\rm{T}}$.\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure3_charged_particles_com_fit_Del_pT_PbPb_276TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the charged particles as a function of \ntransverse momentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV for \ndifferent centrality classes.}\n\\label{Figure3_charged_particles_com_fit_Del_pT_PbPb_276TeV}\n\\end{figure}\n\n\n Figure~\\ref{Figure4_charged_particles_cms_pT_spectra_pp_502TeV} shows the invariant\nyields of the charged particles as a function of the transverse momentum $p_{\\rm{T}}$\nfor pp collisions at $\\sqrt{s}$ = 5.02 TeV measured by the CMS experiment\n\\cite{Khachatryan:2016odn}. 
The solid lines are the Hagedorn function fitted to\nthe measured $p_{\\rm{T}}$ spectra the parameters of which are given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. \n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure4_charged_particles_cms_pT_spectra_pp_502TeV.pdf}\n\\caption{The invariant yields of the charged particles as a function of transverse momentum \n$p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 5.02 TeV measured by the CMS experiment \n\\cite{Khachatryan:2016odn}. The solid curve is the fitted Hagedorn function.}\n\\label{Figure4_charged_particles_cms_pT_spectra_pp_502TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure5_charged_particles_cms_RAA_spectra_com_fit_PbPb_502TeV} shows the\nnuclear modification factor $R_{\\rm{AA}}$ of the charged particles as a function of the\ntransverse momentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions at\n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the CMS experiment \\cite{Khachatryan:2016odn}.\nThe solid curves are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).\n Here also the modeling of centrality dependence using \n$N_{\\rm part}^\\beta$ with $\\beta=0.58$ gives a good description of the data. \n The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ \nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV are given in\nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}\nalong with the value of $\\chi^{2}\/\\rm{NDF}$. It shows that the $\\Delta p_{\\rm{T}}$ increases\nalmost linearly ($p_{\\rm{T}}^{0.96}$) similar to the case at 2.76 TeV for $p_{\\rm T}$ upto 5.1 GeV\/$c$.\nAfter that it increases slowly with power $\\alpha=0.22$ upto a $p_{\\rm{T}}$ value 22.2 GeV\/$c$ and then\nbecomes constant for higher values of $p_{\\rm{T}}$ right upto 160 GeV\/$c$.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{Figure5_charged_particles_cms_RAA_spectra_com_fit_PbPb_502TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of the charged particles as a function \nof transverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb collisions at \n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the CMS experiment \\cite{Khachatryan:2016odn}. The\nsolid lines are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).}\n\\label{Figure5_charged_particles_cms_RAA_spectra_com_fit_PbPb_502TeV}\n\\end{figure}\n \n\nFigure~\\ref{Figure6_charged_particles_com_fit_Del_pT_PbPb_502TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the charged particles as a function of the transverse momentum\n$p_{\\rm{T}}$ for different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ =\n5.02 TeV. 
The $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two} with the\nparameters given in \nTable~\\ref{table1_charged_particles_raa_fitting_parameter_276_502_TeV}.\nThe $\\Delta p_{\\rm{T}}$ becomes constant for $p_{\\rm{T}}$ in the range 22 GeV\/$c$ to 160 GeV\/c\nand increases as the collisions become more central.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure6_charged_particles_com_fit_Del_pT_PbPb_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the charged particles as a function of \ntransverse momentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV for \ndifferent centrality classes.}\n\\label{Figure6_charged_particles_com_fit_Del_pT_PbPb_502TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure7_charged_particles_Del_pT_cen_0_5_PbPb_276_502TeV} shows the comparison of energy\nloss $\\Delta p_{\\rm{T}}$ of the charged particles as a function of the transverse momentum\n$p_{\\rm{T}}$ for 0 - 5 $\\%$ centrality class in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ =\n2.76 and at 5.02 TeV. The $\\Delta p_{\\rm{T}}$ at 5.02 TeV is similar but slightly more than\nthat at 2.76 TeV. \n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure7_charged_particles_Del_pT_cen_0_5_PbPb_276_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the charged particles as a function \nof transverse momentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 \nand 5.02 TeV for 0 - 5 $\\%$ centrality.}\n\\label{Figure7_charged_particles_Del_pT_cen_0_5_PbPb_276_502TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure8_jet_atlas_pT_spectra_pp_276TeV} shows the yields of the\njets as a function of the transverse momentum $p_{\\rm{T}}$ for pp collisions at $\\sqrt{s}$\n= 2.76 TeV measured by the ATLAS experiment~\\cite{Aad:2014bxa}. The solid curve is the\nHagedorn distribution fitted to the $p_{\\rm{T}}$ spectra, the parameters of which\nare given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. \n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure8_jet_atlas_pT_spectra_pp_276TeV.pdf}\n\\caption{The yields of the jets as a function of transverse momentum \n$p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 2.76 TeV measured by the ATLAS experiment \n \\cite{Aad:2014bxa}. The solid curve is the fitted Hagedorn distribution.}\n\\label{Figure8_jet_atlas_pT_spectra_pp_276TeV}\n\\end{figure}\n\n\nFigure~\\ref{Figure9_Jet_particles_cms_RAA_spectra_com_fit_PbPb_276TeV} shows the nuclear\nmodification factor $R_{\\rm{AA}}$ of the jets as a function of the transverse\nmomentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$\n= 2.76 TeV measured by the ATLAS experiment \\cite{Aad:2014bxa}. \nThe solid curves are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).\n Here the modeling of centrality dependence using $N_{\\rm part}^\\beta$ with $\\beta=0.60$\ngives a good description of the data. 
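Since the path length scales as the square root of the number of participants, a fitted exponent $\\beta$ of $N_{\\rm part}^\\beta$ translates directly into a path-length exponent $2\\beta$; up to rounding, the one-line conversion below reproduces the $L$ exponents quoted in the conclusions.
\\begin{verbatim}
# L ~ sqrt(N_part), so Delta_pT ~ N_part**beta corresponds to L**(2*beta).
fits = {"charged particles": 0.58, "jets 2.76 TeV": 0.60, "jets 5.02 TeV": 0.75}
for label, beta in fits.items():
    print(f"{label:18s}: N_part^{beta}  ->  L^{2 * beta:.2f}")
# the 5.02 TeV jet value approaches, but does not reach, the fully coherent
# L^2 limit discussed in the introduction
\\end{verbatim}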
\n The extracted parameters of the energy loss\nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV are given in\nTable~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}\nalong with the value of $\\chi^{2}\/\\rm{NDF}$.\nIt shows that the $\\Delta p_{\\rm{T}}$ increases as $p_{\\rm{T}}^{0.76}$\nat all the values of $p_{\\rm T}$ measured for jets.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{Figure9_Jet_particles_cms_RAA_spectra_com_fit_PbPb_276TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of jets as a function of\ntransverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb collisions at \n$\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV measured by the ATLAS experiment \\cite{Aad:2014bxa}. The solid\ncurves are the $R_{\\rm{AA}}$ fitting function given by Eq.~\\ref{nmf_raa_fitting_function}.}\n\\label{Figure9_Jet_particles_cms_RAA_spectra_com_fit_PbPb_276TeV}\n\\end{figure}\n\n\n\n\\begin{table}[ht]\n \\caption[]{The extracted parameters of the shift $\\Delta p_{\\rm{T}}$ obtained by fitting the jet $R_{\\rm{AA}}$\n measured in different centrality classes of PbPb collisions at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and\n 5.02 TeV.}\n\\label{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}\n\\begin{center}\n\\begin{tabular}{| c || c | c |} \\hline\n~ Parameters~ & $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV & $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV \\\\ \\hline\\hline\n~ $M$ & 0.33 $\\pm$ 0.1 & 0.40 $\\pm$ 0.12 \\\\ \\hline \n~ $C$ (GeV\/$c$) & -55.1 $\\pm$ 22.7 & -119 $\\pm$ 15 \\\\ \\hline \n~ $\\alpha$ & 0.76 $\\pm$ 0.08 & 0.72 $\\pm$ 0.01 \\\\ \\hline \n~ $\\frac{\\chi^{2}}{\\rm{NDF}}$ & 0.30 & 0.25 \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\n\nFigure~\\ref{Figure10_Jet_particles_Del_pT_PbPb_276TeV} shows the shift $\\Delta p_{\\rm{T}}$\nof the jets as a function of the transverse momentum $p_{\\rm{T}}$\nfor different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV.\nThe $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two} using the \nparameters given in Table~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}.\nThe $\\Delta p_{\\rm{T}}$ increases from \nperipheral to the most central collision regions.\n The figure shows that the $\\Delta p_{\\rm{T}}$ increases almost linearly \nat all the values of $p_{\\rm T}$ measured for jets.\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure10_Jet_particles_Del_pT_PbPb_276TeV.pdf}\n\\caption{The shift $\\Delta p_{\\rm{T}}$ of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 TeV for different \ncentrality classes.}\n\\label{Figure10_Jet_particles_Del_pT_PbPb_276TeV}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure11_jet_yield_pp_502tev} shows the yields of the jets\nas a function of the transverse momentum $p_{\\rm{T}}$ for pp collisions at $\\sqrt{s}$\n= 5.02 TeV measured by the ATLAS experiment \\cite{Aaboud:2018twu}. The solid curve\nis the Hagedorn distribution with the parameters given in \nTable~\\ref{table0_charged_particles_jet_pT_spectra_tsallis_fitting_parameters_276_502_TeV}. 
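Before moving to the 5.02 TeV jet results, it is instructive to evaluate the single-region jet shift numerically. The sketch below uses the 2.76 TeV jet parameters extracted above, with an approximate $N_{\\rm part}$ for the 0-10\\% class; it is an illustrative evaluation of the fitted model, not an independent result.
\\begin{verbatim}
# Single-region jet shift Delta_pT = a * (pT - C)**alpha, a = M*(Npart/2A)**beta,
# evaluated with the 2.76 TeV jet parameters (Tables 1 and 3).  Npart for the
# 0-10% class is an approximate value used only for illustration.
n, p0 = 8.21, 18.23                 # Hagedorn fit to the pp jet spectrum
M, C, alpha, beta, A = 0.33, -55.1, 0.76, 0.60, 208

def jet_raa(pt, npart):
    a = M * (npart / (2 * A)) ** beta
    dpt = a * (pt - C) ** alpha
    slope = a * alpha * (pt - C) ** (alpha - 1)
    return (1 + dpt / (p0 + pt)) ** (-n) * (pt + dpt) / pt * (1 + slope)

for pt in (60.0, 100.0, 200.0):
    print(pt, round(jet_raa(pt, npart=356), 2))
# an O(10 GeV) shift at pT ~ 100 GeV/c corresponds to a jet R_AA of roughly 0.5
# in central collisions, of the same order as the measured suppression
\\end{verbatim}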
\n\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{Figure11_jet_yield_pp_502tev.pdf}\n\\caption{The yields of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ for pp collision at $\\sqrt{s}$ = 5.02 TeV measured by the \nATLAS experiment \\cite{Aaboud:2018twu}. The solid curve is the fitted Hagedorn \ndistribution.}\n\\label{Figure11_jet_yield_pp_502tev}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure12_Jet_particles_cms_RAA_spectra_com_fit_PbPb_502TeV} shows the\nnuclear modification factor $R_{\\rm{AA}}$ of the jets as a function of the\ntransverse momentum $p_{\\rm{T}}$ for different centrality classes in PbPb collisions\nat $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the ATLAS experiment \\cite{Aaboud:2018twu}.\n The solid curves are the $R_{\\rm{AA}}$ fitting function (Eq.~\\ref{nmf_raa_fitting_function}).\n The modeling of centrality dependence is done with $N_{\\rm part}^\\beta$ and the\nvalue of exponent is obtained as $\\beta=0.75$. \n The extracted parameters of the energy loss\nobtained by fitting the $R_{\\rm{AA}}$ measured in different centrality classes of PbPb\ncollisions at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV are given in\nTable~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV},\nalong with the value of $\\chi^{2}\/\\rm{NDF}$.\nIt shows that the $\\Delta p_{\\rm{T}}$ increases as $p_{\\rm{T}}^{0.72}$\nat all the values of $p_{\\rm T}$ measured for jets similar to the case of jets at \n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.85\\linewidth]{Figure12_Jet_particles_cms_RAA_spectra_com_fit_PbPb_502TeV.pdf}\n\\caption{The nuclear modification factor $R_{\\rm{AA}}$ of the jets as a function\nof transverse momentum $p_{\\rm{T}}$ for various centrality classes in PbPb collisions at\n$\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV measured by the ATLAS experiment \\cite{Aaboud:2018twu}. 
\nThe solid curves are the $R_{\\rm{AA}}$ fitting function given by Eq.~\\ref{nmf_raa_fitting_function}.}\n\\label{Figure12_Jet_particles_cms_RAA_spectra_com_fit_PbPb_502TeV}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure13_Jet_particles_Del_pT_PbPb_502TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the jets as a function of the transverse momentum $p_{\\rm{T}}$\nfor different centrality classes in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV.\nThe $\\Delta p_{\\rm{T}}$ is obtained from Eq.~\\ref{Equation_Two} with the \nparameters given in Table~\\ref{Table2_Jet_raa_fitting_parameter_PbPb_276_502_TeV}.\nThe $\\Delta p_{\\rm{T}}$ increases from peripheral to the most central collision regions.\nThe figure shows that the $\\Delta p_{\\rm{T}}$ increases almost linearly\nat all the values of $p_{\\rm T}$ measured for jets.\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.61\\linewidth]{Figure13_Jet_particles_Del_pT_PbPb_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ in PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV for different\ncentrality classes.}\n\\label{Figure13_Jet_particles_Del_pT_PbPb_502TeV}\n\\end{figure}\n\n\n\nFigure~\\ref{Figure14_jet_Del_pT_cen_0_10_PbPb_276_502TeV} shows the energy loss\n$\\Delta p_{\\rm{T}}$ of the jets as a function of the transverse momentum $p_{\\rm{T}}$ \nin the most central (0-10\\%) PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV.\nThese are compared with the $\\Delta p_{\\rm{T}}$ obtained for charged particles in\nthe 0-5\\% centrality class of PbPb collision\nat $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV.\nThe energy loss $\\Delta p_{\\rm{T}}$ in case of jets for both the energies increases\nwith $p_{\\rm{T}}$. The values of $\\Delta p_{\\rm{T}}$ for jets at 5.02 TeV is more than\nthat at 2.76 TeV. This behaviour at high $p_{\\rm{T}}$ is very different from the\nenergy loss of charged particles which becomes constant in these $p_{\\rm{T}}$ regions.\nThe modeling of centrality dependence of energy loss has been done using\n$N_{\\rm part}^\\beta$.\nFor charged particles, the centrality dependence of $p_T$ shift is found to\nbe $N_{\\rm part}^{0.58}$ which corresponds to $L^{1.18}$.\nThe centrality dependence for jets at $\\sqrt{s_{\\rm NN}}$ = 2.76 TeV is found to be\n$N_{\\rm part}^{0.60}$. \nIn case of jets at 5 TeV, the centrality dependence of energy loss is found to be\n$N_{\\rm part}^{0.75}$ corresponding to $L^{1.5}$ which means that the jets even at\nvery high energy are still away from complete coherent regime. \n\n\n\n\\begin{figure}[htp]\n\\centering\n\\includegraphics[width=0.61\\linewidth]{Figure14_jet_Del_pT_cen_0_10_PbPb_276_502TeV.pdf}\n\\caption{The energy loss $\\Delta p_{\\rm{T}}$ of the jets as a function of transverse \nmomentum $p_{\\rm{T}}$ in the most central PbPb collision at $\\sqrt{s_{\\rm{NN}}}$ = 2.76 and 5.02 TeV. 
\nThe $\\Delta p_{\\rm{T}}$ obtained for charged particles in the most central PbPb collision\nat $\\sqrt{s_{\\rm{NN}}}$ = 5.02 TeV is also shown.}\n\\label{Figure14_jet_Del_pT_cen_0_10_PbPb_276_502TeV}\n\\end{figure}\n\n\n\n\\clearpage\n\n\\section{Conclusions}\n\nWe presented a study of partonic energy loss with $p_T$ shift extracted from the measured\n$R_{\\rm{AA}}$ of charged particles and jets in PbPb collisions at $\\sqrt{s_{\\rm NN}}$ = 2.76\nand 5.02 TeV in wide transverse momentum and centrality range.\n The functional form of energy loss given by\n$\\Delta p_{\\rm T}$ has been assumed as power law with different power indices\nin three different $p_{\\rm T}$ regions driven by physics considerations.\nThe power indices and the boundaries of three $p_{\\rm T}$ regions are obtained by\nfitting the experimental data of $R_{\\rm{AA}}$ as a function of $p_{\\rm T}$ and centrality.\n The energy loss for light \ncharged particles is found to increase linearly with $p_{\\rm T}$ in low $p_{\\rm T}$ region\nbelow 5-6 GeV\/$c$ and approaches a constant value in high $p_{\\rm T}$ region above 25 GeV\/$c$\nwith an intermediate power law connecting the two regions.\n The $\\Delta p_{\\rm{T}}$ at 5.02 TeV is similar but slightly more than\nthat at 2.76 TeV. \nIn case of jets we consider only one $p_T$ region and it is found that for jets, the\nenergy loss increases almost linearly even at very\nhigh $p_{\\rm T}$.\n The modeling of centrality dependence of energy loss has been done using\n$N_{\\rm part}^\\beta$.\n For charged particles, the centrality dependence of $p_T$ shift is found to\nbe $N_{\\rm part}^{0.58}$ which corresponds to $L^{1.18}$.\n The centrality dependence for jets at $\\sqrt{s_{\\rm NN}}$ = 2.76 TeV goes as\n$N_{\\rm part}^{0.60}$. \nIn case of jets at 5 TeV, the centrality dependence of energy loss is found to be \n$N_{\\rm part}^{0.75}$ corresponding to $L^{1.5}$ which means that the jets even at\nvery high energy are still away from complete coherent regime. \n\n\n\n\\ \\\\\n\n\\noindent\n{\\bf References}\n\n\\noindent\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}