{"text":"\\section{Introduction}\nLearning to learn is essential in human intelligence but is still a wide area of research in machine learning.\n\\textit{Meta-learning} has emerged as a popular approach to enable models to perform well on new tasks using limited data.\nIt involves first a \\textit{meta-training} process, when the model learns valuable features from a set of tasks.\nThen, at test time, using only few datapoints from a new, unseen task, the model (1) \\textit{adapts} to this new task (i.e., performs \\textit{few-shot learning} with \\textit{context data}), \nand then (2) \\textit{infers} by making predictions on new, unseen \\textit{query inputs} from the same task.\nA popular baseline for meta-learning, which has attracted a large amount of attention, is Model-Agnostic Meta-Learning (MAML) \\citep{maml}, in which the adaptation process consists of fine-tuning the parameters of the model via gradient descent.\n\nHowever, meta-learning methods can often struggle in several ways when deployed in challenging real-world scenarios. First, when context data is too limited to fully identify the test-time task, accurate prediction can be challenging. As these predictions can be untrustworthy, this necessitates the development of meta-learning methods that can express uncertainty during adaptation \\citep{bayesian_maml, alpaca}. In addition, meta-learning models may not successfully adapt to ``unusual'' tasks, i.e., when test-time context data is drawn from an \\textit{out-of-distribution} (OoD) task not well represented in the training dataset \\citep{ood_maml, meta_learning_ood}.\nFinally, special care has to be taken when learning tasks that have a large degree of heterogeneity.\nAn important example is the case of tasks with a \\textit{multimodal} distribution, i.e., when there are no common features shared across all the tasks, but the tasks can be broken down into subsets (modes) in a way that the ones from the same subset share common features \\citep{mmaml}.\n\n\\textbf{Our contributions.}~\\, We present \\textsc{UnLiMiTD}{} (\\textit{uncertainty-aware meta-learning for multimodal task distributions}), a novel meta-learning method that leverages probabilistic tools to address the aforementioned issues.\nSpecifically, \\textsc{UnLiMiTD}{} models the true distribution of tasks with a learnable distribution constructed over a linearized neural network and uses analytic Bayesian inference to perform uncertainty-aware adaption.\nWe present three variants (namely, \\approach-\\textsc{I}, \\approach-\\textsc{R}{}, and \\approach-\\textsc{F}) that reflect a trade-off between learning a rich prior distribution over the weights and maintaining the full expressivity of the network; we show that \\approach-\\textsc{F}{} strikes a balance between the two, making it the most appealing variant.\nFinally, we demonstrate that (1) our method allows for efficient probabilistic predictions on in-distribution tasks, that compare favorably to, and in most cases outperform, the existing baselines, (2) it is effective in detecting context data from OoD tasks at test time, and that (3) both these findings continue to hold in the multimodal task-distribution setting.\n\nThe rest of the paper is organized as follows. Section~\\ref{sec:problem_statement} formalizes the problem. 
Section~\\ref{sec:background} presents background information on the linearization of neural networks and Bayesian linear regression.\nWe detail our approach and its three variants in Section~\\ref{sec:approach}.\nWe discuss related work in detail in Section~\\ref{sec:related_work}. Finally, we present our experimental results concerning the performance of \\textsc{UnLiMiTD}{} in Section~\\ref{sec:results} and conclude in Section~\\ref{sec:conclusion}.\n\n\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{images\/flowchart.pdf}\n \\caption{ The true task distribution $p(f)$ can be multimodal, i.e., containing multiple clusters of tasks (e.g., lines and sines). Our approach \\textsc{UnLiMiTD}{} fits $p(f)$ with a parametric, tuneable distribution $\\Tilde{p}_\\xi(f)$ yielded by Bayesian linear regression on a linearized neural network.}\n \\label{fig:flowchart}\n\\end{figure}\n\n\\section{Problem statement}\\label{sec:problem_statement}\nA task $\\mathcal{T}^i$ consists of a function $f_i$ from which data is drawn.\nAt test time, the prediction steps are broken down into (1) \\textit{adaptation}, that is identifying $f_i$ using $K$ context datapoints $(\\xinput^i, \\youtput^i)$ from the task, and (2) \\textit{inference}, that is making predictions for $f_i$ on the \\textit{query inputs} $\\xinput^i_*$.\nLater the predictions can be compared with the \\textit{query ground-truths} $\\youtput^i_*$ to estimate the quality of the prediction, for example in terms of mean squared error (MSE).\nThe meta-training consists in learning valuable features from a \\textit{cluster of tasks}, which is a set of similar tasks (e.g., sines with different phases and amplitudes but same frequency), so that at test time the predictions can be accurate on tasks from the same cluster.\nWe take a probabilistic, functional perspective and represent a cluster by $p(f)$, a theoretical distribution over the function space that describes the probability of a task belonging to the cluster.\nLearning $p(f)$ is appealing, as it allows for performing OoD detection in addition to making predictions. 
Adaptation amounts to computing the conditional distribution given test context data, and one can obtain an uncertainty metric by evaluating the negative log-likelihood (NLL) of the context data under $p(f)$.\n\nThus, our goal is to construct a parametric, learnable functional distribution $\\Tilde{p}_\\xi(f)$ that approaches the theoretical distribution $p(f)$, with a structure that allows tractable conditioning and likelihood computation, even in deep learning contexts.\nIn practice, however, we are not given $p(f)$, but only a meta-training dataset $\\mathcal{D}$ that we assume is sampled from $p(f)$: $\\mathcal{D}=\\{ (\\widetilde{\\xinput}^i, \\widetilde{\\youtput}^i)\\}_{i=1}^{N}$, where $N$ is the number of tasks available during training, and $(\\widetilde{\\xinput}^i, \\widetilde{\\youtput}^i) \\sim \\mathcal{T}^i$ is the entire pool of data from which we can draw subsets of context data $(\\xinput^i, \\youtput^i)$.\nConsequently, in the meta-training phase, we aim to optimize $\\Tilde{p}_\\xi(f)$ to capture properties of $p(f)$, using only the samples in $\\mathcal{D}$.\n\nOnce we have $\\Tilde{p}_\\xi(f)$, we can evaluate it both in terms of how it performs for few-shot learning (by comparing the predictions with the ground truths in terms of MSE), as well as for OoD detection \n(by measuring how well the NLL of context data serves to classify in-distribution tasks against OoD tasks, measured via the AUC-ROC score). \\nopagebreak[4]\n\n\n\n\n\\section{Background}\n\\label{sec:background}\n\\subsection{Bayesian linear regression and Gaussian Processes}\n\\label{sec:reglin}\nEfficient Bayesian meta-learning requires a tractable inference process at test time. In general, this is only possible analytically in a few cases.\nOne of them is the Bayesian linear regression with Gaussian noise and a Gaussian prior on the weights. \nViewing it from a nonparametric, functional approach, this model is equivalent to a Gaussian process (GP) \\citep{rasmussen}.\n\nLet ${\\bm{X}} = ({\\bm{x}}_1, \\dots, {\\bm{x}}_K) \\in \\mathbb{R}^{{N_x} \\times K}$ be a batch of $K$ ${N_x}$-dimensional inputs, and let ${\\bm{y}} = ({\\bm{y}}_1, \\dots, {\\bm{y}}_K) \\in \\mathbb{R}^{{N_y} K}$ be a vectorized batch of ${N_y}$-dimensional outputs. In the Bayesian linear regression model, these quantities are related according to\n$\n {\\bm{y}} = \\phi({\\bm{X}})^\\top \\hat{\\param} + \\varepsilon \\in \\mathbb{R}^{{N_y} K}\n$\nwhere $\\hat{\\param} \\in \\mathbb{R}^P$ are the weights of the model, and the inputs are mapped via $\\phi:\\mathbb{R}^{{N_x} \\times K} \\rightarrow \\mathbb{R}^{P \\times {N_y} K}$. Notice how this is a generalization of the usual one-dimensional linear regression (${N_y}=1$).\n\n\nIf we assume a Gaussian prior on the weights $\\hat{\\param} \\sim \\mathcal{N}({\\bm{\\mu}}, {\\bm{\\Sigma}})$ and a Gaussian noise $\\varepsilon \\sim \\mathcal{N}(\\bm{0}, {\\bm{\\Sigma}}_\\varepsilon)$ with ${\\bm{\\Sigma}}_\\varepsilon = \\sigma_\\varepsilon^2 {\\bm{I}}$, then the model describes a multivariate Gaussian distribution on ${\\bm{y}}$ for any ${\\bm{X}}$. 
Equivalently, this means that this model describes a GP distribution over functions, with mean and covariance function (or kernel)\n\\begin{align}\n\\begin{split}\n\\label{eq:prior_pred_dist}\n{\\bm{\\mu}}_{\\text{prior}} ({\\bm{x}}_t) & = \\phi({\\bm{x}}_t)^\\top {\\bm{\\mu}}, \\\\\n\\text{cov}_{\\text{prior}} ({\\bm{x}}_{t_1}, {\\bm{x}}_{t_2}) & = \\phi({\\bm{x}}_{t_1})^\\top {\\bm{\\Sigma}} \\phi({\\bm{x}}_{t_2}) + {\\bm{\\Sigma}}_\\varepsilon =: k_{\\bm{\\Sigma}}({\\bm{x}}_{t_1}, {\\bm{x}}_{t_2}) + {\\bm{\\Sigma}}_\\varepsilon .\n\\end{split}\n\\end{align}\nThis GP enables tractable computation of the likelihood of any batch of data $({\\bm{X}}, {\\bm{Y}})$ given this distribution over functions. The structure of this distribution is governed by the feature map $\\phi$ and the prior over the weights, specified by ${\\bm{\\mu}}$ and ${\\bm{\\Sigma}}$.\n\nThis distribution can also easily be conditioned to perform inference. Given a batch of data $({\\bm{X}}, {\\bm{Y}})$, the posterior predictive distribution is also a GP, with an updated mean and covariance function\n\\begin{align}\n \\label{eq:post_pred_dist}\n \\begin{split}\n {\\bm{\\mu}}_{\\text{post}} ({\\bm{x}}_{t_*}) & = k_{\\bm{\\Sigma}}({\\bm{x}}_{t_*}, {\\bm{X}}) \\left( k_{\\bm{\\Sigma}}({\\bm{X}}, {\\bm{X}}) + {\\bm{\\Sigma}}_\\varepsilon \\right)^{-1} {\\bm{Y}}, \\\\\n \\text{cov}_{\\text{post}} ({\\bm{x}}_{{t_1}_*}, {\\bm{x}}_{{t_2}_*}) & = k_{\\bm{\\Sigma}}({\\bm{x}}_{{t_1}_*}, {\\bm{x}}_{{t_2}_*}) - k_{\\bm{\\Sigma}}({\\bm{x}}_{{t_1}_*}, {\\bm{X}}) \\left( k_{\\bm{\\Sigma}}({\\bm{X}}, {\\bm{X}}) + {\\bm{\\Sigma}}_\\varepsilon \\right)^{-1} k_{\\bm{\\Sigma}}({\\bm{X}}, {\\bm{x}}_{{t_2}_*}).\n \\end{split}\n\\end{align}\nHere, ${\\bm{\\mu}}_{\\text{post}}({\\bm{X}}_*)$ represents our model's adapted predictions for the test data, which we can compare to ${\\bm{Y}}_*$ to evaluate the quality of our predictions, for example, via mean squared error (assuming that test data is clean, following \\citet{rasmussen}). \nThe diagonal of $\\text{cov}_{\\text{post}}({\\bm{X}}_*, {\\bm{X}}_*)$ can be interpreted as a per-input level of confidence that captures the ambiguity in making predictions with only a limited amount of context data.\n\n\\subsection{The linearization of a neural network yields an expressive linear regression model}\n\\label{sec:linearization}\nAs discussed, the choice of feature map $\\phi$ plays an important role in specifying a linear regression model.\nIn the deep learning context, recent work has demonstrated that the linear model obtained when linearizing a deep neural network with respect to its weights at initialization, wherein the Jacobian of the network operates as the feature map, can well approximate the training behavior of wide nonlinear deep neural networks \\citep{jacot,nonlinear,liu2020linearity,shallow_nns_infinite_width,dnns_infinite_wdith}.\n\n\nLet $f$ be a neural network $f: \\left({\\bm{\\theta}}, {\\bm{x}}_t \\right) \\mapsto {\\bm{y}}_t$, where ${\\bm{\\theta}} \\in \\mathbb{R}^{P}$ are the parameters of the model, ${\\bm{x}} \\in \\mathbb{R}^{{N_x}}$ is an input and ${\\bm{y}} \\in \\mathbb{R}^{{N_y}}$ an output.\nThe linearized network (w.r.t. 
the parameters) around $\\param_0$ is\n\\begin{displaymath}\n f({\\bm{\\theta}}, {\\bm{x}}_t) - f(\\param_0, {\\bm{x}}_t) \\approx {\\bm{J}}_{\\bm{\\theta}}( f )(\\param_0, {\\bm{x}}_t) ({\\bm{\\theta}} - \\param_0),\n\\end{displaymath}\nwhere ${\\bm{J}}_{\\bm{\\theta}}(f)(\\cdot, \\cdot): \\mathbb{R}^P \\times \\mathbb{R}^{N_x} \\rightarrow \\mathbb{R}^{{N_y} \\times P}$ is the Jacobian of the network (w.r.t. the parameters).\n\nIn the case where the model accepts a batch of $K$ inputs ${\\bm{X}} = ({\\bm{x}}_1, \\dots, {\\bm{x}}_K)$ and returns ${\\bm{Y}} = ({\\bm{y}}_1, \\dots, {\\bm{y}}_K)$, we generalize $f$ to $g: \\mathbb{R}^P \\times \\mathbb{R}^{{N_x} \\times K} \\rightarrow \\mathbb{R}^{{N_y} \\times K}$, with ${\\bm{Y}} = g({\\bm{\\theta}}, {\\bm{X}})$.\nConsequently, we generalize the linearization:\n\\begin{displaymath}\ng({\\bm{\\theta}}, {\\bm{X}}) - g(\\param_0, {\\bm{X}}) \\approx {\\bm{J}}(\\param_0, {\\bm{X}}) ({\\bm{\\theta}} - \\param_0),\n\\end{displaymath}\nwhere ${\\bm{J}}(\\cdot, \\cdot): \\mathbb{R}^P \\times \\mathbb{R}^{{N_x} \\times K} \\rightarrow \\mathbb{R}^{{N_y} K \\times P}$ is a shorthand for ${\\bm{J}}_{\\bm{\\theta}}(g)(\\cdot, \\cdot)$.\nNote that we have implicitly vectorized the outputs, and throughout the work, we will interchange the matrices $\\mathbb{R}^{{N_y} \\times K}$ and the vectorized matrices $\\mathbb{R}^{{N_y} K}$.\n\nThis linearization can be viewed as the ${N_y} K$-dimensional linear regression\n\\begin{equation}\n \\label{eq:linearized_network}\n {\\bm{z}} = \\phi_{\\param_0}({\\bm{X}})^\\top \\hat{\\param} \\in \\mathbb{R}^{{N_y} K},\n\\end{equation}\nwhere the feature map $\\phi_{\\param_0}(\\cdot): \\mathbb{R}^{{N_x} \\times K} \\rightarrow \\mathbb{R}^{P \\times {N_y} K}$ is the transposed Jacobian ${\\bm{J}}(\\param_0, \\cdot)^\\top$.\nThe parameters of this linear regression $\\hat{\\param} = \\left( {\\bm{\\theta}} - \\param_0 \\right)$ are the \\textit{correction} to the parameters chosen as the linearization point.\nEquivalently, this can be seen as a kernel regression with the kernel $ k_{\\param_0}({\\bm{X}}_1,{\\bm{X}}_2) = {\\bm{J}}(\\param_0, {\\bm{X}}_1) {\\bm{J}}(\\param_0, {\\bm{X}}_2)^\\top$, which is commonly referred to as the Neural Tangent Kernel (NTK) of the network. Note that the NTK depends on the linearization point $\\param_0$. \nBuilding on these ideas, \\citet{maddox} show that the NTK obtained via linearizing a DNN \\textit{after} it has been trained on a task yields a GP that is well-suited for adaptation and fine-tuning to new, similar tasks. Furthermore, they show that networks trained on similar tasks tend to have similar Jacobians, suggesting that neural network linearization can yield an effective model for multi-task contexts such as meta-learning. 
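\n\nAs a concrete illustration, the Jacobian feature map and the induced NTK can be computed directly with automatic differentiation. The following is a minimal \\texttt{JAX} sketch, in which the two-layer \\texttt{tanh} regressor, its width, and the input grids are illustrative stand-ins rather than the architectures used in our experiments:\n\\begin{verbatim}\nimport jax\nimport jax.numpy as jnp\nfrom jax.flatten_util import ravel_pytree\n\ndef mlp(params, x):\n    # tiny regressor f(theta, x): R^{N_x} -> R^{N_y} (illustrative)\n    h = jnp.tanh(params['W1'] @ x + params['b1'])\n    return params['W2'] @ h + params['b2']\n\nkey1, key2 = jax.random.split(jax.random.PRNGKey(0))\nN_x, N_y, width = 1, 1, 32\nparams = {'W1': jax.random.normal(key1, (width, N_x)),\n          'b1': jnp.zeros(width),\n          'W2': jax.random.normal(key2, (N_y, width)) / jnp.sqrt(width),\n          'b2': jnp.zeros(N_y)}\ntheta0, unravel = ravel_pytree(params)       # linearization point in R^P\n\ndef g(theta_flat, X):\n    # batched network, outputs vectorized into R^{N_y K}\n    theta = unravel(theta_flat)\n    return jax.vmap(lambda x: mlp(theta, x))(X).reshape(-1)\n\ndef jac(theta_flat, X):\n    # J(theta, X) of shape (N_y K, P); the feature map is its transpose\n    return jax.jacobian(g)(theta_flat, X)\n\nX1 = jnp.linspace(-5.0, 5.0, 10).reshape(-1, N_x)\nX2 = jnp.linspace(-5.0, 5.0, 7).reshape(-1, N_x)\nntk = jac(theta0, X1) @ jac(theta0, X2).T    # k_{theta0}(X1, X2)\n\\end{verbatim}\nFor large $P$, materializing ${\\bm{J}}$ explicitly is costly and one would instead rely on Jacobian-vector products; the sketch above favors clarity over efficiency.\n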
In this work, we leverage these insights to construct our parametric functional distribution $\\Tilde{p}_\\xi(f)$ via linearizing a neural network model.\n\n\\section{Our approach: \\textsc{UnLiMiTD}}\n\\label{sec:approach}\nIn this section, we describe our meta-learning algorithm \\textsc{UnLiMiTD}{} and the construction of a parametric functional distribution $\\Tilde{p}_\\xi(f)$ that can model the true underlying distribution over tasks $p(f)$.\nFirst, we focus on the single cluster case, where a Gaussian process structure on $\\Tilde{p}_\\xi(f)$ can effectively model the true distribution of tasks, and detail how we can leverage meta-training data $\\mathcal{D}$ from a single cluster of tasks to train the parameters $\\xi$ of our model.\nNext, we will generalize our approach to the multimodal setting, with more than one cluster of tasks. Here, we construct $\\Tilde{p}_\\xi(f)$ as a mixture of GPs and develop a training approach that can automatically identify the clusters present in the training dataset without requiring the meta-training dataset to contain any additional structure such as cluster labels.\n\n\\subsection{Tractably structuring the prior predictive distribution over functions via a Gaussian distribution over the weights}\nIn our approach, we choose $\\Tilde{p}_\\xi(f)$ to be the \nGP distribution over functions that arises from a Gaussian prior on the weights of the linearization of a neural network (\\eqref{eq:linearized_network}). Consider a particular task $\\mathcal{T}^i$ and a batch of $K$ context data $(\\xinput^i, \\youtput^i)$.\nThe resulting prior predictive distribution, derived from \\eqref{eq:prior_pred_dist} after evaluating on the context inputs, is ${\\bm{Y}} | \\xinput^i \\sim \\mathcal{N}( {\\bm{\\mu}}_{\\youtput \\mid \\xcontextinput}, {\\bm{\\Sigma}}_{\\youtput \\mid \\xcontextinput})$, where\n\\begin{equation}\n \\label{eq:prior_pred_dist_ntk}\n {\\bm{\\mu}}_{\\youtput \\mid \\xcontextinput} = {\\bm{J}}(\\param_0, \\xinput^i) {\\bm{\\mu}}, \\quad {\\bm{\\Sigma}}_{\\youtput \\mid \\xcontextinput} = {\\bm{J}}(\\param_0, \\xinput^i) {\\bm{\\Sigma}} {\\bm{J}}(\\param_0, \\xinput^i)^\\top + {\\bm{\\Sigma}}_\\varepsilon.\n\\end{equation}\nIn this setup, the parameters $\\xi$ of $\\Tilde{p}_\\xi(f)$ that we wish to optimize are the linearization point $\\param_0$, and the parameters of the prior over the weights $({\\bm{\\mu}}, {\\bm{\\Sigma}})$.\nGiven this Gaussian prior, it is straightforward to compute the joint NLL of the context labels $\\youtput^i$,\n\\begin{align}\n\\label{eq:single-nll}\n \\mathrm{NLL}(\\xinput^i, \\youtput^i) = \\frac12\\left( \\left\\| \\youtput^i - {\\bm{\\mu}}_{\\youtput \\mid \\xcontextinput} \\right\\|^2_{{\\bm{\\Sigma}}_{\\youtput \\mid \\xcontextinput}^{-1}} + \\log\\det {\\bm{\\Sigma}}_{\\youtput \\mid \\xcontextinput} + {N_y} K \\log 2 \\pi \\right).\n\\end{align}\nThe NLL (a) serves as a loss function quantifying the quality of $\\xi$ during training and (b) serves as an uncertainty signal at test time to evaluate whether context data $(\\xinput^i, \\youtput^i)$ is OoD.\nGiven this model, \\textit{adaptation} is tractable as we can condition this GP on the context data analytically. 
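\n\nFor concreteness, the NLL of \\eqref{eq:single-nll} takes only a few lines on top of the Jacobian sketch of Section~\\ref{sec:linearization}; here we show the identity-covariance case (\\approach-\\textsc{I}), and the noise scale and prior mean below are assumed, illustrative values rather than learned ones.\n\\begin{verbatim}\ndef gauss_nll(y, mean, cov):\n    # joint NLL of the vectorized context labels under N(mean, cov)\n    resid = y - mean\n    _, logdet = jnp.linalg.slogdet(cov)\n    quad = resid @ jnp.linalg.solve(cov, resid)\n    return 0.5 * (quad + logdet + y.size * jnp.log(2.0 * jnp.pi))\n\ndef prior_nll_identity(theta0, mu, X_ctx, y_ctx, sigma_eps=0.05):\n    # identity prior covariance: predictive covariance is J J^T + noise\n    J = jac(theta0, X_ctx)                  # (N_y K, P)\n    mean = J @ mu\n    cov = J @ J.T + sigma_eps**2 * jnp.eye(J.shape[0])\n    return gauss_nll(y_ctx, mean, cov)\n\\end{verbatim}\nIn practice one would factor the $({N_y} K)\\times({N_y} K)$ covariance once (e.g., via a Cholesky decomposition) and reuse it for the solve and the log-determinant; the generic solver above is kept for brevity.\n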
In addition, we can efficiently make probabilistic predictions by evaluating the mean and covariance of the resulting posterior predictive distribution on the query inputs, using \\eqref{eq:post_pred_dist}.\n\n\\subsubsection{Parameterizing the prior covariance over the weights}\n\\label{sec:prior_covariance}\nWhen working with deep neural networks, the number of weights $P$ can surpass $10^6$. While it remains tractable to deal with $\\param_0$ and ${\\bm{\\mu}}$, whose memory footprint grows linearly with $P$, it can quickly become intractable to make computations with (let alone store) a dense prior covariance matrix over the weights ${\\bm{\\Sigma}} \\in \\mathbb{R}^{P \\times P}$. Thus, we must impose some structural assumptions on the prior covariance to scale to deep neural network models.\n\n\n\\textbf{Imposing a unit covariance.}~\\, One simple way to tackle this issue would be to remove ${\\bm{\\Sigma}}$ from the learnable parameters $\\xi$, i.e., fixing it to be the identity ${\\bm{\\Sigma}} = {\\bm{I}}_{P}$. In this case, $\\xi = (\\param_0, {\\bm{\\mu}})$. \nThis computational benefit comes at the cost of model expressivity, as we lose a degree of freedom in how we can optimize our learned prior distribution $\\Tilde{p}_\\xi(f)$. In particular, we are unable to choose a prior over the weights of our model that captures correlations between elements of the feature map.\n\n\\textbf{Learning a low-dimensional representation of the covariance.}~\\,\nAn alternative is to learn a low-rank representation of ${\\bm{\\Sigma}}$, allowing for a learnable weight-space prior covariance that can encode correlations. Specifically, we consider a covariance of the form ${\\bm{\\Sigma}} = {\\bm{Q}}^\\top \\diag{{\\bm{s}}^2} {\\bm{Q}}$, where ${\\bm{Q}}$ is a fixed projection matrix on an $s$-dimensional subspace of $\\mathbb{R}^{P}$, while ${\\bm{s}}^2$ is learnable.\nIn this case, the parameters that are learned are $\\xi = (\\param_0, {\\bm{\\mu}}, {\\bm{s}})$.\nWe define ${\\bm{S}} := \\diag{{\\bm{s}}^2}$.\nThe computation of the covariance of the prior predictive (\\eqref{eq:prior_pred_dist_ntk}) could then be broken down into two steps:\n\\begin{displaymath}\n\\left\\{\n \\begin{array}{l}\n A := {\\bm{J}}(\\param_0, \\xinput^i) {\\bm{Q}}^\\top \\\\\n {\\bm{J}}(\\param_0, \\xinput^i) {\\bm{\\Sigma}} {\\bm{J}}(\\param_0, \\xinput^i)^\\top = A {\\bm{S}} A^\\top \n \\end{array}\n\\right.\n\\end{displaymath}\nwhich requires a memory footprint of $O(P(s + {N_y} K) )$, if we include the storage of the Jacobian.\nBecause ${N_y} K \\ll P$ in typical deep learning contexts, it suffices that $s \\ll P$ so that it becomes tractable to deal with this new representation of the covariance.\n\n\\textbf{A trade-off between feature-map expressiveness and learning a rich prior over the weights.} Note that even if a low-dimensional representation of ${\\bm{\\Sigma}}$ enriches the prior distribution over the weights, it also restrains the expressiveness of the feature map in the kernel by projecting the $P$-dimensional features ${\\bm{J}}(\\param_0, {\\bm{X}})$ on a subspace of size $s \\ll P$ via ${\\bm{Q}}$.\nThis presents a trade-off: we can use the full feature map, but limit the weight-space prior covariance to be the identity matrix by keeping ${\\bm{\\Sigma}} = {\\bm{I}}$ (case \\approach-\\textsc{I}). 
Alternatively, we could learn a low-rank representation of ${\\bm{\\Sigma}}$ by randomly choosing $s$ orthogonal directions in $\\mathbb{R}^{P}$, with the risk that they could limit the expressiveness of the feature map if the directions are not relevant to the problem that is considered (case \\approach-\\textsc{R}).\nAs a compromise between these two cases, we can choose the projection matrix more intelligently and project to the most impactful subspace of the full feature map --- in this way, we can reap the benefits of a tuneable prior covariance while minimizing the useful features that the projection drops. To select this subspace, we construct this projection map by choosing the top $s$ eigenvectors of the Fisher information matrix (FIM) evaluated on the training dataset $\\mathcal{D}$ (case \\approach-\\textsc{F}). Recent work has shown that the FIM for deep neural networks tends to have rapid spectral decay \\citep{scod}, which suggests that keeping only a few of the top eigenvectors of the FIM is enough to encode an expressive task-tailored prior. See Appendix~\\ref{app:fim} for more details. \n\n\\subsubsection{Generalizing the structure to a mixture of Gaussians}\n\\label{sec:mixture}\n\nWhen learning on multiple clusters of tasks, $p(f)$ can become non-unimodal, and thus cannot be accurately described by a single GP.\nInstead, we can capture this multimodality by structuring $\\Tilde{p}_\\xi(f)$ as a \\textit{mixture} of Gaussian processes.\n\n\\textbf{Building a more general structure.}~\\, We assume that at train time, a task $\\mathcal{T}^i$ comes from any cluster $\\left\\{\\mathcal{C}_j \\right\\}_{j=1}^{j=\\alpha}$ with equal probability.\nThus, we choose to construct $\\Tilde{p}_\\xi(f)$ as an equal-weighted mixture of $\\alpha$ Gaussian processes.\n\nFor each element of the mixture, the structure is similar to the single cluster case, where the parameters of the cluster's weight-space prior are given by $({\\bm{\\mu}}_j, {\\bm{\\Sigma}}_j)$. We choose to have both the projection matrix ${\\bm{Q}}$ and the linearization point $\\param_0$ (and hence, the feature map $\\phi(\\cdot) = {\\bm{J}}(\\param_0,\\cdot)$) shared across the clusters. This yields improved computational efficiency, as we can compute the projected features once, simultaneously, for all clusters.\nThis yields the parameters $\\xi_\\alpha = (\\param_0, {\\bm{Q}}, ({\\bm{\\mu}}_1, {\\bm{s}}_1), \\ldots, ({\\bm{\\mu}}_\\alpha, {\\bm{s}}_\\alpha))$.\n\nThis can be viewed as a mixture of linear regression models, with a common feature map but \nseparate, independent prior distributions over the weights for each cluster. These separate distributions are encoded using the low-dimensional representations ${\\bm{S}}_j$ for each ${\\bm{\\Sigma}}_j$. 
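\n\nAs a rough sketch of how the shared feature map interacts with the per-cluster parameters, we continue the illustrative snippets above; the projection size $s$ and the random, non-orthonormalized ${\\bm{Q}}$ below merely stand in for the choices discussed in Section~\\ref{sec:prior_covariance}.\n\\begin{verbatim}\ndef cluster_prior_predictive(theta0, Q, mu_j, s_j, X_ctx, sigma_eps=0.05):\n    # prior predictive of cluster j, with Sigma_j = Q^T diag(s_j^2) Q\n    J = jac(theta0, X_ctx)           # (N_y K, P), shared by all clusters\n    A = J @ Q.T                      # (N_y K, s): projected features\n    mean = J @ mu_j\n    cov = (A * s_j**2) @ A.T + sigma_eps**2 * jnp.eye(J.shape[0])\n    return mean, cov\n\ns_dim = 100                          # projection dimension, s << P\nQ = jax.random.normal(jax.random.PRNGKey(1), (s_dim, theta0.size))\n# one (mu_j, s_j) pair per cluster; theta0 (hence J) and Q are shared\n\\end{verbatim}\nThe per-cluster NLLs obtained from these prior predictive distributions are then combined into the mixture likelihood described below.\n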
\nNotice how this is a generalization of the single cluster case, for when $\\alpha=1$, $\\Tilde{p}_\\xi(f)$ becomes a Gaussian and $\\xi_\\alpha = \\xi$\\footnote{In theory, it is possible to drop ${\\bm{Q}}$ and extend the identity covariance case to the multi-cluster setting; however, this leads to each cluster having an identical covariance function, and thus is not effective at modeling heterogeneous behaviors among clusters.}.\n\n\n\\textbf{Prediction and likelihood computation.}~\\, The NLL of a batch of inputs under this mixture model can be computed as\n\\begin{equation}\n \\label{eq:nll_mixture}\n \\mathrm{NLL}_{\\text{mixt}}(\\xinput^i, \\youtput^i) = \\log \\alpha - \\log \\add \\exp (-\\mathrm{NLL}_1(\\xinput^i, \\youtput^i), \\ldots, -\\mathrm{NLL}_\\alpha(\\xinput^i, \\youtput^i)),\n\\end{equation}\nwhere $\\mathrm{NLL}_j(\\xinput^i, \\youtput^i)$ is the NLL with respect to each individual Gaussian, as computed in \\eqref{eq:single-nll}, and $\\log\\add\\exp$ computes the logarithm of the sum of the exponential of the arguments, taking care to avoid underflow issues.\n\nTo make exact predictions, we would require conditioning this mixture model. As this is not directly tractable, we propose to first \\textit{infer the cluster} from which a task comes from, by identifying the Gaussian $\\mathcal{G}_{j_0}$ that yields the highest likelihood for the context data $\\left( \\xinput^i, \\youtput^i \\right)$. Then, we can \\textit{adapt} by conditioning $\\mathcal{G}_{j_0}$ with the context data and finally \\textit{infer} by evaluating the resulting posterior distribution on the queried inputs $\\xinput^i_*$.\n\n\\subsection{Meta-training the Parametric Task Distribution}\nThe key to our meta-learning approach is to estimate the quality of $\\Tilde{p}_\\xi(f)$ via the NLL of context data from training tasks, and use its gradients to update the parameters of the distribution $\\xi$.\nOptimizing this loss over tasks in the dataset draws $\\Tilde{p}_\\xi(f)$ closer to the empirical distribution present in the dataset, and hence towards the true distribution $p(f)$.\n\n\nWe present three versions of \\textsc{UnLiMiTD}, depending on the choice of structure of the prior covariance over the weights (see Section~\\ref{sec:prior_covariance} for more details).\n\\approach-\\textsc{I}{} (Algorithm~\\ref{alg:meta_training_identity}) is the meta-training with the fixed identity prior covariance. 
\\approach-\\textsc{R}{} and \\approach-\\textsc{F}{} (Algorithm~\\ref{alg:meta_training_learnt_cov}) learn a low-dimensional representation of that prior covariance, either with random projections or with FIM-based projections.\n\n\\begin{algorithm}[t]\n\\caption{\\footnotesize \\approach-\\textsc{I}: meta-training with identity prior covariance}\n\\footnotesize\n\\label{alg:meta_training_identity}\n\\begin{algorithmic}[1]\n \\State Initialize $\\param_0$, ${\\bm{\\mu}}$.\n \\ForAll{epoch}\n \\State Sample $n$ tasks $\\{ \\mathcal{T}^i, (\\xinput^i, \\youtput^i) \\}_{i=1}^{i=n}$\n \\ForAll{$\\mathcal{T}^i, (\\xinput^i, \\youtput^i)$}\n \\State $NLL_i \\gets \\Call{GaussNLL}{\\youtput^i; {\\bm{J}}{\\bm{\\mu}},~ {\\bm{J}}\\jac^\\top + {\\bm{\\Sigma}}_\\varepsilon}$ \\Comment{${\\bm{J}} = {\\bm{J}}(\\param_0, \\xinput^i)$}\n \\EndFor\n \\State Update $\\param_0$, ${\\bm{\\mu}}$ with $\\nabla_{\\param_0 \\cup {\\bm{\\mu}}} \\sum_i NLL_i$\n \\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[t]\n\\caption{\\footnotesize \\approach-\\textsc{R}{} and \\approach-\\textsc{F}{}: meta-training with a learnt covariance}\n\\footnotesize\n\\label{alg:meta_training_learnt_cov}\n\\begin{algorithmic}[1]\n \\If{using random projections}\n \\State Find random projection ${\\bm{Q}}$\n \\State Initialize $\\param_0$, ${\\bm{\\mu}}$, ${\\bm{s}}$\n \\ElsIf{using FIM-based projections}\n \\State Find intermediate $\\param_0$, ${\\bm{\\mu}}$ with \\approach-\\textsc{I}{} \\Comment{see Alg.~\\ref{alg:meta_training_identity}}\n \\State Find ${\\bm{Q}}$ via \\Call{FIMProj}{s}; initialize ${\\bm{s}}$. \\Comment{see Alg.~\\ref{alg:fim_proj}}\n \\EndIf\n \\ForAll{epoch}\n \\State Sample $n$ tasks $\\{ \\mathcal{T}^i, (\\xinput^i, \\youtput^i) \\}_{i=1}^{i=n}$\n \\ForAll{$\\mathcal{T}^i, (\\xinput^i, \\youtput^i)$}\n \\State $NLL_i \\gets \\Call{GaussNLL}{\\youtput^i; {\\bm{J}}{\\bm{\\mu}},~ {\\bm{J}} {\\bm{Q}}^\\top \\diag{{\\bm{s}}^2} {\\bm{Q}} {\\bm{J}}^\\top + {\\bm{\\Sigma}}_\\varepsilon}$ \\Comment{${\\bm{J}} = {\\bm{J}}(\\param_0, \\xinput^i)$}\n \\EndFor\n \\State Update $\\param_0$, ${\\bm{\\mu}}$, ${\\bm{s}}$ with $\\nabla_{\\param_0 \\cup {\\bm{\\mu}} \\cup {\\bm{s}}} \\sum_i NLL_i$\n \\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\n\\textbf{Computing the likelihood.}~\\,\nIn the algorithms, the function \\Call{GaussNLL}{$\\youtput^i$; $m$, $K$} stands for NLL of $\\youtput^i$ under the Gaussian $\\mathcal{N}(m, K)$ (see \\eqref{eq:single-nll}).\nIn the mixture case, we instead use \\Call{MixtNLL}{}, which wraps \\eqref{eq:nll_mixture} and calls \\Call{GaussNLL}{} for the individual NLL computations (see discussion in Section~\\ref{sec:mixture}).\nIn this case, ${\\bm{\\mu}}$ becomes $\\{{\\bm{\\mu}}_j\\}_{j=1}^{j=\\alpha}$ and ${\\bm{s}}$ becomes $\\{{\\bm{s}}_j\\}_{j=1}^{j=\\alpha}$ when applicable.\n\n\\textbf{Finding the FIM-based projections.}~\\,\nThe FIM-based projection matrix aims to identify the elements of $\\phi = {\\bm{J}}(\\param_0, {\\bm{X}})$ that are most relevant for the problem (see Section~\\ref{sec:prior_covariance} and Appendix~\\ref{app:fim}).\nHowever, this feature map evolves during training, because it is $\\param_0$-dependent.\nHow do we ensure that the directions we choose for ${\\bm{Q}}$ remain relevant during training?\nWe leverage results from \\citet{ntk_evolution}, stating that the NTK (the kernel associated with the Jacobian feature map, see Section~\\ref{sec:linearization}) changes significantly at the beginning of training and that its evolution slows down 
as training goes on. This suggests that as a heuristic, we can compute the FIM-based directions after partial training, as they are unlikely to deviate much after the initial training. \nFor this reason, \\approach-\\textsc{F}{}\n(Algorithm~\\ref{alg:meta_training_learnt_cov}) first calls \\approach-\\textsc{I}{} (Algorithm~\\ref{alg:meta_training_identity}), which yields intermediate parameters $\\param_0$ and ${\\bm{\\mu}}$, before computing the FIM-based ${\\bm{Q}}$.\nThen the usual training takes place, with ${\\bm{s}}$ learned in addition to $\\param_0$ and ${\\bm{\\mu}}$.\n\n\\section{Related work}\n\\label{sec:related_work}\n\\textbf{Bayesian inference with linearized DNNs.}~\\,\nBayesian inference with neural networks is often intractable because the posterior predictive rarely has a closed-form expression.\nWhereas \\textsc{UnLiMiTD}{} linearizes the network to allow for practical Bayesian inference, existing work has used other approximations to tractably express the posterior.\nFor example, it has been shown that in the infinite-width approximation, the posterior predictive of a Bayesian neural network behaves like a GP \\citep{shallow_nns_infinite_width, dnns_infinite_wdith}. This analysis can in some cases yield a good approximation to the Bayesian posterior of a DNN \\citep{cnns_infinite_width}.\nIt is also common to use Laplace's method to approximate the posterior predictive by a Gaussian distribution and allow practical use of the Bayesian framework for neural networks.\nThis approximation relies in particular on the computation of the Hessian of the network: this is in general intractable, and most approaches use the so-called Gauss-Newton approximation of the Hessian instead \\citep{laplace_scalable}.\nRecently, it has been shown that the Laplace method using the Gauss-Newton approximation is equivalent to working with a certain linearized version of the network and its resulting posterior GP \\citep{laplace_linearization}.\n\nBayesian inference with linearized neural networks has been applied in a wide range of settings; for example, it has enabled recent advances in transfer learning.\n\\citet{maddox} have linearized pre-trained networks and performed domain adaptation by conditioning the prior predictive with data from the new task: the posterior predictive is then used to make predictions. Our approach leverages a similar adaptation method and demonstrates how the prior distribution can be learned in a meta-learning setup.\n\n\\textbf{Meta-learning.}~\\,\nMAML is a meta-learning algorithm whose adaptation step consists of a few steps of gradient descent \\citep{maml}.\nIt has the benefit of being model-agnostic (it can be used on any model for which we can compute gradients w.r.t. the weights), whereas \\textsc{UnLiMiTD}{} requires the model to be a differentiable regressor.\nMAML has been further generalized to probabilistic meta-learning models such as PLATIPUS or BaMAML \\citep{bayesian_maml, probabilistic_maml}, where the simple gradient descent step is augmented to perform approximate Bayesian inference. These approaches, like ours, learn (during meta-training) and make use of (at test-time) a prior distribution on the weights. In contrast, however, \\textsc{UnLiMiTD}{} uses exact Bayesian inference at test-time.\nMAML has also been improved for multimodal meta-learning via MMAML \\citep{mmaml, revisit_mmaml}. 
Similarly to our method, MMAML adds a step to identify the cluster from which the task comes \\citep{mmaml}.\nOoD detection in meta-learning has been studied by \\citet{ood_maml}, who build upon MAML to perform OoD detection in the classification setting, to identify unseen classes during training.\n\\citet{meta_learning_ood} also implemented OoD detection for classification, by learning a Gaussian mixture model on a latent space. \n\\textsc{UnLiMiTD}{} extends these ideas to the regression task, aiming to identify when test data is drawn from an unfamiliar function.\n\nALPaCA is a Bayesian meta-learning algorithm for neural networks, where only the last layer is Bayesian \\citep{alpaca}.\nSuch a framework yields an exact linear regression that uses the activations right before the last layer as its feature map.\nOur work is a generalization of ALPaCA, in the sense that \\textsc{UnLiMiTD}{} restricted to the last layer matches ALPaCA's approach.\nMore on this link between the methods is discussed in Appendix~\\ref{app:link_with_alpaca}.\n\n\\section{Results and discussion}\n\\label{sec:results}\nWe wish to evaluate four key aspects of \\textsc{UnLiMiTD}.\n(1) At test time, how do the probabilistic predictions compare to baselines?\n(2) How well does the detection of context data from OoD tasks perform?\n(3) How do these results hold in the multimodal setting?\n(4) Which approach performs best among (a) the identity covariance (\\approach-\\textsc{I}), (b) the low-dimensional covariance with random directions (\\approach-\\textsc{R}), and (c) the compromise using FIM-based directions (\\approach-\\textsc{F}) (see trade-off in Section~\\ref{sec:prior_covariance})?\nThat is, which is best: learning a rich prior distribution over the weights, keeping the full feature map, or a compromise between the two?\n\nWe consider a cluster of sine tasks, one of linear tasks, and one of quadratic tasks; these regression problems are inspired by \\citet{mmaml}.\nDetails on the problems can be found in Appendix~\\ref{app:problem-details}.\n\n\\textbf{Unimodal meta-learning: The meta-learned prior accurately fits the tasks.}~\\,\nFirst, we investigate the performance of \\textsc{UnLiMiTD}{} on a unimodal task distribution consisting of sinusoids of varying amplitude and phase, using the single GP structure for $\\Tilde{p}_\\xi(f)$.\nWe compare the performance of \\approach-\\textsc{I}, \\approach-\\textsc{R}{} and \\approach-\\textsc{F}.\nWe also compare the results of training with an infinite number of available sine tasks (infinite task dataset) and with a finite number of available tasks (finite task dataset).\nMore training details can be found in Appendix~\\ref{app:train-details-single}.\nExamples of predictions at test time are available in Figure~\\ref{fig:single-predictions}, along with confidence levels.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/predictions\/single_pred_fim_infinite_1.pdf}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/predictions\/single_pred_fim_infinite_5.pdf}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/predictions\/single_pred_fim_infinite_10.pdf}\n \\end{subfigure}\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n 
\\includegraphics[width=\\textwidth]{images\/predictions\/single_pred_fim_finite_1.pdf}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/predictions\/single_pred_fim_finite_5.pdf}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/predictions\/single_pred_fim_finite_10.pdf}\n \\end{subfigure}\n \\caption{Examples of predictions for a varying number of context inputs $K$, after meta-training with \\approach-\\textsc{F}. Top: \\approach-\\textsc{F}, infinite task dataset. Bottom: \\approach-\\textsc{F}, finite task dataset. The standard deviation is from the posterior predictive distribution.\n Note how the uncertainty levels are coherent with the actual prediction error.\n Also, note how uncertainty decreases when there is more context data.\n Notice how \\approach-\\textsc{F}{} recovers the shape of the sine even with a low number of context inputs.\n Finally, note how \\approach-\\textsc{F}{} is able to reconstruct the sine even when trained on fewer tasks (bottom). More comprehensive plots are available in Figure~\\ref{fig:single-predictions-full}.}\n \\label{fig:single-predictions}\n\\end{figure}\n\nIn both OoD detection and prediction quality, \\approach-\\textsc{R}{} and \\approach-\\textsc{F}{} \nperform better than \\approach-\\textsc{I}{} (Figure~\\ref{fig:single-performance}), and this is reflected in the quality of the learned prior $\\Tilde{p}_\\xi(f)$ in each case (see Appendix~\\ref{app:additional-single}).\nWith respect to the trade-off mentioned in Section~\\ref{sec:prior_covariance}, we find that for small networks, a rich prior over the weights matters more than the full expressiveness of the feature map, making both \\approach-\\textsc{R}{} and \\approach-\\textsc{F}{} appealing.\nHowever, further experiments on a deep-learning image-domain problem show that this conclusion does not hold for deep networks (see Appendix~\\ref{app:deep}), where keeping an expressive feature map is important (\\approach-\\textsc{I}{} and \\approach-\\textsc{F}{} are appealing in that case).\nThus, \\approach-\\textsc{F}{} is the variant that we retain, as it achieves similar or better performance than the other variants in all situations.\n\nNote how \\approach-\\textsc{F}{} outperforms MAML: it achieves much better generalization when decreasing the number of context samples $K$ (Figure~\\ref{fig:single-performance}).\nIndeed, \\approach-\\textsc{F}{} trained with a finite task dataset performs better than MAML with an infinite task dataset: it captures the common features of the tasks better despite having access to a smaller task dataset.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/single_data.pdf}\n \\caption{Examples of context data from in-dist. and OoD tasks}\n \\label{fig:single-data}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{single_auc.pdf}\n \\caption{AUC for OoD detection}\n \\label{fig:single-auc}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{single_mse.pdf}\n \\caption{MSE on predictions}\n \\label{fig:single-mse}\n \\end{subfigure}\n \\caption{Unimodal case: Performance of \\textsc{UnLiMiTD}{} for OoD detection and inference, as a function of the number of context datapoints $K$. 
The training dataset consists of sinusoids, while OoD tasks are lines and quadratic tasks. We compare the different variants (\\approach-\\textsc{I}, \\approach-\\textsc{R}{} and \\approach-\\textsc{F}), and against MAML for predictions. We also compare training with a finite and an infinite task dataset. Note how \\approach-\\textsc{R}{} and \\approach-\\textsc{F}{} achieve efficient OoD detection and outperform MAML in predictions. Also, note how MAML trained with an infinite task dataset performs worse than \\approach-\\textsc{R}{} and \\approach-\\textsc{F}{} trained on a finite task dataset.}\n \\label{fig:single-performance}\n\\end{figure}\n\n\\textbf{Multimodal meta-learning: Comparing the mixture model against a single GP.}~\\,\nNext, we consider a multimodal task distribution with training data consisting of sinusoids as well as lines with varying slopes. \nHere, we compare the performance of the mixture structure and of the single GP structure (see discussion in Section~\\ref{sec:mixture}): in both cases, we use \\approach-\\textsc{F}.\nMore training details can be found in Appendix~\\ref{app:train-details-multi}.\n\nBoth OoD detection and prediction performance are better with the mixture structure than with the single GP structure (Figure~\\ref{fig:multi-performance}), indicating that the mixture model is a useful structure for $\\Tilde{p}_\\xi(f)$.\nThis is reflected in the quality of the learned priors (see Appendix \\ref{app:additional-multi} for qualitative results including samples from the learned priors).\nNote how the single GP structure still performs better than both MAML and MMAML for prediction, especially in the low-data regime.\nThis demonstrates the strength of our probabilistic approach for multimodal meta-learning: even if the probabilistic assumptions are not optimal, the predictions remain accurate and can beat baselines.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/multi_data.pdf}\n \\caption{Examples of context data from in-dist. and OoD tasks}\n \\label{fig:multi-data}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{multi_auc.pdf}\n \\caption{AUC for OoD detection}\n \\label{fig:multi-auc}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{multi_mse.pdf}\n \\caption{MSE on predictions}\n \\label{fig:multi-mse}\n \\end{subfigure}\n \\caption{Multimodal case: Performance of \\textsc{UnLiMiTD}{} for OoD detection and inference, as a function of the number of context datapoints $K$. The training dataset includes both sines and lines, while OoD tasks are quadratic functions. We compare the different variants (\\approach-\\textsc{F}{} with a single GP or a mixture model), and against MAML\/MMAML for predictions. Note how both versions of \\approach-\\textsc{F}{} yield better predictions than the baselines. 
In particular, even with a single GP, \\approach-\\textsc{F}{} outperforms the baselines.}\n \\label{fig:multi-performance}\n\\end{figure}\n\n\\section{Conclusion}\\label{sec:conclusion}\n\nWe propose \\textsc{UnLiMiTD}{}, a novel meta-learning algorithm that models the underlying task distribution using a parametric and tuneable distribution, leveraging Bayesian inference with linearized neural networks.\nWe compare three variants, and show that among these, the Fisher-based parameterization, \\approach-\\textsc{F}{}, effectively balances scalability and expressivity, even for deep learning applications.\nWe have demonstrated that (1) our approach makes efficient probabilistic predictions on in-distribution tasks, which compare favorably to, and often outperform, baselines, (2) it allows for effective detection of context data from OoD tasks, and (3) that both these findings continue to hold in the multimodal task-distribution setting.\n\nThere are several avenues for future work. One direction entails understanding how the performance of \\approach-\\textsc{F}{} is impacted if the FIM-based directions are computed too early in the training and the NTK changes significantly afterwards.\nOne could also generalize our approach to non-Gaussian likelihoods, making \\textsc{UnLiMiTD}{} effective for classification tasks.\nFinally, further research can push the limits of multimodal meta-learning, e.g., by implementing non-parametric Bayesian methods to automatically infer an optimal number of clusters, thereby eliminating a hyperparameter of the current approach.\n\n\n\\subsubsection*{Acknowledgements}\nThe authors acknowledge the MIT SuperCloud \\citep{supercloud} and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper.\nThe authors would like to thank MISTI MIT-France for supporting this research.\nC.A. further acknowledges support from Mines Paris Foundation. N.A. 
acknowledges support from the Edgerton Career Development Professorship.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"
\\begin{document}\n\n\\begin{frontmatter}\n\n\n\n\\title{GAMBIT and its Application in the Search for Physics Beyond the Standard Model}\n\n\n\\author[imperial,oslo]{Anders Kvellestad}\n\\author[imperial,uq]{Pat Scott}\n\\author[adelaide]{Martin White}\n\\address[imperial]{Department of Physics, Imperial College London, Blackett Laboratory, Prince Consort Road, London SW7 2AZ, UK}\n\\address[oslo]{Department of Physics, University of Oslo, N-0316 Oslo, Norway}\n\\address[uq]{School of Mathematics and Physics, The University of Queensland, St.\\ Lucia, Brisbane, QLD 4072, Australia}\n\\address[adelaide]{Department of Physics, University of Adelaide, Adelaide, SA 5005, Australia}\n\n\\begin{abstract}\nThe Global and Modular Beyond-Standard Model Inference Tool (\\textsf{GAMBIT}\\xspace) is an open source software framework for performing global statistical fits of particle physics models, using a wide range of particle and astroparticle data. In this review, we describe the design principles of the package, the statistical and sampling frameworks, the experimental data included, and the first two years of physics results generated with it. This includes supersymmetric models, axion theories, Higgs portal dark matter scenarios and an extension of the Standard Model to include right-handed neutrinos. Owing to the broad spectrum of physics scenarios tackled by the \\textsf{GAMBIT}\\xspace community, this also serves as a convenient, self-contained review of the current experimental and theoretical status of the most popular models of dark matter.\n\\end{abstract}\n\n\n\\begin{keyword}\n Supersymmetry \\sep Axions \\sep Global fits \\sep Right-handed neutrinos \\sep Higgs portal dark matter\n\n\n\n\\end{keyword}\n\n\\end{frontmatter}\n\n\n\\section{Introduction}\n\nThe core of the scientific method in the physical sciences is the identification of mathematical theories that describe some aspect of our Universe in terms of a number of free parameters. The use of global statistical fits, in either a Bayesian or frequentist framework, allows us to find the preferred parameter values of a candidate theory given experimental data, and to compare the abilities of different models to describe that data. Our current knowledge of particle physics is enshrined in the Standard Model (SM), and global fit techniques are routinely used to provide the most accurate estimates of the parameters of the neutrino sector~\\cite{Bari13,Tortola14,NuFit15}, the CKM matrix~\\cite{CKMFitter}, and the electroweak sector~\\cite{ZFitter,GFitter11}.\n\nDespite its incredible successes in explaining experimental data, the SM still faces a number of experimental and theoretical challenges. Many if not all of these can be explained by new physics Beyond the Standard Model (BSM). 
Such physics could show up in a number of experiments, including direct searches for new particles at high energy particle colliders~\\cite{ATLAS_diphoton,ATLAS15,CMS_SMS}, measurements of rare Standard Model processes~\\cite{gm2exp,BelleII,CMSLHCb_Bs0mumu}, direct searches for dark matter~\\cite{XENON2013,PICO60,LUX2016}, indirect astroparticle searches for distant annihilation or decay of dark matter~\\cite{BringmannWeniger,LATdwarfP8,IC79_SUSY}, and cosmological observations~\\cite{Planck15cosmo,Slatyer15a,keVsterile_whitepaper}. Unfortunately, despite the existence of many candidate theories beyond the SM, there is no unambiguous prediction of what we expect to observe, or in which experimental field we expect to observe it. It is therefore highly likely that the next theory of particle physics will have to be pieced together by combining clues from a number of disparate fields and experiments. In the process, it is essential to also consistently combine \\emph{null} results in experiments that had the potential to discover a given candidate theory, but failed to do so. Even in the complete absence of positive discoveries in the near future, it is essential to determine which candidate BSM theories are now comfortably excluded, and which regions of which candidate theories are now the most amenable to future discovery.\n\nGlobal fits of BSM theories have thus been a very active area of research for well over a decade \\cite{Baltz04,Allanach06,SFitter, Ruiz06}, with increases in computing power opening the option of exploring models with larger and larger parameter spaces. Nevertheless, it remains a considerable challenge to efficiently explore the high-dimensional parameter spaces of candidate theories whilst rigorously calculating likelihoods for a large range of experiments, each of which may require a costly simulation procedure. To further complicate matters, one must consistently handle systematic uncertainties that may be correlated across different datasets, resulting from either instrumental effects, or our imprecise knowledge of the nuclear, astro- or particle physics relevant to a given set of experiments. Prior to 2017, most global fits were focussed on supersymmetric theories, involving dedicated software that was built from the ground up with a knowledge of the supersymmetric parameters~\\cite{Baltz04,Allanach06,SFitter, Ruiz06,Strege15,Fittinocoverage,Catalan:2015cna,MasterCodeMSSM10,2007NewAR..51..316T,2007JHEP...07..075R,Roszkowski09a,Martinez09,Roszkowski09b,Roszkowski10,Scott09c,BertoneLHCDD,SBCoverage,Nightmare,BertoneLHCID,IC22Methods,SuperbayesXENON100,SuperBayesGC, Buchmueller08,Buchmueller09,MasterCodemSUGRA,MasterCode11,MastercodeXENON100,MastercodeHiggs,Buchmueller:2014yva,Bagnaschi:2016afc,Bagnaschi:2016xfg,Allanach:2007qk,Abdussalam09a,Abdussalam09b,Allanach11b,Allanach11a,Farmer13,arXiv:1212.4821,Fowlie13,Henrot14,Kim:2013uxa,arXiv:1503.08219,arXiv:1604.02102,Han:2016gvr, Bechtle:2014yna, arXiv:1405.4289, arXiv:1402.5419, MastercodeCMSSM, arXiv:1312.5233, arXiv:1310.3045, arXiv:1309.6958, arXiv:1307.3383, arXiv:1304.5526, arXiv:1212.2886, Strege13, Gladyshev:2012xq, Kowalska:2012gs, Mastercode12b, arXiv:1207.1839, arXiv:1207.4846, Roszkowski12, SuperbayesHiggs, Fittino12, Mastercode12, arXiv:1111.6098, Fittino, Trotta08, Fittino06,\narXiv:1608.02489, arXiv:1507.07008, Mastercode15, arXiv:1506.02499, arXiv:1504.03260, Mastercode17}. 
These results typically covered low-dimensional subsets of the minimal supersymmetric SM (MSSM) or, in some cases the next-to-minimal variant, with relatively few global studies of other theories completed~\\cite{Cheung:2012xb,Arhrib:2013ela,Sming14,Chowdhury15,Liem16,LikeDM,Banerjee:2016hsk,Matsumoto:2016hbs,Cuoco:2016jqt,Cacchio:2016qyh,BertoneUED,Chiang:2018cgb,hepfit,Matsumoto:2018acr}.\n\nIn 2017, the \\textsf{GAMBIT}\\xspace collaboration released the Global and Modular Beyond-Standard Model Inference Tool (\\textsf{GAMBIT}\\xspace) \\cite{gambit}, an open-source package able to produce results in both the Bayesian and frequentist statistical frameworks, and easily extendible to new BSM models and new experimental datasets. A fully modular design enables much of the code to be reused when changing the theoretical model of interest. \\textsf{GAMBIT}\\xspace includes a wide variety of efficient sampling algorithms for posterior evaluation and optimisation, and ensures computational efficiency through massive, multi-level parallelisation, both of the sampling algorithms and individual likelihood calculations.\n\nThe purpose of this article is to give a brief introduction to the \\textsf{GAMBIT}\\xspace software and science programme, reviewing the most important results obtained with \\textsf{GAMBIT}\\xspace in the first few years since its initial release. These serve to illustrate the versatility of the code in attacking completely different BSM models, and the constraining power of the highly detailed and rigorous simulations of different particle and astroparticle datasets in \\textsf{GAMBIT}\\xspace. Given the centrality of dark matter (DM) in the current search for BSM physics, this review also serves as a convenient summary of the status of the most widely-studied DM candidates.\n\nIn Section~\\ref{sec:gambit}, we describe the structure and design of the \\textsf{GAMBIT}\\xspace package, including the core framework, the means by which \\textsf{GAMBIT}\\xspace supports generic BSM models, the sampling and statistics module, and the various physics modules able to produce theoretical predictions and experimental likelihoods. In Section~\\ref{sec:physics}, we summarise the results of recent \\textsf{GAMBIT}\\xspace global fits of various supersymmetric theories, Higgs portal and axion DM models, and a right-handed neutrino extension of the SM. We then conclude in Section~\\ref{sec:summary}.\n\n\\section{The \\textsf{GAMBIT}\\xspace software}\n\\label{sec:gambit}\n\nThe core \\textsf{GAMBIT}\\xspace software is written in \\Cppeleven, but interfaces seamlessly with extensions and existing physics codes written in \\python, \\Mathematica, \\Fortran and \\plainC. Since the release of \\textsf{GAMBIT}\\xspace \\textsf{1.0.0} in 2017 \\cite{gambit}, the most notable updates have been versions \\textsf{1.1} (adding support for \\Mathematica) \\cite{gambit_addendum}, \\textsf{1.2} (adding support for \\python and higher-spin Higgs portal models) \\cite{HP}, \\textsf{1.3} (adding support for axion and axion-like particles) \\cite{Axions} and \\textsf{1.4} (adding support for right-handed neutrinos) \\cite{RHN}. The current public release is \\textsf{v1.4.2}. The source code is openly available from \\href{http:\/\/gambit.hepforge.org}{http:\/\/gambit.hepforge.org} under the 3-clause BSD license.\n\n\\subsection{Core design}\n\\label{sec:core}\n\nThe core principles of \\textsf{GAMBIT}\\xspace's software design are modularity and flexibility. 
All theoretical predictions and experimental likelihood evaluations are separated into a series of smaller, self-contained sub-calculations, with each sub-calculation represented by a single function. Each function is assigned a metadata string that identifies the physical quantity that the function is able to calculate. Examples might be the mass of the lightest Higgs boson, or the likelihood for the latest run of the LUX direct detection experiment. Functions are further tagged with additional metadata strings indicating any other physical inputs required for them to run. In the case of the Higgs mass, one might require e.g. the SM electroweak vacuum expectation value and the masses and couplings of various other particles. In the case of the LUX likelihood, one might require the number of events observed by LUX, its detector efficiency as a function of nuclear recoil energy, and the theoretically predicted event rate.\nAt runtime, \\textsf{GAMBIT}\\xspace identifies which functions are actually required for the analysis of a given theory, and connects them dynamically in order to enable the calculation in the most efficient manner possible.\n\nThe individual functions are grouped together according to physics theme, into seven different \\textbf{physics modules}. We describe these specific modules in Section \\ref{sec:modules} below. Individual functions are thus referred to as \\textbf{module functions}. The module functions are the true building blocks of a \\textsf{GAMBIT}\\xspace analysis, allowing the code to automatically adapt itself to incorporate new observables, likelihoods, theories and experimental datasets. The metadata string associated with the output of a module function is referred to as its \\textbf{capability}, and the metadata associated with the required inputs are referred to as \\textbf{dependencies}. The process of dynamically connecting the outputs of module functions to the inputs of others at runtime thus consists of matching dependency strings to the capability strings of other functions (and ensuring that their \\Cpp types also match). This process is known as \\textbf{dependency resolution}, and is performed by the \\textsf{GAMBIT}\\xspace dependency resolver.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width = 0.75\\textwidth]{figures\/dep_tree}\n\t\\caption{An example \\textsf{GAMBIT}\\xspace dependency tree for a simple fit of flavour Wilson coefficients to $b\\to s\\gamma$ and $B\\to ll$ data. Boxes (graph nodes) correspond to single module functions. Function capabilities are marked in red, and return types of the functions, their actual function names and enveloping modules are indicated in black. Arrows (graph edges) indicate the direction of information flow, from the capability (output) of one function to the dependencies (inputs) of others. The input file used to instigate this fit (\\texttt{WC\\_lite.yaml}) is one of the example files distributed with \\textsf{GAMBIT}\\xspace. This particular fit makes use of the \\textsf{GAMBIT}\\xspace modules \\textsf{FlavBit}\\xspace, \\textsf{SpecBit}\\xspace and \\textsf{DecayBit}\\xspace, as well as the backend (external package) \\superiso \\textsf{4.1} \\cite{Mahmoudi:2007vz,Mahmoudi:2008tp,Mahmoudi:2009zz}.}\n\t\\label{fig:dep_tree}\n\\end{figure}\n\nThe result is a (potentially extremely large) directed graph connecting the outputs and inputs of different module functions, known as a \\textbf{dependency tree}. 
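\n\nTo give a more concrete (and deliberately simplified) picture of this matching, the short \\Cpp sketch below declares two toy `module functions', each carrying a capability label and a list of dependency labels, and evaluates each one as soon as all of its inputs are available. Every name, type and number here is invented purely for illustration; this is not \\textsf{GAMBIT}\\xspace's actual interface, which is built on much richer metadata and code generation.\n\\begin{verbatim}\n// Toy capability/dependency matching: schematic only, not GAMBIT code.\n// Real capabilities are metadata strings; enum labels keep this example short.\n#include <functional>\n#include <iostream>\n#include <map>\n#include <vector>\n\nenum Capability { model_parameters, higgs_mass, lux_loglike };\n\nstruct ModuleFunction {\n  Capability capability;                 // what this function can provide\n  std::vector<Capability> dependencies;  // capabilities it needs as input\n  std::function<double(const std::map<Capability,double>&)> compute;\n};\n\nint main() {\n  std::vector<ModuleFunction> funcs = {\n    {higgs_mass, {model_parameters},\n     [](const std::map<Capability,double>& v)\n       { return 125.0 + 0.1*v.at(model_parameters); }},\n    {lux_loglike, {higgs_mass},\n     [](const std::map<Capability,double>& v)\n       { return -0.5*(v.at(higgs_mass) - 125.1); }}\n  };\n\n  // Boundary condition: the sampler chooses the model parameters.\n  std::map<Capability,double> results = {{model_parameters, 1.0}};\n\n  // Brute-force dependency resolution: run any function whose inputs are\n  // all available, and repeat until nothing new can be computed.\n  bool progress = true;\n  while (progress) {\n    progress = false;\n    for (const auto& f : funcs) {\n      if (results.count(f.capability)) continue;\n      bool ready = true;\n      for (Capability d : f.dependencies)\n        if (!results.count(d)) ready = false;\n      if (ready) { results[f.capability] = f.compute(results); progress = true; }\n    }\n  }\n  std::cout << results[lux_loglike] << std::endl;  // the resulting toy log-likelihood\n}\n\\end{verbatim}\nIn \\textsf{GAMBIT}\\xspace itself this matching is carried out over thousands of functions, with the \\Cpp types of the connected quantities checked alongside the capability strings.\n\n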
The module functions themselves constitute graph nodes, and their resolved dependencies graph edges. An example of such a graph is shown in Fig.\\ \\ref{fig:dep_tree}. For such a graph to constitute a viable computational pathway to all theoretical predictions and experimental likelihoods of interest, two things are required. The first is for the actual numerical values of some dependencies to be known in advance. These are the parameters of the theory that the user wishes to analyse, and must be chosen `from on high' before the dependency tree can be evaluated. These are selected by the user's chosen statistical parameter sampling algorithm, discussed below in Section \\ref{sec:stats}.\n\nFrom the values of a model's parameters, all other intermediate quantities can be obtained, as long as the second criterion is also met. This condition is that no closed loops exist in the graph, i.e.\\ there are no dependencies of any module function upon things that can only be computed by knowing the result of the function. Many algorithms exist within graph theory for taking such a directed acyclic graph and obtaining a linear ordering of its nodes that respects the underlying structure of the graph. \\textsf{GAMBIT}\\xspace uses the \\textsf{Boost::Graph} library to obtain such an ordering, and then employs that ordering to evaluate the module functions in turn. This ensures that all module functions run before any other functions that depend upon their results. Within topologically equivalent subsets of the ordering, \\textsf{GAMBIT}\\xspace also further dynamically optimises the module function evaluation order for speed, according to previous function evaluation times and likelihoods.\n\nModule functions may also make use of functions provided by external packages, or \\textbf{backends}. These are also connected dynamically at runtime to module functions by the dependency resolver, in much the same way as it ensures that dependencies upon the results of other module functions are fulfilled. This layer of abstraction allows \\textsf{GAMBIT}\\xspace to provide its module functions with seamless and interchangeable access to functions from external codes written in \\plainC, \\Cpp, \\python\\textsf{2}, \\python\\textsf{3}, \\Mathematica and all variants of \\Fortran. The \\textsf{GAMBIT}\\xspace build system allows users to select and automatically download, configure and build whatever combination of backends they prefer to use, and the dependency resolver automatically adapts to the presence or absence of different backends when selecting which functions to connect to others. Backends presently supported in version \\textsf{1.4.2} of \\textsf{GAMBIT}\\xspace are \\textsf{Capt'n General}\\xspace \\cite{HP}, \\ds \\cite{darksusy4,darksusy}, \\ddcalc \\cite{DarkBit,HP}, \\textsf{FeynHiggs}\\xspace \\cite{Heinemeyer:1998yj,Heinemeyer:1998np,Degrassi:2002fi,Frank:2006yh,Hahn:2013ria}, \\gamlike \\cite{DarkBit}, \\gmtwocalc \\cite{gm2calc}, \\textsf{HiggsBounds}\\xspace \\cite{Bechtle:2008jh,Bechtle:2011sb,Bechtle:2013wla}, \\textsf{HiggsSignals}\\xspace \\cite{HiggsSignals}, \\textsf{micrOMEGAs}\\xspace \\cite{Belanger:2001fz,Belanger:2004yn,Belanger:2006is,Belanger:2008sj,Belanger:2010gh,Belanger:2013oya,micromegas}, \\nulike \\cite{IC22Methods,IC79_SUSY}, \\textsf{Pythia}\\xspace \\cite{Sjostrand:2006za,Sjostrand:2014zea}, \\spheno \\cite{Porod:2003um,Porod:2011nf}, \\superiso \\cite{Mahmoudi:2007vz,Mahmoudi:2008tp,Mahmoudi:2009zz}, \\susyhd \\cite{Vega:2015fna} and \\susyhit \\cite{Djouadi:2006bz}. 
Many more are also already supported in the current development version, which will be released in 2020.\n\n\\begin{figure}[tbp]\n\t\\centering\n\t\\includegraphics[width = \\textwidth]{figures\/BasicStructure}\n\t\\caption{The overall structure of a \\textsf{GAMBIT}\\xspace run, illustrating the roles of the input \\YAML file, modules, module functions, capabilities, dependencies, backends, backend functions, dependency resolver, hierarchical model database and the sampling machinery. The user specifies one or more models to scan in the input \\YAML file, and chooses likelihoods and observables to compute in the scan, making their choice by capability rather than by choosing specific functions. The dependency resolver automatically identifies and connects appropriate module and backend functions in order to facilitate the computation of the requested likelihoods and observables, and the scanning machinery (\\textsf{ScannerBit}\\xspace) selects parameter combinations to pass through the resulting dependency tree. From \\cite{gambit}.}\n\t\\label{fig:corechain}\n\\end{figure}\n\nActual runs of \\textsf{GAMBIT}\\xspace are driven by a single input file, in \\YAML format. In this file, the user selects the model(s) to analyse, gives details of which algorithms to use in order to sample the models' parameters, and provides a list of all likelihoods and physical observables that should be calculated in the scan. The model parameter values constitute one boundary condition for dependency resolution (the dependency tree must begin from the parameters), and the target likelihoods and observables the other (the final outputs of the tree must be the required likelihoods and observables). The dependency resolver is then responsible for identifying and filling in all the required steps in between. To help direct this process and break degeneracies in the valid choices available to the dependency resolver at each step, the \\YAML input file may also set rules that the dependency resolver must respect. These may be e.g.\\ restrictions about which functions should be selected to fill which specific dependencies, or which version of a given backend should be used throughout the run. These rules can be arbitrarily complicated, general or specific. A rule can also contain explicit keyword options that will be passed to all module functions that fulfil the rule, allowing enormous control to be exercised over the details of the individual calculations from a single input file.\n\nThe core design elements of \\textsf{GAMBIT}\\xspace described so far in this section are module functions, backend abstraction, dependency resolution, and an input format that borders on its own programming language. Together, these combine to provide an extremely flexible and extendible framework for performing global analyses of theories for BSM physics. Fig.\\ \\ref{fig:corechain} illustrates how all of these features work together to enable a \\textsf{GAMBIT}\\xspace scan. Further technical details can be found in Ref.\\ \\cite{gambit}.\n\n\n\\subsection{Model support}\n\\label{sec:models}\n\nAnother feature illustrated in Fig.\\ \\ref{fig:corechain} is the \\textsf{GAMBIT}\\xspace hierarchical model database. Models are defined both in terms of their parameters, and in terms of their relationships to each other via parameter translation routines. 
Models may descend from one another, meaning that a parameter combination in a child model can be translated `up' its family tree to a point in an appropriate subspace of its parent model, or in any other more distant ancestor model. Cross-family `friend' translation pathways can also be defined. These pathways allow module functions to be designed to work with one model, but to be used with another model without further alteration, so long as a translation pathway exists from one model to the other.\n\nModule functions, backend functions and all rules set in \\YAML files can be endowed with model-specific restrictions. This allows the model-dependence of every sub-calculation to be tracked explicitly, and for the dependency resolver to explicitly ensure that the entire dependency tree of every scan is validated for use with the model under investigation.\n\nThe complete model database of \\textsf{GAMBIT}\\xspace \\textsf{1.4.2} is shown in Fig.\\ \\ref{fig:model_tree}.\n\n\\begin{sidewaysfigure}[tbp]\n\t\\centering\n\t\\includegraphics[width = \\textheight]{figures\/model_tree}\\vspace{2mm}\n\t\\caption{Hierarchical model database of \\textsf{GAMBIT}\\xspace \\textsf{1.4.2}. Models are shown as boxes, child-to-parent translations as black arrows, and friend translations as red arrows.}\n\t\\label{fig:model_tree}\n\\end{sidewaysfigure}\n\n\n\\subsection{Sampling and statistics}\n\\label{sec:stats}\n\nIn carrying out a global statistical analysis of a BSM theory, one may be interested in determining which parameter combinations are able to explain the totality of observed data within a given model, and to what extent -- or one may be more interested in using the experimental data to choose between entire theories. The first of these tasks is parameter estimation, whereas the second is model comparison. There are two philosophically distinct ways of posing both these questions:\n\\begin{enumerate}\n\\item How probable is it that we would have observed the data that we have, if a model and a specific combination of its parameters were true?\n\\item How probable is it that a model (or a specific combination of its parameters) is true, given the data that we have observed to date?\n\\end{enumerate}\nQuestion 1 concerns frequentist statistics, whereas Question 2 is fundamentally Bayesian.\n\nIn the context of parameter estimation, the choice of question dictates whether the appropriate quantity to consider is the frequentist profile likelihood, or the Bayesian posterior. The profile likelihood for some parameters of interest $\\boldsymbol{\\theta}$ is the maximum value of the likelihood at each parameter combination $\\boldsymbol{\\theta}$, regardless of the values of any other parameters $\\boldsymbol{\\alpha}$:\n\\begin{equation}\n \\hat{\\mathcal{L}}(\\boldsymbol{\\theta}) = \\mathrm{max}_{\\boldsymbol{\\alpha}}\\,\\mathcal{L}(\\boldsymbol{\\theta},\\boldsymbol{\\alpha}),\n\\end{equation}\nwhere the parameters $\\boldsymbol{\\alpha}$ are other `nuisance' parameters not of direct interest. 
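\n\nAs a purely illustrative picture of what profiling means computationally, the short \\Cpp sketch below maximises an invented two-parameter Gaussian likelihood over its nuisance parameter on a brute-force grid; the likelihood, ranges and step size are made up for this example and bear no relation to \\textsf{ScannerBit}\\xspace's actual algorithms, which are described below.\n\\begin{verbatim}\n// Brute-force profiling of a toy likelihood: schematic only.\n#include <algorithm>\n#include <cmath>\n#include <iostream>\n\n// Hypothetical two-parameter chi-square: one measurement constrains\n// theta + alpha, a second constrains the nuisance parameter alpha alone.\ndouble loglike(double theta, double alpha) {\n  double chi2 = std::pow((theta + alpha - 1.0)/0.2, 2) + std::pow(alpha/0.5, 2);\n  return -0.5*chi2;\n}\n\nint main() {\n  // Profile likelihood of theta: maximise over alpha at each theta value.\n  for (double theta = 0.0; theta <= 2.0; theta += 0.5) {\n    double best = -1e300;\n    for (double alpha = -3.0; alpha <= 3.0; alpha += 0.001)\n      best = std::max(best, loglike(theta, alpha));\n    std::cout << theta << ' ' << best << std::endl;  // theta, profiled lnL\n  }\n}\n\\end{verbatim}\nIn realistic global fits the nuisance space is far too large for such exhaustive scanning, which is precisely why dedicated optimisation and sampling algorithms are required.\n\n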
Conversely, the Bayesian posterior probability distribution for the parameters $\\boldsymbol{\\theta}$ is given by Bayes' Theorem as the integral of the likelihood over $\\boldsymbol\\alpha$, weighted by one's prior belief $\\pi(\\boldsymbol{\\theta},\\boldsymbol{\\alpha})$ as to the plausibility of different values of $\\boldsymbol{\\theta}$ and $\\boldsymbol{\\alpha}$:\n\\begin{equation}\n\\mathcal{P}(\\boldsymbol\\theta) = \\int \\mathcal{P}(\\boldsymbol\\theta,\\boldsymbol{\\alpha})\\,\\mathrm{d}\\boldsymbol{\\alpha} = \\frac{1}{\\mathbb{Z}}\\int \\mathcal{L}(\\boldsymbol\\theta,\\boldsymbol{\\alpha})\\pi(\\boldsymbol\\theta,\\boldsymbol{\\alpha})\\,\\mathrm{d}\\boldsymbol{\\alpha}.\n\\end{equation}\nHere $\\mathbb{Z} \\equiv \\int \\mathcal{L}(\\boldsymbol\\theta,\\boldsymbol{\\alpha})\\pi(\\boldsymbol\\theta,\\boldsymbol{\\alpha})\\,\\mathrm{d}\\boldsymbol{\\alpha}\\,\\mathrm{d}\\boldsymbol{\\theta}$ is a normalisation factor referred to as the model evidence; taking ratios of evidences of different models is the most common method of Bayesian model comparison. In contrast, frequentist model testing typically involves building up the distribution of the likelihood or other test statistic by simulation, in order to determine the precise probability of obtaining the observed (or worse-fitting) data if the model is assumed to be correct.\n\nThe choice of Bayesian posterior or profile likelihood has strong implications for the required design of the algorithm with which to sample the model parameters: efficiently obtaining converged estimates of profile likelihood and posterior distributions requires drastically different sampling distributions. In neither case is random sampling at all sufficient or correct, whether for accurate estimation of statistical properties or for making statements about what is `typical' or `normal' within the parameter space of a given theory. Efficient profile likelihood evaluation requires fast location of and convergence towards the maximum likelihood, whereas efficient posterior and evidence evaluation requires samples obtained with a density approximately proportional to the value of the posterior itself (as indeed is the case in most other numerical integration problems).\n\nSampling and statistical considerations in \\textsf{GAMBIT}\\xspace are handled mostly by the \\textsf{ScannerBit}\\xspace module \\cite{ScannerBit}. It contains all the tools necessary to transform the likelihood function provided by the dependency resolver into converged profile likelihoods and Bayesian posteriors. It also facilitates Bayesian model comparison by calculating evidences (see e.g.\\ Refs.\\ \\cite{HP,Axions} for recent examples), and frequentist model testing by providing information that can be used to perform statistical simulations (see Ref.\\ \\cite{EWMSSM} for a detailed example).\n\nFor Bayesian analyses, \\textsf{ScannerBit}\\xspace provides a series of different prior transformations, allowing the user to choose what assumptions to make about the probabilities of different model parameter values at the beginning of a run, and to sample accordingly.\n\n\\textsf{ScannerBit}\\xspace contains interfaces to various built-in and external implementations of a number of leading sampling algorithms. These include algorithms optimised for profile likelihood evaluation, and algorithms optimised for posterior and evidence calculations. 
Amongst these are \\twalk \\cite{ScannerBit}, a built-in ensemble Markov Chain Monte Carlo (MCMC) well suited to posterior evaluation, \\great \\cite{great}, a regular MCMC, \\diver \\cite{ScannerBit}, a differential evolution optimiser able to efficiently map profile likelihoods, and \\multinest \\cite{Feroz:2007kg,Feroz:2008xx} and \\textsf{polychord} \\cite{Handley:2015}, nested samplers well suited to evidence and posterior evaluation. Detailed performance comparisons between the different samplers can be found in Ref.\\ \\cite{ScannerBit}.\n\nFor consistency and the convenience of module function writers, \\textsf{GAMBIT}\\xspace also provides a series of relatively simple pre-profiled and pre-marginalised likelihood functions \\cite{gambit}. These functions provide likelihoods where the influence of one or more Gaussian or log-normally distributed nuisance parameters is profiled or integrated out without the assistance of explicit sampling by \\textsf{ScannerBit}\\xspace.\n\n\\subsection{Physics modules}\n\\label{sec:modules}\nThe physics content of \\textsf{GAMBIT}\\xspace currently resides in seven modules, which contain the module and backend functions relevant for all necessary theoretical calculations, simulations of particle astrophysics experiments and likelihood calculations. Future \\textsf{GAMBIT}\\xspace updates will both refine the code in each module, and add new modules for new branches of physics (such as the forthcoming \\textsf{CosmoBit} module).\n\n\\subsubsection{SpecBit}\n\nBSM physics theories necessarily introduce new particles. The first step in evaluating the likelihood of any parameter combination in a new theory is typically to calculate the masses and decay branching fractions of the new particles. These calculations get very complicated once one moves beyond tree level, as loop corrections can involve any number of new states in the theory, and loop corrections that shift the masses and decays of the existing SM particles must also be taken into account.\n\nParticle mass and coupling calculations are handled in the \\textsf{SpecBit}\\xspace module, which includes module functions for obtaining the pole masses and mixings of all new physical states in a model, scheme-dependent quantities such as those defined in the \\DR and $\\overline{MS}$\\xspace schemes, and SM masses and couplings (e.g. couplings of the SM-like Higgs). Generally, this information is obtained by running an appropriate spectrum generator but, in the simple case that the pole masses of a model are specified as input parameters, \\textsf{SpecBit}\\xspace simply formats the information to match that expected from a spectrum generator. In any case, it is important to realise that a spectrum cannot be stored simply as a set of numbers, since different experimental likelihoods may require predictions of running particle properties at different physical scales. Thus, \\textsf{SpecBit}\\xspace facilitates the passing of a spectrum object to module functions that contains knowledge of the renormalisation group equations of a model, allowing module functions in other parts of the \\textsf{GAMBIT}\\xspace code to locally run the \\DR or $\\overline{MS}$\\xspace parameters to the appropriate scale. Although \\textsf{SpecBit}\\xspace can be extended to include any model, its development has tended to proceed through updates that add functionality for the specific models targeted in \\textsf{GAMBIT}\\xspace physics papers. 
To date, this includes functions for GUT-\\cite{CMSSM} and weak-scale \\cite{MSSM,EWMSSM} parameterisations of the MSSM, singlet DM models with either a scalar, fermion or vector DM candidate \\cite{SSDM,SSDM2,HP}, minimal electroweak triplet and quintuplet DM \\cite{Piteration,McKay2}, and a low-energy object that holds SM-like particle information. A range of backends is used to supply the \\textsf{SpecBit}\\xspace calculations, including \\SPheno \\cite{Porod:2003um,Porod:2011nf} and \\FlexibleSUSY \\cite{Athron:2014yba} for BSM mass spectrum calculations. The latter is typically used for all spectrum generation requirements outside of the MSSM, including for the scalar singlet model examples described in this review. The Higgs and $W$ masses can also be calculated via the \\FeynHiggs \\cite{Heinemeyer:1998yj,Heinemeyer:1998np,Degrassi:2002fi,Frank:2006yh,Hahn:2013ria,Bahl:2016brp} backend.\n\n\\subsubsection{DecayBit}\n\nParticle decay calculations are handled by the \\textsf{DecayBit}\\xspace module, after accepting the masses and couplings of particles from \\textsf{SpecBit}\\xspace. These are used to calculate decay widths and branching fractions for each particle, which are stored in a single decay table entry for each particle. The collection of entries is then gathered into a full \\gambit decay table, which is passed on to other \\gambit modules.\n\n\\textsf{DecayBit}\\xspace includes known SM particle decays, modifications of SM particle decays through new physics effects, and the decays of BSM particles. For the SM, \\textsf{DecayBit}\\xspace contains the Particle Data Group results for the total widths for the $W$, $Z$, $t$, $b$, $\\tau$ and $\\mu$ (plus antiparticles), and for the most common mesons $\\pi^0$, $\\pi^\\pm$, $\\eta$, $\\rho^0, \\rho^\\pm$ and $\\omega$~\\cite{PDB}. In addition, partial widths to all distinct final states are provided for $W$, $Z$, $t$, $b$, $\\tau$, $\\mu$, $\\pi^0$ and $\\pi^\\pm$. These ``pure SM'' decays are used in \\textsf{GAMBIT}\\xspace whenever an SM decay acquires no BSM contribution in a model, or when the only effect of the BSM physics is to introduce a new decay channel, in which case the pure SM decays can be appended to the new list of decay channels. For the pure SM Higgs boson, the user can decide whether to calculate the partial and total decay widths at the predicted value of the Higgs mass with \\textsf{FeynHiggs}\\xspace, or to use pre-computed tables provided in \\textsf{DecayBit}\\xspace, sourced from Ref.\\ \\cite{YellowBook13}.\n\nBSM decays are handled on a model-by-model basis. For Higgs portal DM models, \\textsf{DecayBit}\\xspace contains analytic expressions for the partial width for a Higgs decay to two DM particles, and this decay is added to the list of SM Higgs partial widths, before rescaling the decay branching fractions and the total width. For MSSM variants, \\textsf{DecayBit}\\xspace calculates both the decays of all sparticles and additional Higgs bosons, and the SUSY corrections to the decays of the SM-like Higgs boson and the top quark. Higgs decay results may be sourced from either \\HDECAY via \\SUSYHIT, or \\FeynHiggs, whilst top quark decays are only available via \\FeynHiggs. Sparticle decays are obtained from \\SDECAY via \\SUSYHIT, but we note that a patch to the code is required to allow \\gambit to call functions from a shared library, and to solve problems with negative decay widths for some models due to large and negative 1-loop QCD corrections. 
Full details are given in~\\cite{SDPBit}; the patch is applied automatically when \\SUSYHIT is retrieved and built from within \\textsf{GAMBIT}\\xspace.\n\nA recent update of \\textsf{DecayBit}\\xspace has seen the addition of routines for observables relating to right-handed neutrino studies. This includes the invisible decay width of the $Z$ boson $\\Gamma_{\\rm{inv}}$, and the leptonic $W$ boson decay widths $\\Gamma_{W\\to e \\bar\\nu_e}$, $\\Gamma_{W\\to \\mu \\bar\\nu_\\mu}$ and $\\Gamma_{W\\to \\tau \\bar\\nu_\\tau}$. Measurements and uncertainties are taken from Ref.~\\cite{PDG17}, whilst theoretical results are taken from Refs.~\\cite{Drewes:2015iva,Antusch:2014woa,Antusch:2015mia,Ferroglia:2012ir,Antusch:2015mia,Fernandez-Martinez:2015hxa,Abada:2013aba,Dubovyk:2018rlg,Antusch:2014woa}.\n\n\\subsubsection{PrecisionBit}\n\nSome of the most severe constraints on BSM physics scenarios come from precision measurements of the electroweak sector, and other SM quantities. In \\textsf{GAMBIT}\\xspace, these are handled by the \\textsf{PrecisionBit}\\xspace module, which provides nuisance likelihoods for SM quantities such as the top quark mass and strong coupling constant, which have been measured with high precision. A related function is the calculation of the BSM corrections to SM observables such as the mass of the $W$ boson and the weak mixing angle, and the provision of likelihood functions that compare these predictions with experimental data.\n\nA schematic representation of \\textsf{PrecisionBit}\\xspace is shown in Fig.\\ \\ref{fig:precisionbit}, providing our first interesting example of the interaction between \\textsf{GAMBIT}\\xspace modules. Standard Model parameters that do not require BSM correction calculations are provided directly by the \\textsf{GAMBIT}\\xspace SM model, and are used in the calculation of SM nuisance likelihoods. The BSM parameters, meanwhile, are first used by \\textsf{SpecBit}\\xspace in the calculation of particle masses, couplings and precision Higgs properties. \\textsf{PrecisionBit}\\xspace then updates the results to form a precision-updated spectrum (including a dedicated calculation of the $W$ mass) which is used for calculating Higgs and $W$ mass likelihoods, in addition to a suite of electroweak precision observables (EWPO).\n\nLikelihoods exist for the Fermi coupling constant ($G_F$), the fine-structure constant ($\\alpha_{\\mathrm{em}}$), the $\\overline{MS}$\\xspace light quark ($u$,$d$,$s$) masses at $\\mu=2$~GeV, the charm ($m_c(m_c)$) and bottom ($m_b(m_b)$) masses, and the $W$, $Z$ and Higgs boson masses. There are also calculations and likelihoods for other precision observables such as the anomalous magnetic moment of the muon $a_\\mu=\\frac12(g-2)_\\mu$, the effective leptonic weak mixing angle sin$^2\\theta_{W,eff}$, and the departure from 1 of the ratio of the Fermi constants implied by the neutral and weak currents $\\Delta\\rho$. Note that, for the full suite of observables, calculations are currently only included for the MSSM; calculations for other models will be added as the corresponding models are implemented in \\textsf{GAMBIT}\\xspace.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.59\\textwidth]{figures\/PrecisionBit}\n\\caption{Schematic representation of the structure of \\textsf{PrecisionBit}\\xspace. 
From \\cite{SDPBit}.}\n\\label{fig:precisionbit}\n\\end{figure}\n\nLike \\textsf{DecayBit}\\xspace, \\textsf{PrecisionBit}\\xspace has also recently been updated to include observables for right-handed neutrino studies. These consist of right-handed neutrino contributions to $m_W$ and sin$^2\\theta_{W,eff}$.\n\n\\subsubsection{DarkBit}\nBSM physics models that include particle DM candidates can potentially give rise to observable consequences in a wide range of astrophysical DM experiments.\n\nIn gamma-ray indirect detection, the \\textsf{DarkBit}\\xspace module contains a dedicated signal yield calculator, along with an interface to \\gamlike, a likelihood calculator for current and future gamma-ray experiments. This combination can cope with signatures that result from an arbitrary mixture of final states, which significantly extends previous tools.\n\nFurther indirect detection constraints come from an interface to the \\nulike neutrino telescope likelihood package~\\cite{IC79_SUSY}.\n\nDirect DM search experiments are handled by the dedicated \\ddcalc package, which can be extended to include the effects of generic interactions between Weakly Interacting Massive Particles (WIMPs) and nucleons, as parameterised through effective operators. This includes both spin-dependent and spin-independent scattering. The package models a wide range of direct search experiments including Xenon100, SuperCDMS, SIMPLE, LUX, PandaX, PICO-60 and PICO-2L.\n\nFinally, the relic density of dark matter can be computed via interfaces to \\textsf{DarkSUSY}\\xspace and \\textsf{micrOMEGAs}\\xspace~\\cite{darksusy,micromegas_nu}, and used to constrain models by computing a likelihood based on the value observed by \\textit{Planck} \\cite{Planck18cosmo}.\n\nThe basic structure of \\textsf{DarkBit}\\xspace applicable to WIMP theories is sketched in Fig.\\ \\ref{fig:flow}, providing a good example of \\textsf{GAMBIT}\\xspace's modular design principle. None of the likelihoods requires knowledge of the BSM physics parameters, instead only requiring knowledge of derived quantities that can be shared between likelihood calculations. The first step in \\textsf{DarkBit}\\xspace is to create a Process Catalogue containing information on particle annihilation processes, using the particle masses and couplings provided by \\textsf{SpecBit}\\xspace. For indirect detection calculations, this is used to create the gamma ray or neutrino spectrum of the annihilation products, via a weighted sum of individual contributions. For long decay chains, a native cascade decay Monte Carlo generator is used. This final annihilation spectrum is then passed to the likelihood calculators for gamma ray and neutrino telescope experiments. The Process Catalogue is also used to provide the effective annihilation rate for relic density calculations, which is then passed to a Boltzmann solver, followed by the relic density likelihood calculator. For direct detection signatures, the model parameters are used to set the WIMP-nucleon couplings, which are then used in the calculation of the direct detection likelihood via the \\ddcalc package.\n\nA recent update of \\textsf{DarkBit}\\xspace has added various module functions required for the calculation of axion observables and likelihoods. The included observables are detailed in Section~\\ref{sec:axionL}.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.60\\linewidth]{figures\/DarkBit_flow}\n \\caption{Schematic overview of the \\textsf{DarkBit}\\xspace module. 
The two-letter insets indicate what backend codes can\n be used: \\textsf{DarkSUSY}\\xspace (DS), \\textsf{micrOMEGAs}\\xspace (MO), \\gamLike (GL), \\nulike (NL) and\n \\ddcalc (DC). From \\cite{DarkBit}.\n }\n \\label{fig:flow}\n\\end{figure*}\n\nAn important additional function of \\textsf{DarkBit}\\xspace is to constrain nuisance parameters for various astrophysical unknowns that strongly affect direct and indirect searches for DM. \\textsf{DarkBit}\\xspace contains likelihoods for the parameters of the local DM spatial and velocity distributions, plus the nuclear matrix elements that enter direct search WIMP-nucleon scattering calculations.\n\n\\subsubsection{FlavBit}\n\nA very powerful indirect probe of BSM physics comes from the measurement of flavour physics processes, as theoretical predictions for these observables would be shifted by loop corrections from new particles. The excellent precision of flavour physics measurements allows them to be sensitive to much higher energy scales than direct searches for new particles. Indeed, recent measurements from the LHCb experiment~\\cite{Aaij:2016flj,Aaij:2015esa,Aaij:2015yra,Aaij:2014ora,Aaij:2013qta,Aaij:2019wad,Aaij:2014pli} and from $B$ factories~\\cite{Lees:2012xj,Lees:2013uzd,Huschle:2015rga,Abdesselam:2016cgx,Abdesselam:2016llu,Aubert:2006vb,Lees:2015ymt,Wei:2009zv,Wehle:2016yoi} show tensions with the SM that are generating a considerable amount of theoretical interest.\n\n\\textsf{FlavBit}\\xspace implements flavour physics constraints from rare decay observables using the effective Hamiltonian approach, in which the cross-sections for transitions from initial states $i$ to final states $f$ are proportional to the squared matrix elements $|\\langle f |{\\cal H}_{\\rm eff}|i\\rangle|^2$. For example, an effective Hamiltonian for $b \\rightarrow s$ transitions is given by\n\\begin{equation}\n\\mathcal{H}_{\\rm eff} = -\\frac{4G_{F}}{\\sqrt{2}} V_{tb} V_{ts}^{*} \\sum_{i=1}^{10} \\Bigl(C_{i}(\\mu) \\mathcal{O}_i(\\mu)+C'_{i}(\\mu) \\mathcal{O}'_i(\\mu)\\Bigr)\\;.\n\\end{equation}\nThe local operators $\\mathcal{O}_i$ represent long-distance interactions. The Wilson coefficients $C_i$ can be calculated using perturbative methods, by requiring matching between the high-scale theory and the low-energy effective theory, at some scale $\\mu_W$ which is of the order of $m_W$. The Wilson coefficients can then be evolved to the characteristic scale for $B$ physics calculations ($\\mu_b$, of the order of the $b$ quark mass) using the renormalisation group equations of the \\emph{effective} field theory. A similar approach can be taken to $b\\rightarrow d$ transitions, using a different basis of low-energy operators. 
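\n\nFor orientation, at leading order in QCD and neglecting threshold effects, this running takes the standard schematic form\n\\begin{equation}\n\\mu\\frac{\\mathrm{d}}{\\mathrm{d}\\mu} C_i(\\mu) = \\frac{\\alpha_s(\\mu)}{4\\pi}\\,\\gamma^{(0)}_{ji} C_j(\\mu), \\qquad\nC_i(\\mu_b) = \\left[\\left(\\frac{\\alpha_s(\\mu_W)}{\\alpha_s(\\mu_b)}\\right)^{\\gamma^{(0)T}/(2\\beta_0)}\\right]_{ij} C_j(\\mu_W),\n\\end{equation}\nwhere $\\gamma^{(0)}$ is the leading-order anomalous dimension matrix of the operators and $\\beta_0$ is the one-loop coefficient of the QCD $\\beta$ function; the calculations used in practice include higher-order corrections.\n\n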
The original list of observables in \\textsf{FlavBit}\\xspace was divided into four categories:\n\n\\begin{itemize}\n\\item {\\bf Tree-level leptonic and semi-leptonic decays: }includes decays of $B$ and $D$ mesons to leptons, such as $B^\\pm \\to \\tau \\nu_\\tau$, $B \\to D^{(*)} \\tau \\nu_\\tau$ and $B \\to D^{(*)} \\ell \\nu_\\ell$.\n\\item {\\bf Electroweak penguin transitions: }includes measurements of rare decays of the form $B \\to M \\ell^+\\ell^-$ (where $M$ is a meson with a smaller mass than the parent meson), such as angular observables of the decay $B^0 \\to K^{*0} \\mu^+\\mu^-$.\n\\item {\\bf Rare purely leptonic decays: }includes $B$ decays with only leptons in the final state, such as $B^0_{(s)} \\to \\mu^+ \\mu^-$.\n\\item {\\bf Other observables: }includes $b\\to s$ transitions in the radiative decays $B \\to X_s \\gamma$, the mass difference ($\\Delta M_s$) between the heavy $B_H$ and light $B_L$ eigenstates of the $B^0_s$ system, and kaon and pion decays (e.g. the leptonic decay ratio ${\\cal B}(K^\\pm\\to \\mu \\nu_\\mu)\/{\\cal B}(\\pi^\\pm\\to \\mu \\nu_\\mu)$).\n\\end{itemize}\n\nTheoretical calculations for these processes are handled via an interface to \\superiso~\\cite{Mahmoudi:2007vz,Mahmoudi:2008tp,Mahmoudi:2009zz}. Experimental results used in the calculation of likelihoods come from a variety of sources, including the PDG, the BaBar and Belle experiments, the HFAG collaboration and the LHCb experiment. Full details are given in~\\cite{FlavBit}.\n\nMore recently, \\textsf{FlavBit}\\xspace has been updated with observables relevant to right-handed neutrinos. These include:\n\\begin{itemize}\n\\item {\\bf Lepton-flavour violating (LFV) muon and tau decay searches} performed by the MEG, BaBar, Belle, ATLAS, LHCb and SINDRUM collaborations~\\cite{TheMEG:2016wtm,Aubert:2009ag,Hayasaka:2007vc,Lees:2010ez,Hayasaka:2010np,Aad:2016wce,Aaij:2014azz,Bellgardt:1987du}. LFV processes can also result in a neutrinoless $\\mu - e$ conversion inside a nucleus, and these are included in the form of three results using Ti, Pb and Au nuclei obtained by the SINDRUM II experiment~\\cite{Kaulard:1998rb,Honecker:1996zf,Bertl:2006up}.\n\\item {\\bf Tests of lepton universality violation} in the semileptonic decays of $B$ mesons $B^{0\/\\pm} \\to X^{0\/\\pm} l^+ l^-$, as performed by LHCb \\cite{Aaij:2014ora,Aaij:2017vbb}.\n\\end{itemize}\n\nA forthcoming major update to the \\textsf{FlavBit}\\xspace module will add an interface for \\superisofour, with added support for theory uncertainty covariance matrices. The experimental likelihoods will also receive an update, via a new interface to the \\heplike package \\cite{HEPLike}.\n\n\\begin{figure}[tp]\n\\centering\n\\includegraphics[width=0.60\\linewidth]{figures\/colliderbit-flow}\n\\caption{Schematic diagram of the \\textsf{ColliderBit}\\xspace processing chain for LHC likelihoods. From \\cite{ColliderBit}.}\n\\label{fig:lhcchain}\n\\end{figure}\n\n\\subsubsection{ColliderBit}\n\\label{sec:colliderbit}\nA leading source of constraints on BSM physics models comes from high-energy collider searches for new particles, plus the relatively recent measurements of the Higgs boson mass and decay branching fractions. The \\textsf{ColliderBit}\\xspace module includes the most comprehensive list of recent LHC particle searches of any public package, alongside a new interpolation of LEP results for supersymmetric particle searches. 
Higgs signal strength and mass measurements (including limits on possible signatures arising from new Higgs bosons) are handled via an interface to the \\textsf{HiggsSignals}\\xspace~\\cite{HiggsSignals} and \\textsf{HiggsBounds}\\xspace~\\cite{Bechtle:2008jh,Bechtle:2011sb} packages, which includes data from LEP, the Tevatron and the LHC.\n\nLHC constraints are particularly difficult to model rigorously for general models. Searches for new particles are often optimised on, and interpreted in terms of, so-called ``simplified models'', which feature only a few options from the much broader phenomenology of the parent model. For example, searches for supersymmetric particles might assume that only a particular pair of sparticles is ever produced, with decays fixed to a particular final state. The resulting exclusion limit will never apply directly to a more general model, although one can obtain approximate limits by scaling individual simplified model limits by the known cross-sections and branching ratios for each parameter point~~\\cite{Kraml:2013mwa,Papucci:2014rja}.\n\nIn \\textsf{ColliderBit}\\xspace, we provide more rigorous limits by performing an actual reproduction of the ATLAS and CMS limit-setting procedures, as shown in Fig.\\ \\ref{fig:lhcchain}. This includes a cross-section calculation for new particle production processes, followed by Monte Carlo simulation of LHC events for each parameter point using a custom parallelised version of the \\textsf{Pythia\\,8}\\xspace generator~\\cite{Sjostrand:2006za,Sjostrand:2014zea}. The results can either be fed at the truth level into code that reproduces the kinematic selections of a wide range of LHC analyses, or passed through a custom detector simulation based on four-vector smearing before analysis. Cross-sections are currently taken at leading order (plus leading log) from \\textsf{Pythia\\,8}\\xspace, but a forthcoming update will allow user-specified cross-sections. The final step of the process is to calculate a combined likelihood by either taking the signal region in a given final state for each experiment that is expected to have the highest sensitivity to the model in question, or by using a covariance matrix for analyses in cases where this is published by the relevant experimental collaboration. The list of \\textsf{ColliderBit}\\xspace analyses is continually updated, and currently includes a broad selection of searches for supersymmetric particles, plus monojet searches for DM particles.\n\n\\subsubsection{NeutrinoBit}\n\\label{sec:neutrinobit}\nThe \\textsf{NeutrinoBit}\\xspace module contains a variety of module functions for calculating observables and likelihoods in the neutrino sector, both for SM(-like) neutrinos and for right-handed neutrinos (RHNs). RHNs could cause observable consequences in a number of experiments, although it should be noted that the recent \\textsf{GAMBIT}\\xspace study focussed on models that are capable of explaining the light neutrino oscillation data, which excludes most sterile neutrino dark matter models. 
This is because long-lived RHNs would require very small couplings with SM matter, in which case their contribution to light neutrino mass generation is negligible.\n\n\\textsf{NeutrinoBit}\\xspace currently contains likelihoods dealing with the following classes of experimental data:\n\n\\begin{itemize}\n\\item \\textbf{Active neutrino mixing: }\\textsf{NeutrinoBit}\\xspace includes likelihoods for the 3-flavour SM-like active neutrino mixing observables $\\theta_{12}$, $\\theta_{13}$, $\\theta_{23}$ (mixing angles), $\\delta_{\\mathrm{CP}}$ (CP-phase) and $\\Delta m^2_{21}$ and $\\Delta m^2_{3\\ell}$ (mass splittings) with $\\ell = 1$ for normal mass ordering and $\\ell = 2$ for inverted mass ordering. The likelihoods use the one-dimensional $\\Delta\\chi^2$ tables provided by the NuFIT collaboration~\\cite{Esteban:2016qun,NuFit}. These in turn include results from the solar neutrino experiments Homestake (chlorine)~\\cite{Cleveland:1998nv}, Gallex\/GNO~\\cite{Kaether:2010ag}, SAGE~\\cite{Abdurashitov:2009tn}, SNO~\\cite{Aharmim:2011vm}, the four phases of Super-Kamiokande~\\cite{Hosaka:2005um,Cravens:2008aa,Abe:2010hy} and two phases of Borexino~\\cite{Bellini:2011rx,Bellini:2008mr,Bellini:2014uqa}. They also include results from the atmospheric experiments IceCube\/DeepCore~\\cite{Aartsen:2014yll}, the reactor experiments KamLAND~\\cite{Gando:2013nba}, Double-Chooz~\\cite{An:2016srz}, Daya-Bay~\\cite{An:2016ses} and Reno~\\cite{reno}, the accelerator experiments MINOS~\\cite{Adamson:2013whj,Adamson:2013ue}, T2K~\\cite{t2k} and NO$\\nu$A~\\cite{nova}, and the cosmic microwave background results from Planck~\\cite{Ade:2015xua}.\n\\item \\textbf{Lepton universality: }\\textsf{NeutrinoBit}\\xspace contains likelihoods for lepton universality violation in fully leptonic decays of charged mesons, $X^+ \\to l^+ \\nu$.\n\\item \\textbf{CKM unitarity: }The determination of the CKM matrix elements usually relies on the assumption that the active-sterile neutrino mixing matrix is zero. The presence of non-trivial mixing thus modifies the CKM matrix elements, and the experimentally-observed values can be used to simultaneously constrain the true CKM element values, and the active-sterile mixing matrix $\\Theta$. \\textsf{NeutrinoBit}\\xspace constructs a likelihood based on the deviations of the true values of $(V_{CKM})_{us}$ and $(V_{CKM})_{ud}$ from their experimentally-measured values.\n\\item \\textbf{Neutrinoless double-beta decay: }In a double-beta decay process, two neutrons decay into two protons, with the emission of two electrons and two anti-neutrinos. Majorana neutrinos would give rise to lepton number violation, resulting in neutrinoless double-beta decay ($0\\nu\\beta\\beta$). In addition, the exchange of RHNs can modify the effective neutrino mass $m_{\\beta\\beta}$, which is constrained by half-life measurements of $0\\nu\\beta\\beta$ decay. The best upper limits currently come from the GERDA experiment (Germanium)~\\cite{Agostini:2017iyd} with $m_{\\beta\\beta}<0.15-0.33\\;\\text{eV}$ (90\\% CL), and KamLAND-Zen (Xenon)~\\cite{KamLAND-Zen:2016pfg}, $m_{\\beta\\beta}<0.061-0.165\\;\\text{eV}$ (90\\% CL). 
\\textsf{NeutrinoBit}\\xspace uses these values to define one-sided Gaussian likelihoods, with theoretical calculations for RHN models taken from Refs.~\\cite{Drewes:2016lqo,Faessler:2014kka}\n\\item \\textbf{Big Bang Nucleosynthesis: }RHNs can affect the abundances of the primordial elements if they decay shortly before BBN, as the typical energy of the decay products is significantly higher than the plasma energy at that time. This can lead to the dissociation of formed nuclei, or the creation of deviations from thermal equilibrium. The requirement that RHNs decay before BBN implies an upper limit on their lifetime which, in turn, results in a constraint on the total mixing with the active neutrinos. \\textsf{NeutrinoBit}\\xspace currently includes a basic BBN likelihood that uses decay expressions from Refs.~\\cite{Gorbunov:2007ak,Canetti:2012kh}, and requires the lifetime of each RHN to be less than 0.1s~\\cite{Ruchayskiy:2012si}. A more comprehensive update will be released in future, associated with the new \\textsf{CosmoBit} module.\n\\item \\textbf{Direct RHN searches: }Direct searches for RHNs can be performed by looking for peaks in the lepton energy spectrum of a meson decay, looking for evidence of production in beam dump experiments, and by studying the decay of vector bosons or mesons in $e^+e^-$ or $pp$ colliders. \\textsf{NeutrinoBit}\\xspace contains likelihoods for RHN searches at the PIENU~\\cite{PIENU:2011aa}, PS-191~\\cite{Bernardi:1987ek}, CHARM~\\cite{Bergsma:1985is}, E949~\\cite{Shaykhiev:2011zz,Artamonov:2014urb}, NuTeV~\\cite{Vaitaitis:1999wq}, DELPHI~\\cite{Abreu:1997uq}, ATLAS~\\cite{Aad:2015xaa} and CMS~\\cite{Sirunyan:2018mtv} experiments.\n\\end{itemize}\n\n\\section{Applications to new physics}\n\\label{sec:physics}\n\\subsection{Supersymmetry}\n\\label{sec:SUSY}\n\nSupersymmetry (SUSY) has long been one of the leading candidates for BSM physics, owing to its potential for simultaneously answering several of the questions left open by the SM. In particular, the hierarchy problem and the dark matter ``WIMP miracle'' suggest the possible existence of SUSY states around the weak scale.\n\nMost phenomenological explorations of SUSY take the MSSM as their starting point. On top of its minimal supersymmetrisation of the SM, the MSSM effectively parameterises our ignorance about the high-scale mechanism responsible for breaking SUSY. This is done by including in the Lagrangian all gauge-invariant and renormalisable terms that break SUSY ``softly'', that is, without re-introducing the quadratic Higgs mass divergences that gave rise to the hierarchy problem. In this way the MSSM provides a unified framework for exploring a wide range of possible manifestations of SUSY, but at the price of a vast parameter space: if no further assumptions are made the soft SUSY-breaking terms introduce more than one hundred free parameters.\n\nMany different assumptions have been employed in the literature to reduce this parametric freedom and improve predictability. The resulting models broadly fit in two categories.\n\nThe first category consists of high-scale models that take inspiration from the fact that SUSY can provide gauge coupling unification at some high Grand Unified Theory (GUT) scale, typically around $10^{16}$\\,GeV. In these models a small number of unified mass and coupling parameters are defined at the GUT scale and then run down to the electroweak scale where phenomenological predictions are calculated. 
Thus, the assumption of high-scale unification constrains the model to a low-dimensional subspace of the full MSSM space, effectively imposing a set of characteristic correlations among the many MSSM parameters at the weak scale.\n\nProbably the most studied SUSY model in this category is the Constrained MSSM (\\textsf{CMSSM}\\xspace)~\\cite{Nilles:1983ge}. Here the parameter space is reduced to only four continuous parameters and a sign choice: the unified soft-breaking scalar mass, $m_0$; the unified soft-breaking gaugino mass, $m_{1\/2}$; the unified trilinear coupling, $A_0$; the ratio of the vacuum expectation values for the two Higgs doublets, $\\tan\\beta\\equiv v_\\mathrm{u}\/v_\\mathrm{d}$; and the sign of the supersymmetric Higgsino mass parameter $\\mu$. The \\textsf{CMSSM}\\xspace has been studied in global fits for over a decade~\\cite{Han:2016gvr, Bechtle:2014yna, arXiv:1405.4289, arXiv:1402.5419, MastercodeCMSSM, arXiv:1312.5233, arXiv:1310.3045, arXiv:1309.6958, arXiv:1307.3383, arXiv:1304.5526, arXiv:1212.2886, Strege13, Gladyshev:2012xq, Kowalska:2012gs, Mastercode12b, arXiv:1207.1839, arXiv:1207.4846, Roszkowski12, SuperbayesHiggs, Fittino12, Mastercode12, arXiv:1111.6098, Fittino, Trotta08, Ruiz06, Allanach06, Fittino06, Baltz04, SFitter}, most recently in the \\textsf{GAMBIT}\\xspace analysis in~\\cite{CMSSM}.\n\n\nTwo much-studied generalisations of the \\textsf{CMSSM}\\xspace are the Non-Universal Higgs Mass models 1 and 2 (\\textsf{NUHM1}\\xspace and \\textsf{NUHM2}\\xspace)~\\cite{Matalliotakis:1994ft,Olechowski:1994gm,Berezinsky:1995cj,Drees:1996pk,Nath:1997qm}. These models loosen the tight link in the \\textsf{CMSSM}\\xspace between the Higgs sector and the sfermions by separating out the soft-breaking mass parameters of the Higgs sector from the common scalar mass parameter $m_0$. This is achieved by introducing either one (\\textsf{NUHM1}\\xspace) or two (\\textsf{NUHM2}\\xspace) additional parameters at the GUT scale. In recent years the \\textsf{NUHM1}\\xspace and \\textsf{NUHM2}\\xspace have been studied in several global fit analyses~\\cite{MastercodeCMSSM, arXiv:1312.5233, Strege13, Fittino12, Mastercode15, Buchmueller:2014yva, arXiv:1405.4289}. \\textsf{GAMBIT}\\xspace global fits of the \\textsf{NUHM1}\\xspace and \\textsf{NUHM2}\\xspace were performed along with the fit of the \\textsf{CMSSM}\\xspace in~\\cite{CMSSM}. In Section~\\ref{sec:GUT-SUSY} we summarise the \\textsf{GAMBIT}\\xspace results for these GUT-scale SUSY models.\n\nThe second category of MSSM sub-models are the weak-scale models. Here the focus is on exploring a broad range of weak-scale phenomenological scenarios in an economical manner, by varying only the MSSM parameters that most directly impact the observables under study. With all MSSM parameters defined near the weak scale, these models are mostly agnostic to questions concerning physics at very high scales, such as grand unification. The models are often labeled as \\textsf{MSSM}$n$ (or as \\textsf{pMSSM}$n$ for the \\textit{phenomenological} \\textsf{MSSM}$n$), with $n$ specifying the number of weak-scale MSSM parameters that are treated as free parameters.\n\nVarious such weak-scale models have been subjected to global fit analyses in the past few years~\\cite{arXiv:1608.02489, arXiv:1507.07008, Mastercode15, arXiv:1506.02499, arXiv:1504.03260, Mastercode17}. 
The \\textsf{GAMBIT}\\xspace analyses in this category are~\\cite{MSSM}, which looks at a seven-dimensional MSSM parameterisation (\\textsf{MSSM7}\\xspace), and~\\cite{EWMSSM}, in which the fast LHC simulation capabilities of \\textsf{ColliderBit}\\xspace are used for a collider-focused fit of the four-dimensional MSSM ``electroweakino'' (chargino and neutralino) sector (\\textsf{EWMSSM}\\xspace). We summarise the \\textsf{GAMBIT}\\xspace results for the \\textsf{MSSM7}\\xspace in Section~\\ref{sec:MSSM7} and for the \\textsf{EWMSSM}\\xspace in Section~\\ref{sec:EWMSSM}.\n\nThe phenomenological richness of the MSSM means that a wide range of experimental results are relevant for constraining the parameter space. The mass and signal strength measurements for the $125$\\,GeV Higgs boson and the measurement of the relic density of dark matter are of particular importance. We note that the impact of the relic density measurement depends strongly on whether the SUSY model is assumed to account for the full relic density, or, more conservatively, some arbitrary fraction of it. The \\textsf{GAMBIT}\\xspace studies reviewed here all take the latter approach.\n\nMeasurements of electroweak precision observables such as $m_W$ and the muon $g-2$, and of flavour observables like $BR(B \\rightarrow X_s \\gamma)$ and $BR(B_{(s)} \\rightarrow \\mu^+ \\mu^-)$, introduce further important requirements on the SUSY parameter space. Finally, the null results from direct and indirect dark matter searches, and from collider searches for sparticles and additional Higgs bosons, essentially rule out some parts of SUSY parameter space. However, determining the exact implications of such null-result collider searches is far from trivial, as will be illustrated by the \\textsf{EWMSSM}\\xspace results discussed in Section~\\ref{sec:EWMSSM}.\n\n\n\\subsubsection{Results for the \\textsf{CMSSM}\\xspace, \\textsf{NUHM1}\\xspace and \\textsf{NUHM2}\\xspace}\n\\label{sec:GUT-SUSY}\n\nThe \\textsf{GAMBIT}\\xspace global fits of the \\textsf{CMSSM}\\xspace, \\textsf{NUHM1}\\xspace and \\textsf{NUHM2}\\xspace in~\\cite{CMSSM} are interpreted in terms of frequentist profile likelihood maps, identifying the best-fit point and the $1\\sigma$ and $2\\sigma$ preferred regions relative to this point. As the results for \\textsf{NUHM2}\\xspace are qualitatively similar to those for \\textsf{NUHM1}\\xspace, we here focus on the \\textsf{CMSSM}\\xspace and \\textsf{NUHM1}\\xspace results.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.49\\textwidth]{figures\/susy\/CMSSM_2_3_like2D_post}\n \\includegraphics[width = 0.49\\textwidth]{figures\/susy\/CMSSM_2_3_ColourMechanism_post}\\\\\n \\includegraphics[width = 0.49\\textwidth]{figures\/susy\/NUHM1_2_3_like2D_post}\n \\includegraphics[width = 0.49\\textwidth]{figures\/susy\/NUHM1_2_3_ColourMechanism_post}\\\\\n \\includegraphics[height=3.1mm]{figures\/susy\/rdcolours4.pdf}\n \\caption{\n Profile likelihood in the $(m_0,m_{1\/2})$ plane in the \\textsf{CMSSM}\\xspace (\\textit{top left}) and the \\textsf{NUHM1}\\xspace (\\textit{bottom left}). The right-hand panels show the mechanisms that contribute to bringing the predicted DM relic density close to or below the observed value. The white contours show the $1\\sigma$ and $2\\sigma$ preferred regions relative to the best-fit point (white star).
From~\\cite{CMSSM}.\n }\n \\label{fig:CMSSM_NUHM1_m0_m12}\n\\end{figure}\n\nThe profile likelihood maps for the $(m_0,m_{1\/2})$ planes of the \\textsf{CMSSM}\\xspace and \\textsf{NUHM1}\\xspace are shown in the left panels of Fig.\\ \\ref{fig:CMSSM_NUHM1_m0_m12}. The \\textsf{NUHM1}\\xspace plane is clearly less constrained compared to the \\textsf{CMSSM}\\xspace. The underlying reason is the additional parametric freedom in the Higgs sector of the \\textsf{NUHM1}\\xspace, where the MSSM Higgs parameters $m_{H_u}^2$ and $m_{H_d}^2$ are not unified with $m_0^2$ at the GUT scale, but are rather set by an independent parameter $m_H$ through the GUT-scale requirement $m_{H_u} = m_{H_d} \\equiv m_H$. (In the \\textsf{NUHM2}\\xspace, $m_{H_u}$ and $m_{H_d}$ are taken as independent parameters at the GUT scale.) We note that as $m_H$ is taken to be a real parameter, we have $m_{H_u}^2 = m_{H_d}^2 > 0$ at the GUT scale. The correct shape of the Higgs potential at the weak scale must therefore be generated through radiative corrections, as is the case for the \\textsf{CMSSM}\\xspace.\n\nThe right-hand panels in Fig.\\ \\ref{fig:CMSSM_NUHM1_m0_m12} help us understand the preferred parameter space in more detail. In these panels different sub-regions of the $2\\sigma$ region are coloured according to which mechanism(s) contribute to keeping the DM relic density close to or below the observed value. The following criteria are used to define the DM mechanism regions in~\\cite{CMSSM} and in the \\textsf{MSSM7}\\xspace study in~\\cite{MSSM}:\n\\begin{itemize}\n\\item stop co-annihilation: $m_{\\tilde{t}_1} \\leq 1.2\\,m_{\\tilde{\\chi}^0_1}$,\n\\item sbottom co-annihilation: $m_{\\tilde{b}_1} \\leq 1.2\\,m_{\\tilde{\\chi}^0_1}$,\n\\item stau co-annihilation: $m_{\\tilde{\\tau}_1} \\leq 1.2\\,m_{\\tilde{\\chi}^0_1}$,\n\\item chargino co-annihilation: $\\tilde{\\chi}^0_1$ $\\ge50\\%$ Higgsino,\\footnote{For brevity we refer to this mechanism simply as ``chargino co-annihilation'', though it also includes co-annihilations with the next-to-lightest neutralino. Further, for many points in this region the most important effect is simply enhanced $\\tilde{\\chi}^0_1$--$\\tilde{\\chi}^0_1$ annihilations, owing to the dominantly Higgsino $\\tilde{\\chi}^0_1$ composition.}\n\\item $A\/H$ funnel: $1.6\\,m_{\\tilde{\\chi}^0_1} \\leq \\textrm{($m_A$ or $m_H$)} \\leq 2.4\\,m_{\\tilde{\\chi}^0_1}$,\n\\item $h\/Z$ funnel: $1.6\\,m_{\\tilde{\\chi}^0_1} \\leq \\textrm{($m_Z$ or $m_h$)} \\leq 2.4\\,m_{\\tilde{\\chi}^0_1}$.\n\\end{itemize}\nThe coloured regions overlap for parameter points where more than one mechanism contributes; a schematic version of this tagging is sketched below.\n\nIn the \\textsf{CMSSM}\\xspace the overall highest-likelihood point is found in the stop co-annihilation region, at $m_0 \\lesssim 4.5$\\,TeV. This region is associated with large, negative values for the trilinear coupling, $A_0 \\lesssim -5$\\,TeV, and $\\tan\\beta \\lesssim 16$. Only two other DM mechanisms are active within the best-fit parameter space, namely the $A\/H$ funnel and chargino co-annihilation. Thus, in contrast with earlier \\textsf{CMSSM}\\xspace fits, these results show that the stau co-annihilation region has fallen out of the preferred parameter space. This is mainly driven by the likelihood contribution from the LHC Higgs measurements, which penalise the lower-$m_0$ region where the lightest stau gets sufficiently close in mass to the lightest neutralino.
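\n\nTo make these mechanism labels concrete, the snippet below sketches how the tagging criteria listed above could be applied to a single spectrum. It is purely illustrative: the input names are hypothetical (masses in GeV, \\texttt{higgsino\\_frac} denoting the Higgsino fraction of $\\tilde{\\chi}^0_1$), and this is not the implementation used in~\\cite{CMSSM} or~\\cite{MSSM}.\n\\begin{verbatim}\ndef dm_mechanism_tags(spec):\n    # spec: dict of illustrative spectrum inputs; masses in GeV.\n    m = spec["m_chi10"]            # lightest neutralino mass\n    tags = set()\n    if spec["m_stop1"] <= 1.2 * m:\n        tags.add("stop co-annihilation")\n    if spec["m_sbottom1"] <= 1.2 * m:\n        tags.add("sbottom co-annihilation")\n    if spec["m_stau1"] <= 1.2 * m:\n        tags.add("stau co-annihilation")\n    if spec["higgsino_frac"] >= 0.5:\n        tags.add("chargino co-annihilation")\n    if any(1.6 * m <= mx <= 2.4 * m for mx in (spec["m_A"], spec["m_H"])):\n        tags.add("A\/H funnel")\n    if any(1.6 * m <= mx <= 2.4 * m for mx in (91.19, 125.09)):  # m_Z, m_h\n        tags.add("h\/Z funnel")\n    return tags   # several tags may apply to the same point\n\\end{verbatim}\n\n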
As discussed above, the link between $m_0$ and the Higgs sector is relaxed in the \\textsf{NUHM1}\\xspace. This opens up the parameter space at lower $m_0$, allowing the stau co-annihilation region back within the $2\\sigma$ preferred region, as seen in the lower right panel of Fig.\\ \\ref{fig:CMSSM_NUHM1_m0_m12}. A second consequence of $m_0$ being decoupled from the Higgs sector in the \\textsf{NUHM1}\\xspace is that the allowed chargino co-annihilation region is extended to arbitrarily small $m_0$ values, in contrast to the \\textsf{CMSSM}\\xspace. We can understand this by investigating the \\textsf{CMSSM}\\xspace case: the chargino co-annihilation DM mechanism is important when the MSSM Higgsino mass parameter $\\mu$ is smaller than the bino mass parameter $M_1$ at the weak scale, as in that case the $\\tilde{\\chi}^0_1$ will be the lightest state in a triplet of near mass-degenerate Higgsinos (two neutralinos and one chargino).\\footnote{In the general MSSM it is also possible to have chargino co-annihilation between a pair of wino-dominated $\\tilde{\\chi}^0_1$ and $\\tilde{\\chi}^\\pm_1$, when $|M_2| < |M_1|, |\\mu|$. However, this mechanism is not available in the models discussed here, as the GUT-scale relation $M_1 = M_2 = M_3 \\equiv m_{1\/2}$ leads to $M_2 \\sim 2 M_1$ at the weak scale.}\nIn the \\textsf{CMSSM}\\xspace, the MSSM Higgsino mass parameter $\\mu$ is strongly linked to $m_0$ via the conditions for EWSB; reducing $m_0$ effectively increases $\\mu$. The bino mass parameter $M_1$ is on the other hand controlled by $m_{1\/2}$ via the GUT-scale relation $M_1 = M_2 = M_3 \\equiv m_{1\/2}$. For a fixed value of $m_{1\/2}$, lowering $m_0$ therefore eventually leads to $M_1 \\ll |\\mu|$, resulting in a bino-dominated $\\tilde{\\chi}^0_1$ significantly lower in mass than the Higgsino-dominated neutralinos\/chargino. In the \\textsf{NUHM1}\\xspace, on the other hand, the $\\mu$ parameter is mostly controlled by $m_H$. This allows for $|\\mu| < M_1$, and thus chargino co-annihilation, also in the low-$m_0$ region.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width = 0.49\\textwidth]{figures\/susy\/CMSSM_105_201_like2D_post}\n \\includegraphics[width = 0.49\\textwidth]{figures\/susy\/CMSSM_105_201_ColourMechanism_post}\\\\\n \\includegraphics[height=3.1mm]{figures\/susy\/rdcolours3.pdf}\n \\caption{\n Profile likelihood in the $(m_{\\tilde{\\chi}^0_1},\\Omega_\\chi h^2)$ plane of the \\textsf{CMSSM}\\xspace (\\textit{left}), and the mechanisms that bring the predicted relic density close to or below the measured value (\\textit{right}). The stars show the best-fit points, while the white contours outline the $1\\sigma$ and $2\\sigma$ regions. From~\\cite{CMSSM}.\n }\n \\label{fig:CMSSM_mN1_oh2}\n\\end{figure}\n\nAs mentioned above, in these fits the observed DM relic density is only imposed as an upper bound, to leave open the possibility of non-MSSM contributions to the observed DM density. While this choice broadens the allowed parameter space, it is worth noting that the parameter regions that fully explain the relic density can have likelihoods just as high as those with a lower predicted relic density. This can be seen in the left panel of Fig.~\\ref{fig:CMSSM_mN1_oh2}, which shows the profile likelihood in the \\textsf{CMSSM}\\xspace plane of the neutralino mass $m_{\\tilde{\\chi}^0_1}$ and the predicted relic density $\\Omega_{\\chi} h^2$.
For most $m_{\\tilde{\\chi}^0_1}$ values there is little variation in the profile likelihood when moving up to a point where the prediction saturates the observed value (dashed purple line).\n\nThe right-hand panel in Fig.~\\ref{fig:CMSSM_mN1_oh2} shows that, in the \\textsf{CMSSM}\\xspace, the lowest predicted neutralino masses are found within the stop and chargino co-annihilation regions, extending down to $m_{\\tilde{\\chi}^0_1} \\sim 250$\\,GeV. In the \\textsf{NUHM1}\\xspace and \\textsf{NUHM2}\\xspace, the chargino co-annihilation and stau co-annihilation regions extend further down, to $m_{\\tilde{\\chi}^0_1} \\sim 150$\\,GeV. The chargino co-annihilation region in Fig.~\\ref{fig:CMSSM_mN1_oh2} also illustrates the well-known result that a dominantly Higgsino $\\tilde{\\chi}^0_1$ produces the entire observed relic density when $m_{\\tilde{\\chi}^0_1} \\sim 1$\\,TeV. Moving along the observed relic density line towards higher neutralino masses, additional contributions from resonant $A\/H$-funnel annihilations become more and more important.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/CMSSM_105_217_ColourMechanism_post_wExps.pdf}\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/NUHM1_105_217_ColourMechanism_post_wExps.pdf}\\\\\n \\includegraphics[height=3.1mm]{figures\/susy\/rdcolours4.pdf}\n \\caption{\n The $2\\sigma$ preferred regions in the plane of the spin-independent neutralino-proton cross-section versus the neutralino mass for the \\textsf{CMSSM}\\xspace (\\textit{left}) and the \\textsf{NUHM1}\\xspace (\\textit{right}), coloured according to the mechanism(s) that limit the predicted DM relic density. The pink lines show the observed 90\\% CL exclusion limit from LUX~\\cite{LUXrun2} and projected limits for XENON1T (two tonne-years of exposure), XENONnT\/LZ (20 tonne-years of exposure)~\\cite{XENONnTLZ} and DARWIN (200 tonne-years of exposure)~\\cite{DARWIN}. The $1\\sigma$ and $2\\sigma$ regions are shown as white contours; best-fit points are marked by stars. From~\\cite{CMSSM}.\n }\n \\label{fig:CMSSM_NUHM1_direct_detection}\n\\end{figure}\n\nDirect detection DM searches seem the most promising experimental probe for the SUSY scenarios preferred in these fits. In Fig.\\ \\ref{fig:CMSSM_NUHM1_direct_detection} the preferred \\textsf{CMSSM}\\xspace (left) and \\textsf{NUHM1}\\xspace regions are shown in the plane of the lightest neutralino mass versus the spin-independent neutralino-proton cross-section. The predicted cross-section is scaled by the fraction $f$ of the full DM relic density that the given parameter point attributes to neutralinos. The solid pink line shows the $90\\%$\\,CL exclusion limit from the LUX 2016 result~\\cite{LUXrun2}, which was included as a likelihood component in these fits. The dashed and dotted lines show projected $90\\%$\\,CL limits for the XENON and DARWIN experiments~\\cite{XENONnTLZ, DARWIN}. While the stop co-annihilation regions will largely remain out of reach, as will much of the stau co-annihilation region in the \\textsf{NUHM1}\\xspace, both the chargino co-annihilation and the $A\/H$ funnel regions can be fully probed in future direct detection searches.\n\nFinally, we note that the \\textsf{CMSSM}\\xspace, \\textsf{NUHM1}\\xspace and \\textsf{NUHM2}\\xspace fit results in~\\cite{CMSSM} indicate that these models no longer hold much promise for resolving the observed discrepancy in the muon anomalous magnetic moment.
The strong constraints on the low-mass parameter space -- in particular from LHC sparticle searches, DM direct detection and the LHC Higgs measurements -- push the fits towards heavier sfermion and electroweakino spectra, thus diminishing the possible SUSY contribution to the muon $(g-2)$.\n\n\n\n\\subsubsection{Results for the \\textsf{MSSM7}\\xspace}\n\\label{sec:MSSM7}\n\nWe now move on to the weak-scale parameterisations of the MSSM, starting with the \\textsf{GAMBIT}\\xspace analysis of the \\textsf{MSSM7}\\xspace in~\\cite{MSSM}. Here the free parameters are the wino mass parameter, $M_2$; the $(3,3)$ elements of the $\\mathbf{A}_u$ and $\\mathbf{A}_d$ MSSM trilinear coupling matrices, $(\\mathbf{A}_u)_{33} \\equiv A_{u_3}$ and $(\\mathbf{A}_d)_{33} \\equiv A_{d_3}$ (the other trilinear couplings are set to 0); the soft-breaking Higgs mass parameters, $m_{H_u}^2$ and $m_{H_d}^2$; a common parameter $m_{\\tilde{f}}^2$ for the sfermion soft-breaking mass parameters; and the ratio of the Higgs vacuum expectation values, $v_u\/v_d \\equiv \\tan\\beta$. All the parameters are defined at the scale $Q = 1$\\,TeV, except $\\tan\\beta$, which is defined at $Q = m_Z$.\n\nWhile this model is a weak-scale MSSM parameterisation, the GUT-inspired relation\n\\begin{align}\n\\frac{3}{5}\\cos^2\\theta_\\mathrm{W}M_1 = \\sin^2\\theta_\\mathrm{W}M_2 = \\frac{\\alpha}{\\alpha_\\mathrm{s}}M_3,\n\\label{eq:GUT_relation}\n\\end{align}\nis imposed to limit the dimensionality of the parameter space. Equation~\\ref{eq:GUT_relation} represents an expected weak-scale relation between $M_1$, $M_2$ and $M_3$ if they originate from a common GUT-scale parameter, like $m_{1\/2}$ in the \\textsf{CMSSM}\\xspace.\n\nAs in the GUT-scale models, the Higgsino mass parameter $\\mu$ is determined from the input parameters -- most importantly $m_{H_u}^2$ and $m_{H_d}^2$ -- and the requirements for EWSB. Since Eq.\\ \\ref{eq:GUT_relation} implies that $|M_1| < |M_2|$, we again have three \\textit{a priori} possibilities for the composition of the lightest neutralino: dominantly bino ($|M_1| < |\\mu|$), dominantly Higgsino ($|\\mu| < |M_1|$), or a bino-Higgsino mixture ($|M_1| \\sim |\\mu|$).\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSM7_111_112_like2D.pdf}\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSM7_111_112_ColourMechanism_post.pdf}\\\\\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSM7_105_201_like2D.pdf}\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSM7_105_201_ColourMechanism_post.pdf}\\\\\n \\includegraphics[height=3.1mm]{figures\/susy\/MSSM7_rdcolours5.pdf}\n \\caption{\n Profile likelihoods in the $(\\mu,M_1)$ plane (\\textit{top left}) and the $(m_{\\tilde{\\chi}^0_1},\\Omega_\\chi h^2)$ plane of the \\textsf{MSSM7}\\xspace. The right-hand panels show the $2\\sigma$ preferred parameter regions coloured according to which mechanism(s) contribute to limit the relic density. The stars mark the best-fit points, while the white contours show the $1\\sigma$ and $2\\sigma$ preferred regions. From~\\cite{MSSM}.\n }\n \\label{fig:MSSM7}\n\\end{figure}\n\nThe global fit analysis in~\\cite{MSSM} finds that all three of these neutralino scenarios are allowed within the $2\\sigma$ preferred parameter space of the \\textsf{MSSM7}\\xspace.
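\n\nFor orientation, it can help to evaluate Eq.~\\ref{eq:GUT_relation} numerically. With indicative weak-scale inputs, $\\sin^2\\theta_\\mathrm{W} \\simeq 0.23$ and $\\alpha\/\\alpha_\\mathrm{s} \\simeq 1\/11$ near $Q = 1$\\,TeV (the precise numbers depend on the scale and scheme used), the relation corresponds to\n\\begin{equation}\nM_2 \\simeq \\frac{3}{5}\\frac{\\cos^2\\theta_\\mathrm{W}}{\\sin^2\\theta_\\mathrm{W}}\\,M_1 \\simeq 2\\,M_1, \\qquad M_3 \\simeq \\frac{\\alpha_\\mathrm{s}}{\\alpha}\\sin^2\\theta_\\mathrm{W}\\,M_2 \\simeq 5\\,M_1,\n\\end{equation}\ni.e.\\ an approximate weak-scale hierarchy $M_1 : M_2 : M_3 \\approx 1 : 2 : 5$.\n\n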
The coexistence of all three scenarios can be seen in the top panels of Fig.\\ \\ref{fig:MSSM7}, which show the profile likelihood in the $(\\mu, M_1)$ plane (left) and the active mechanisms that bring the relic density close to or below the observed value (right). In the $\\mu < |M_1|$ regions of the plane, corresponding to a mostly Higgsino $\\tilde{\\chi}^0_1$, the chargino co-annihilation and $A\/H$ funnel mechanisms dominate. Moving towards larger $\\mu$ we enter the bino-Higgsino mixture scenario at $\\mu \\sim |M_1|$, before reaching the bino-$\\tilde{\\chi}^0_1$ scenario at $\\mu > |M_1|$. Here the chargino co-annihilation mechanism is no longer relevant, so an acceptable relic density must be achieved either through efficient $A\/H$ funnel annihilations, co-annihilations with the lightest stop or sbottom, or a combination of these mechanisms.\\footnote{The lack of a stau co-annihilation region in the \\textsf{MSSM7}\\xspace is related to the assumption of a common sfermion mass parameter defined at the low scale of $Q=1$\\,TeV. The differences in sfermion masses are then mostly determined by the amount of L\/R mixing in the sfermion mass matrices, rather than RGE running of mass parameters. Since the L\/R mixing terms for both up-type and down-type sfermions are proportional to the corresponding Yukawa couplings, the light stop ends up being the lightest sfermion across much of parameter space, and the light sbottom is always lighter than the light stau.}\n\n\nThe overall best-fit point in the \\textsf{MSSM7}\\xspace is found in the chargino co-annihilation region, with $m_{\\tilde{\\chi}^0_2} \\approx m_{\\tilde{\\chi}^\\pm_1} \\approx m_{\\tilde{\\chi}^0_1} \\approx 260$\\,GeV. As can be seen in the lower panels of Fig.\\ \\ref{fig:MSSM7}, the predicted neutralino relic density at this point can only explain around 10\\% of the observed DM relic density. However, with only slightly heavier neutralino masses there are \\textsf{MSSM7}\\xspace scenarios that achieve close to the same likelihood values -- well within the $1\\sigma$ region -- and account for the full relic density. These are scenarios with a mostly bino $\\tilde{\\chi}^0_1$ and efficient $\\tilde{\\chi}^0_1$--$\\tilde{\\chi}^0_1$ annihilations through the $A\/H$ funnel.\n\nThe cutoff of this $A\/H$ funnel region at $m_{\\tilde{\\chi}^0_1} \\sim 250$\\,GeV, corresponding to $m_{A\/H} \\sim 500$\\,GeV, is due to several independent likelihood contributions that penalise the lower-mass scenarios. In particular, the constraint on BSM contributions to $BR(B \\rightarrow X_s \\gamma)$ plays an important role here, as the $A^0$ mass is closely related to the $H^\\pm$ mass, and a light charged Higgs will induce sizeable SUSY contributions to this decay. Further important constraints on this region come from the LHC Higgs measurements, and also from LHC gluino searches, as the gluino mass parameter $M_3$ is connected to $M_1$ via Eq.\\ \\ref{eq:GUT_relation}, giving $M_3 \\sim 5 M_1$.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSM7_105_217_like2D.pdf}%\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSM7_105_217_ColourMechanism_post.pdf}\\\\\n \\includegraphics[height=3.1mm]{figures\/susy\/MSSM7_rdcolours5.pdf}\n \\caption{\n\tProfile likelihood in the plane of the neutralino mass versus the spin-independent neutralino-proton cross-section in the \\textsf{MSSM7}\\xspace (left), and the relic density mechanisms that are active in different parts of the $2\\sigma$ region (right).
The predicted neutralino-proton cross-section is rescaled at each point by the fraction $f$ of the observed DM relic density that the neutralino relic prediction accounts for. 90\\% CL exclusion limits are shown for the full LUX exposure~\\cite{LUXrun2} and the projected reach for XENON1T (two tonne-years of exposure), XENONnT\/LZ (20 tonne-years of exposure)~\\cite{XENONnTLZ} and DARWIN (200 tonne-years of exposure)~\\cite{DARWIN}. The $1\\sigma$ and $2\\sigma$ regions are outlined by white contours. The stars mark the best-fit points. From~\\cite{MSSM}.\n }\n \\label{fig:MSSM7_direct_detection}\n\\end{figure}\n\nWe note that even the $h\/Z$ funnel mechanisms are present within the $2\\sigma$ parameter regions, for $m_{\\tilde{\\chi}^0_1} \\approx 45$\\,GeV and $m_{\\tilde{\\chi}^0_1} \\approx 62$\\,GeV. However, the allowed scenarios in this low-$m_{\\tilde{\\chi}^0_1}$ region have an almost pure Higgsino $\\tilde{\\chi}^0_1$ anyway, so this alone ensures a predicted relic density far below the observed value, which also explains how the otherwise strong constraints from DM direct detection are avoided.\n\nAs for the GUT-scale models discussed in the previous section, direct DM searches seem the most promising probe of the \\textsf{MSSM7}\\xspace scenarios preferred by this fit. Figure \\ref{fig:MSSM7_direct_detection} shows the profile likelihood (left) and the active DM mechanisms (right) across the plane of the neutralino mass and the spin-independent neutralino-proton cross-section. We see that future direct detection experiments will explore not only the full chargino co-annihilation region, but almost the entire $1\\sigma$ region preferred in the \\textsf{GAMBIT}\\xspace fit.\n\nConcerning the muon $(g-2)$ discrepancy, the fit in~\\cite{MSSM} shows that there is little hope that the \\textsf{MSSM7}\\xspace can provide an explanation. This is not particularly surprising: because the model dimensionality is kept low, relating all sfermion mass parameters to the common $m_{\\tilde{f}}^2$ parameter at the weak scale, it is impossible to get sufficiently light smuons and muon sneutrinos without simultaneously causing significant tension with other observables such as LHC squark searches.\n\n\n\n\\subsubsection{Results for the \\textsf{EWMSSM}\\xspace}\n\\label{sec:EWMSSM}\n\nCurrent SUSY searches by the ATLAS and CMS experiments at the LHC are usually optimised and interpreted assuming a simplified model. These models typically include only two or three different sparticles and assume that 100\\% of decays occur via the signal processes. Such theory simplifications are a necessary compromise given the level of detail and complexity in experimental searches. Nevertheless, this leaves open an important question: what impact do the results from ATLAS and CMS SUSY searches have on the parameter space of more realistic models like the MSSM?\n\nThe \\textsf{GAMBIT}\\xspace analysis in~\\cite{EWMSSM} takes on this question in the context of LHC searches for neutralinos and charginos. The canonical simplified model for these searches is one that assumes production of a purely wino $\\tilde{\\chi}^0_2 \\tilde{\\chi}^\\pm_1$ pair, with subsequent decays to a purely bino $\\tilde{\\chi}^0_1$ via $\\tilde{\\chi}^0_2 \\rightarrow Z \\tilde{\\chi}^0_1$ and $\\tilde{\\chi}^\\pm_1 \\rightarrow W^\\pm \\tilde{\\chi}^0_1$. This motivates searches for events with leptons, jets and missing energy (see e.g.\\ \\cite{Aaboud:2018jiw, Sirunyan:2017lae}).
The \\textsf{GAMBIT}\\xspace study assumes a phenomenologically far richer model, referred to as the \\textsf{EWMSSM}\\xspace. This is the effective theory obtained when assuming that all sparticles except the MSSM electroweakinos are too heavy to affect current collider searches. The \\textsf{EWMSSM}\\xspace is thus a model with six sparticles -- four neutralinos and two charginos -- controlled by only four free MSSM parameters: $M_1$, $M_2$, $\\mu$ and $\\tan\\beta$. Loosely speaking, the bino soft-mass $M_1$ controls the mass of one neutralino, the wino soft-mass $M_2$ controls the masses of one neutralino and one chargino, and the Higgsino mass parameter $\\mu$ sets the masses of two neutralinos and one chargino.\n\nIn contrast to the global fits discussed in the previous two sections, the fit in \\cite{EWMSSM} focuses exclusively on collider constraints. This choice allows the fit to explore the full range of possible collider scenarios in the \\textsf{EWMSSM}\\xspace without further enlarging the model parameter space. Keeping the dimensionality of the parameter space fairly low is of critical importance, due to the large computational expense of this fit: for each sampled \\textsf{EWMSSM}\\xspace parameter point, \\textsf{ColliderBit}\\xspace is used to run full Monte Carlo simulations of the relevant ATLAS and CMS searches. While running full simulations at each point in a global fit is always computationally challenging, it is particularly so when simulating electroweakino searches due to the low signal acceptance rates in these searches.\\footnote{For most of the included LHC searches there is no public information on how background estimates are correlated across signal regions. In these cases the single signal region with the best expected sensitivity must be identified at each \\textsf{EWMSSM}\\xspace parameter point in the fit. This adds to the already substantial computational cost, as distinguishing between ``competing'' signal regions often requires higher Monte Carlo statistics than is needed to get reasonable signal estimates for each individual signal region alone.}\n\nThe analysis in \\cite{EWMSSM} includes \\textsf{ColliderBit}\\xspace simulations of most of the $13$\\,TeV electroweakino searches that were available at the time of the study~\\cite{Aaboud:2018jiw,Aaboud:2018sua,Aaboud:2018htj,Aaboud:2018zeb,CMS:2017fth,Sirunyan:2018iwl,Sirunyan:2017qaj,CMS-PAS-SUS-16-039}. The combined likelihood obtained from these simulations is the main component in the fit likelihood function. The other collider observables going into the total likelihood are a collection of SUSY cross-section limits from LEP and the invisible decay widths of the $Z$ and the $125$\\,GeV Higgs.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSMEW_155_151_like2D.pdf}\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSMEW_152_155_like2D.pdf}\\\\\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSMEW_152_153_like2D.pdf}\n \\includegraphics[width=0.49\\textwidth]{figures\/susy\/MSSMEW_154_153_like2D.pdf}\n \\caption{\n Profile likelihood in four different \\textsf{EWMSSM}\\xspace mass planes: the $(m_{\\tilde{\\chi}^\\pm_1},m_{\\tilde{\\chi}^0_1})$ plane (top left), the $(m_{\\tilde{\\chi}^0_2},m_{\\tilde{\\chi}^\\pm_1})$ plane (top right), the $(m_{\\tilde{\\chi}^0_2},m_{\\tilde{\\chi}^0_3})$ plane (bottom left), and the $(m_{\\tilde{\\chi}^0_4},m_{\\tilde{\\chi}^0_3})$ plane (bottom right).
The white contours show the $1\\sigma$ and $2\\sigma$ preferred regions. The star marks the best-fit point. From~\\cite{EWMSSM}.\n }\n \\label{fig:EWMSSM_mass_planes}\n\\end{figure}\n\nThe main result from \\cite{EWMSSM} is that, when combined, the ATLAS and CMS electroweakino results prefer \\textsf{EWMSSM} scenarios with a distinct pattern of relatively light neutralino and chargino masses (Fig.\\ \\ref{fig:EWMSSM_mass_planes}). The preferred $2\\sigma$ parameter region has all six neutralinos and charginos below $\\sim$$700$\\,GeV, with the lightest neutralino below $\\sim$$200$\\,GeV. The lightest neutralino is always dominantly bino, but it also has a non-negligible wino or Higgsino component. Further, the best-fit parameter region predicts two characteristic $\\gtrsim m_Z$ gaps in the mass spectrum: the first between the mostly bino $\\tilde{\\chi}^0_1$ and the mostly wino (Higgsino) $\\tilde{\\chi}^0_2$\/$\\tilde{\\chi}^\\pm_1$, and the second between $\\tilde{\\chi}^0_2$\/$\\tilde{\\chi}^\\pm_1$ and the mostly Higgsino (wino) $\\tilde{\\chi}^0_4$\/$\\tilde{\\chi}^\\pm_2$.\\footnote{In the preferred scenarios, $\\tilde{\\chi}^0_3$ is always mostly Higgsino and thus fairly close in mass to the other Higgsino-dominated states, i.e.\\ either $\\tilde{\\chi}^0_2$\/$\\tilde{\\chi}^\\pm_1$ or $\\tilde{\\chi}^0_4$\/$\\tilde{\\chi}^\\pm_2$.}\n\nAt first sight this result may seem surprising. None of the included ATLAS and CMS searches have seen a convincing SUSY signal, yet when combined they prefer the low-mass region over the decoupling region, where all \\textsf{EWMSSM}\\xspace collider predictions would align with SM expectations. The reason is that the \\textsf{EWMSSM}\\xspace is able to simultaneously fit a pattern of small excesses across several of the simulated LHC searches, while at the same time avoiding generating too much tension with the other searches. The excesses that mostly drive this result come from searches for 2-, 3-, and 4-lepton final states in ATLAS \\cite{Aaboud:2018jiw,Aaboud:2018sua,Aaboud:2018zeb}, specifically in signal regions that target leptons from on-shell $Z$ and $W$ decays. 
This explains the preference in the fit for electroweakino mass spectra with two $\\gtrsim m_Z$ mass gaps.\n\n\\begin{figure}\n \\centering\n %\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_155_151_obs2D_605.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_152_153_obs2D_605.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_154_153_obs2D_605.pdf}\n %\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_155_151_obs2D_607.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_152_153_obs2D_607.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_154_153_obs2D_607.pdf}\n %\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_155_151_obs2D_608.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_152_153_obs2D_608.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_154_153_obs2D_608.pdf}\n %\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_155_151_obs2D_610.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_152_153_obs2D_610.pdf}\n \\includegraphics[width=0.32\\textwidth]{figures\/susy\/MSSMEW_154_153_obs2D_610.pdf}\n %\n \\caption{\n Contributions to the total fit likelihood from the ATLAS searches in Ref.\\ \\cite{Aaboud:2018zeb} (top), Ref.\\ \\cite{Aaboud:2018jiw} (second and third rows), and Ref.\\ \\cite{Aaboud:2018sua} (bottom), shown across the full $3\\sigma$ regions in the $(m_{\\tilde{\\chi}^\\pm_1}, m_{\\tilde{\\chi}^0_1})$ plane (left), the $(m_{\\tilde{\\chi}^0_2}, m_{\\tilde{\\chi}^0_3})$ plane (middle), and the $(m_{\\tilde{\\chi}^0_4}, m_{\\tilde{\\chi}^0_3})$ plane (right). In the blue regions a non-zero signal prediction in the given search improves the overall fit, while in red regions the signal prediction worsens the fit. In the white regions the given search is not sensitive. The orange contours outline the $1\\sigma$, $2\\sigma$ and $3\\sigma$ regions preferred in the fit. The white star marks the best-fit point. From~\\cite{EWMSSM}.\n }\n \\label{fig:EWMSSM_mass_planes_per_analysis}\n\\end{figure}\n\nTo understand the interplay between the analyses contributing to the excess, we can look at their individual likelihood contributions across the combined best-fit surface. This is done in Fig.\\ \\ref{fig:EWMSSM_mass_planes_per_analysis}, where the contributions from four ATLAS results are displayed across the preferred $3\\sigma$ regions in three different mass planes. When reading these plots it is important to keep in mind that the plotted points are those parameter samples picked out by profiling the \\textit{total} likelihood. The sharp changes in analysis likelihood seen in some plots are due to abrupt changes in what scenarios are picked out by this profiling, which again changes which signal region is selected to set the analysis likelihood value.\n\nOne example of the interplay between analyses is seen by comparing the middle panels on the first and third rows. The first of these show the likelihood contribution from an ATLAS search for 4-lepton final states, with the leptons coming from two $Z$ bosons~\\cite{Aaboud:2018zeb}. We see that fitting a 4-lepton excess in the \\textsf{EWMSSM}\\xspace relies on having non-negligible production of $\\tilde{\\chi}^0_3$, as this allows for signal leptons from the decays $\\tilde{\\chi}^0_3 \\rightarrow Z \\tilde{\\chi}^0_{1,2}$. 
The second of these panels is for an ATLAS search for 3-lepton final states~\\cite{Aaboud:2018jiw}, designed to target $\\tilde{\\chi}^0_2 \\tilde{\\chi}^\\pm_1$ production. For a given $m_{\\tilde{\\chi}^0_2}$, reducing $m_{\\tilde{\\chi}^0_3}$ to $\\lesssim 600$\\,GeV (as preferred by the 4-lepton search) also improves the fit to this 3-lepton search, which for high $m_{\\tilde{\\chi}^0_3}$ sees some tension with the data. At lower $m_{\\tilde{\\chi}^0_3}$, production processes with $\\tilde{\\chi}^0_3$ come into play, involving more complicated event topologies. At the same time the production cross-section for the $\\tilde{\\chi}^0_2 \\tilde{\\chi}^\\pm_1$ pair is reduced somewhat, due to a higher Higgsino component. The combined effect is a change in which 3-lepton signal region is identified as having the best expected sensitivity.\n\nThe combined excess in the $13$\\,TeV searches is estimated in~\\cite{EWMSSM} to have a local significance of $3.3\\sigma$. The impact of $8$\\,TeV LHC results on the preferred low-mass scenarios is investigated by post-processing all parameter samples in the $1\\sigma$ region with simulations of relevant ATLAS and CMS electroweakino searches at $8$\\,TeV \\cite{Aad:2015jqa,ATLAS:2LEPEW_20invfb,ATLAS:3LEPEW_20invfb,CMS:3LEPEW_20invfb}. The result is an upwards shift in the best-fit mass spectrum, by $\\sim20$\\,GeV in all masses, and a small reduction of the estimated significance of the excess, to $2.9\\sigma$.\n\nWe also note that even though the \\textsf{EWMSSM}\\xspace fit did not include DM constraints, parts of the preferred parameter space do give acceptable relic density predictions while avoiding exclusion from current direct and indirect DM searches. This is possible for scenarios with $m_{\\tilde{\\chi}^0_1}$ close to $m_Z\/2$ or $m_h\/2$, where resonant annihilations via the $Z\/h$ funnel can bring the predicted relic density close to or below the observed value.\n\nWhile the small excess seen in the \\textsf{EWMSSM}\\xspace fit is quite possibly due to background fluctuations, the fit demonstrates two important points. First, that LHC constraints on light SUSY can be significantly weaker in realistic SUSY models such as the MSSM than in simplified models.\\footnote{While not discussed here, the analysis in~\\cite{EWMSSM} shows that for every mass hypothesis in the $(m_{\\tilde{\\chi}^0_2}, m_{\\tilde{\\chi}^0_1})$ plane -- not just for points in the best-fit region -- there is a point in the \\textsf{EWMSSM}\\xspace parameter space that fits the \\textit{combined} collider results at least as well as the SM expectation.} Second, that proper statistical combinations of collider searches can be a powerful tool to uncover suggestive patterns in BSM parameter spaces.\n\n\n\n\\subsection{Higgs Portal models for dark matter}\n\nNo definitive evidence has yet been uncovered for non-gravitational interactions of DM with the SM. At some level, however, such interactions must inevitably be generated by effective operators connecting Lorentz-invariant, gauge singlet combinations of SM particles to equivalently symmetric combinations of DM fields. The lowest-dimension such operator in the SM is the Higgs bilinear $H^\\dagger H$. Depending on the spin and gauge representation of a DM candidate $X$, the lowest-order Lorentz- and gauge-invariant DM operator may be either the bilinear $X^\\dagger X$, or a lone DM field. Operators linear in $X$ are only consistent if $X$ is itself a Lorentz invariant (i.e.\\ a scalar), and a gauge singlet.
If it is to be a viable DM candidate however, $X$ must be stable on cosmological timescales. The most straightforward way to achieve this is for $X$ to hold a different charge to SM particles under some new unbroken (typically discrete) symmetry. This has the effect of forbidding terms linear in $X$, preventing the field from decaying.\n\nThe lowest-order operator connecting $X$ to the SM guaranteed to exist at some level is therefore the so-called `Higgs portal' operator $X^\\dagger X H^\\dagger H$. Following electroweak symmetry breaking, this operator gives rise to a mass term for $X$ proportional to $v_0^2$ (with $v_0$ the vacuum expectation value of the Higgs field), a Higgs-DM-DM vertex proportional to $v_0$, and a direct four-particle vertex between two Higgses and two DM particles. The new 3-particle and 4-particle interactions of $X$ with the Higgs boson lead to DM annihilation (enabling thermal production and possible indirect detection), spin-independent DM-nucleon scattering (leading to possible direct detection), DM production at colliders (with the possibility for signals in e.g.\\ monojet searches), and invisible decays of the Higgs to two DM particles when $m_X < m_h\/2$.\n\nDepending on the Lorentz representation of DM, $X^2 H^2$ may be a fully renormalisable dimension 4 operator (if $X$ is a scalar), an effective dimension 4 operator (if $X$ is a vector), or an effective dimension 5 operator (if $X$ is a fermion). All three of these cases have been considered in detail in the literature, with a particular focus on models where $X$ is itself a gauge singlet and the $X^2H^2$ term is therefore the sole link between DM and the SM. The most commonly studied cases have been the $\\mathbb{Z}_2$-symmetric scalar [\\citenum{SilveiraZee,McDonald94,Burgess01,Davoudiasl:2004be,Goudelis09,Yaguna09,Profumo2010a,Andreas:2010dz,Arina11,Mambrini11, Raidal:2011xk,Mambrini:2011ik,He:2011de,Drozd:2011aa,Okada:2012cc,Cheung:2012xb,Okada:2013bna,Cline:2013gha,Chacko:2013lna, Endo:2014cca,Craig:2014lda, Feng15,Duerr15,arXiv:1510.06165,Duerr16,He:2016mls,Han:2016gyy,Dupuis:2016fda,Cuoco:2016jqt,Binder:2017rgn,Ghorbani:2018yfr,Chiang:2018gsn,Stocker:2018avm,Hardy:2018bph,Bernal:2018kcw,Glioti:2018roy,Urbano:2014hda,Escudero:2016gzx,Kanemura:2011nm,Djouadi:2011aa,Djouadi:2012zc,Bishara:2015cha,Ko:2016xwd,Beniwal:2015sdl,Kamon:2017yfx,Dutta:2017sod,Dick:2018lqx}; \\textsf{GAMBIT}\\xspace analyses \\citenum{SSDM,SSDM2}], vector [\\citenum{Djouadi:2011aa,Kanemura:2011nm,Djouadi:2012zc,Bishara:2015cha,Chen:2015dea,DiFranzo:2015nli, Beniwal:2015sdl,Ko:2016xwd,Kamon:2017yfx,Dutta:2017sod,arXiv:1704.05359,Dick:2018lqx,Baek:2014jga}; \\textsf{GAMBIT}\\xspace analysis \\citenum{HP}] and fermionic [\\citenum{Djouadi:2011aa,Kanemura:2011nm,LopezHonorez:2012kv,Djouadi:2012zc,Urbano:2014hda, Baek:2014jga,Bishara:2015cha,Beniwal:2015sdl,Ko:2016xwd,Fedderke:2014wda,Matsumoto:2014rxa,arXiv:1506.04149, arXiv:1506.08805,arXiv:1506.06556, Escudero:2016gzx,Kamon:2017yfx,Dutta:2017sod,Dick:2018lqx,Matsumoto:2018acr}; \\textsf{GAMBIT}\\xspace analysis \\citenum{HP}] variants, along with the $\\mathbb{Z}_3$-symmetric scalar [\\citenum{Belanger2013a,Kang:2017mkl,2017JHEP...10..088B,Hektor:2019ote,Kannike:2019mzk}; \\textsf{GAMBIT}\\xspace analysis \\citenum{SSDM2}].\n\n\n\\subsubsection{$\\mathbb{Z}_2$-symmetric scalar singlet}\n\nThe simplest Higgs portal model for DM, and indeed probably the most minimal of all models for particle DM, is a single, real, gauge-singlet scalar field $S$, protected from decay by a 
$\\mathbb{Z}_2$ symmetry. The only new renormalisable Lagrangian terms allowed by gauge, Lorentz and $\\mathbb{Z}_2$ symmetry are\n\\begin{equation}\n\\mathcal{L}_{\\mathbb{Z}_2} = \\frac12 \\mu_{\\scriptscriptstyle S}^2 S^2 + \\frac14\\lambda_{\\sss S} S^4 + \\frac12\\lambda_{h\\sss S} S^2|H|^2.\n\\label{L_S}\n\\end{equation}\nThe model is fully specified by the $S$ bare mass $\\mu_{\\scriptscriptstyle S}$, the dimensionless $S$ quartic self-coupling $\\lambda_{\\sss S}$, and the dimensionless Higgs portal coupling $\\lambda_{h\\sss S}$. For the most part, the $S$ quartic coupling has little impact on the phenomenology of the model, as it leads only to DM self-interactions, which are not sufficiently constrained by existing data to place strong limits on $\\lambda_{\\sss S}$. A key exception, however, is the impact of $\\lambda_{\\sss S}$ on the running of the other couplings of the theory under renormalisation group flow, which can have important implications for the stability of the electroweak vacuum.\n\nDenoting the physical SM Higgs field by $h$, electroweak symmetry breaking takes $H \\rightarrow \\left[0, (v_0+h)\/\\sqrt{2}\\right]^\\text{T}$. This generates new vertices of the form $v_0hS^2$ and $h^2S^2$, and induces a shift to the $S$ bare mass, such that at tree level\n\\begin{equation}\nm_{\\sss S} = \\sqrt{\\mu_{\\scriptscriptstyle S}^2 + \\frac12\\lambda_{h\\sss S} v_0^2}.\n\\label{ms}\n\\end{equation}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.6\\columnwidth]{figures\/figure_1.pdf}\n\\caption{Feynman diagrams for annihilation, semi-annihilation, nuclear scattering and Higgs decays in scalar singlet Higgs portal models. $N, f$ and $V$ refer to nucleons, fermions and SM electroweak vector bosons ($Z$ and $W$), respectively. Diagrams are shown for the $\\mathbb{Z}_3$-symmetric case, where DM exists in $S$ and anti-$S$ (i.e. $S^*$) states, but the same diagrams apply in the $\\mathbb{Z}_2$-symmetric case with $S=S^*$, except for semi-annihilation (which is absent in the $\\mathbb{Z}_2$ model). The same diagrams also apply to $\\mathbb{Z}_2$-symmetric vector and fermionic Higgs portal models (with $S$ replaced by the relevant DM particle and semi-annihilation also forbidden by the $\\mathbb{Z}_2$ symmetry). From \\cite{SSDM2}.}\n\\label{fig:diagrams}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_19_17_like2D_SingletDM_Z2_low_X}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_19_17_like2D_SingletDM_Z2_full_X}\n \\caption{Profile likelihoods of parameters in the $\\mathbb{Z}_2$-symmetric scalar singlet Higgs portal dark matter model, including constraints from direct and indirect detection, the relic density of dark matter and LHC searches for invisible decays of the Higgs boson, along with various Standard Model, dark matter halo and nuclear uncertainties. \\textit{Left}: the low-mass resonance region. \\textit{Right}: the full mass range. Contours show 1 and 2$\\sigma$ confidence regions, with white corresponding to the main scan (including the 2018 XENON1T direct search \\cite{Aprile:2018dbl}) and grey to a secondary scan using the 2017 XENON1T result \\cite{Aprile:2017iyp}. White stars indicate the location of the best-fit point.
From \\protect\\cite{SSDM2}.}\n\t\\label{fig:z2scalar1}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_19_200_like2D_SingletDM_Z2_full_X}\n \\caption{Results from the same analysis of the $\\mathbb{Z}_2$-symmetric scalar singlet Higgs portal dark matter model as shown in Fig.\\ \\ref{fig:z2scalar1}, but plotted in the plane of the effective spin-independent nuclear scattering cross-section and the scalar mass, in order to compare directly to the sensitivity of direct detection experiments. All models have their effective cross-section defined as $f\\sigma_\\mathrm{SI}$, where $f\\equiv \\Omega_S \/ \\Omega_\\mathrm{DM}$ is the fraction of the relic density constituted by the scalar singlet. Experiments assume $f=1$ when publishing their results. Contours show 1 and 2$\\sigma$ confidence regions, and stars best fits. From \\protect\\cite{SSDM2}.}\n\t\\label{fig:z2scalar2}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_19_17_like2D_SingletDM_Z2_full_vs_X_cut}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_19_200_like2D_SingletDM_Z2_full_vs_X_cut}\n \\caption{Regions in the $\\mathbb{Z}_2$-symmetric scalar singlet model that satisfy all experimental constraints, stabilise the electroweak vacuum and remain perturbative up to scales of $10^{15}$\\,GeV. Contours show 1 and 2$\\sigma$ confidence regions, and stars best fits. Grey contours show the allowed regions without the requirements of vacuum stability and perturbativity. From \\protect\\cite{SSDM2}.}\n\t\\label{fig:z2scalar3}\n\\end{figure}\n\nThe interaction with the physical Higgs endows $S$ with essentially all of the classic phenomenology of WIMP DM, via the diagrams shown in Fig.\\ \\ref{fig:diagrams} -- along with the added possibility of Higgs decays $h\\to SS$ where $m_{\\sss S} \\le m_h\/2$. The leading constraints on the model come from searches for gamma rays from dark matter annihilation in dwarf spheroidal galaxies \\cite{LATdwarfP8}, the observed relic density of dark matter \\cite{Planck18cosmo}, direct searches performed by the XENON1T \\cite{Aprile:2018dbl} and PandaX \\cite{Cui:2017nnn} experiments, and searches for invisible Higgs decays at the LHC \\cite{Belanger:2013xza,CMS-PAS-HIG-17-023}.\n\nThe resulting preferred regions of parameter space are shown in Fig.\\ \\ref{fig:z2scalar1}. These results explicitly allow models where $S$ is only a fraction of the observed DM, and include a fully self-consistent rescaling of the predicted signals at direct and indirect searches according to the fraction $f \\le 1$ of DM constituted by $S$ at each point in the parameter space. The allowed parameter space splits into three regions: one at high masses where direct detection loses sensitivity, a second at intermediate mass where the 4-boson vertex boosts the annihilation cross-section and depletes the relic density, and another at and immediately below $m_{\\sss S} = m_h\/2$, where $S$ annihilates highly efficiently via an $s$-channel resonance mediated by the Higgs, depleting the relic density to below the observed value even for very small values of $\\lambda_{h\\sss S}$.\n\nThe Higgs invisible width constraint rules out large couplings $\\lambda_{h\\sss S}$ at singlet masses below the resonance. The thermal relic density of $S$ provides the lower limit of the low-mass and high-mass allowed regions. 
Indirect detection plays the leading role only on the high-mass edge of the resonance, where thermal effects in the early Universe push annihilation slightly off resonance but late-time annihilation remains strongly boosted. Direct detection plays a significant role throughout the parameter space, as can be seen in Fig.\\ \\ref{fig:z2scalar2}. Except for the very bottom of the resonance region, the entirety of the model's parameter space will soon be probed by direct detection.\n\nGamma-ray lines do not provide any meaningful constraint, as the partial annihilation cross-section for $SS\\to\\gamma\\gamma$ is only appreciable in parts of the parameter space where the relic density is significantly suppressed. Likewise, monojet searches only constrain very large values of $\\lambda_{h\\sss S}$ already excluded by other constraints or expected to lead to new strong dynamics. Indeed, both these points also apply to all other Higgs portal models that we discuss in this review.\n\nGiven that the Higgs portal operator is not just an effective interaction, but a fully renormalisable operator in this model, it is also important to consider the UV behaviour of the theory. Due to the observed values of the top and Higgs masses, the SM possesses a second minimum in its scalar potential at $\\gtrsim \\mathcal{O}(10^{15})$\\,GeV, causing the low-scale vacuum in which we reside to be metastable. Adding such a scalar to the SM impacts the running of the Higgs quartic coupling, raising its value at high scales. This can prevent the quartic coupling from running negative, and make the low-scale minimum a global rather than a local one. The catch is that $\\lambda_{\\sss S}$ must be relatively large in order to achieve this effect. Fig.\\ \\ref{fig:z2scalar3} shows the parts of the parameter space, consistent with all experimental constraints, where $\\lambda_{\\sss S}$ can be pushed high enough to stabilise the SM vacuum, but without pushing any of the couplings non-perturbative below a scale of $10^{15}$\\,GeV. Clearly, the $\\mathbb{Z}_2$-symmetric scalar singlet can solve the vacuum stability problem without introducing new strong dynamics, and satisfy all experimental constraints, but only in a region around \\mbox{$m_{\\sss S} = 1$--2\\,TeV} and $\\sigma_\\mathrm{SI} \\sim 10^{-45}$\\,cm$^2$. Curiously, this is also in the region consistent with the (admittedly very small) excess seen in the most recent XENON1T results \\cite{Aprile:2018dbl}. In any case, this hypothesis will clearly be tested very quickly in the upcoming runs of the LZ and XENONnT \\cite{Akerib:2015cja,XENONnTLZ} experiments.\n\nThe results in Figs.\\ \\ref{fig:z2scalar1}--\\ref{fig:z2scalar3} are based on profile likelihood analyses, and illustrate what is possible in each parameter plane, were one able to freely vary the other parameters of the theory (including nuisance parameters) in order to achieve the best possible fit to all available data. If one instead carries out a Bayesian analysis, looking at the posterior probability density for these parameters, a different picture emerges. In this case, parameter combinations become more likely if they can provide a good fit for a broader range of values of the other parameters of the theory, i.e.\\ if they can fit the data with less fine tuning.
As a result, the low-mass resonance region is strongly disfavoured, as `hitting' the resonance and avoiding the relic density constraint for a given value of $m_{\\sss S}$ requires some fine-tuning of various SM nuisance parameters such as $m_h$; the same is true, to a lesser extent, of the intermediate-mass region. We therefore see that from a Bayesian perspective, the region where the singlet model stabilises the SM vacuum is in fact favoured over the other regions of the theory, even before considering the implications for vacuum stability.\n\n\\begin{figure}[t]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_20_17_like2D_SingletDM_Z3_full_X}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_20_19_like2D_SingletDM_Z3_full_X}\n \\caption{Profile likelihoods of parameters in the $\\mathbb{Z}_3$-symmetric scalar singlet Higgs portal dark matter model, including constraints from direct and indirect detection, the relic density of dark matter and LHC searches for invisible decays of the Higgs boson, along with various Standard Model, dark matter halo and nuclear uncertainties. Contours show 1 and 2$\\sigma$ confidence regions, with white corresponding to the main scan (including the 2018 XENON1T direct search \\cite{Aprile:2018dbl}) and grey to a secondary scan using the 2017 XENON1T result \\cite{Aprile:2017iyp}. White stars indicate the location of the best-fit point. From \\protect\\cite{SSDM2}.}\n\t\\label{fig:z3scalar1}\n\\end{figure}\n\n\\begin{figure}[t]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_20_17_obs2D_60_SingletDM_Z3_full_X}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_singlet_20_19_obs2D_60_SingletDM_Z3_full_X}\n \\caption{Results from the same analysis of the $\\mathbb{Z}_3$-symmetric scalar singlet Higgs portal dark matter model as shown in Fig.\\ \\ref{fig:z3scalar1}, but shaded according to the semi-annihilation fraction $\\alpha$ (Eq.\\ \\protect\\ref{eqn:sa_fraction}). From \\protect\\cite{SSDM2}.}\n\t\\label{fig:z3scalar2}\n\\end{figure}\n\n\\subsubsection{$\\mathbb{Z}_3$-symmetric scalar singlet}\n\nIn contrast to the self-adjoint $\\mathbb{Z}_2$-symmetric scalar singlet, a $\\mathbb{Z}_3$ symmetry leads to a complex scalar DM candidate, with both DM ($S$) and anti-DM ($S^*$) states contributing to the relic density. This symmetry also allows an additional cubic term in the Lagrangian,\n\\begin{equation}\n\\mathcal{L}_{\\mathbb{Z}_3} = \\mu_{\\scriptscriptstyle S}^2 S^\\dagger S + \\lambda_{\\sss S} (S^\\dagger S)^2 + \\frac{\\mu_3}{2}(S^{\\dagger 3}+S^3) + \\lambda_{h\\sss S} S^\\dagger S|H|^2,\n\\end{equation}\nwhere we have introduced the new dimension-1 $S$ cubic coupling $\\mu_3$. This new coupling allows for so-called semi-annihilation processes $SS\\to S^*h$ and $S^*S^*\\to Sh$, shown in Fig.\\ \\ref{fig:diagrams}.\n\nCompared to the $\\mathbb{Z}_2$-symmetric model, semi-annihilation is able to deplete the relic density of DM at intermediate masses and open up an entirely new region of viable parameter space. This is shown in terms of the profile likelihood in Fig.\\ \\ref{fig:z3scalar1}, and highlighted in terms of the semi-annihilation fraction\n\\begin{equation}\n\\alpha=\\frac{1}{2}\\frac{\\langle\\sigma v_\\mathrm{rel}\\rangle_{SS\\rightarrow hS}}{\\langle\\sigma v_\\mathrm{rel}\\rangle+\\frac{1}{2}\\langle\\sigma v_\\mathrm{rel}\\rangle_{SS\\rightarrow hS}},\\label{eqn:sa_fraction}\n\\end{equation}\nin Fig.\\ \\ref{fig:z3scalar2}.
Here $\\langle\\sigma v_\\mathrm{rel}\\rangle$ is the thermally averaged (semi-)annihilation cross-section weighted by the relative velocity between annihilating particles.\n\nThe vacuum structure of the theory is also more complicated than that of the $\\mathbb{Z}_2$-symmetric model, as regions where $\\mu_3 \\geq 2\\sqrt{\\lambda_{\\sss S}}m_{\\sss S}$ or $\\mu_{\\scriptscriptstyle S}^2<0$ and $\\lambda_{h\\sss S}$ is large can possess a second, $\\mathbb{Z}_3$-breaking minimum. The results shown in Figs.\\ \\ref{fig:z3scalar1} and \\ref{fig:z3scalar2} avoid these regions, demanding that $S$ does not itself obtain a VEV, and that the potential remains bounded from below.\n\nLike the $\\mathbb{Z}_2$-symmetric variant, the $\\mathbb{Z}_3$-symmetric model can in principle completely stabilise the SM vacuum. However, because of the various factors of 2 introduced relative to the $\\mathbb{Z}_2$ case, by virtue of DM not being self-adjoint, the region where this is possible is in fact in strong tension with the results from both XENON1T \\cite{Aprile:2018dbl} and PandaX \\cite{Cui:2017nnn}. $\\mathbb{Z}_3$-symmetric models that stabilise the SM vacuum and produce the entire observed DM relic density are ruled out at 99\\% confidence; those constituting only a fraction of DM are ruled out at 98\\% confidence. The same is expected of other $\\mathbb{Z}_N$-symmetric models with $N>3$, which also feature non-self-adjoint DM.\n\nAs in the $\\mathbb{Z}_2$-symmetric case, a Bayesian analysis prefers the higher-mass part of the parameter space, due to the fine-tuning needed to achieve agreement with all experimental data in both the resonance and semi-annihilation (intermediate mass) regions. In this case, the additional tuning in $\\mu_3$ required to satisfy the condition $\\mu_3 \\leq 2\\sqrt{\\lambda_{\\sss S}}m_{\\sss S}$ -- and to achieve sufficient semi-annihilation in the intermediate-mass region -- further penalises these regions.\n\n\\begin{figure}[t]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_VDM_2_1_like2D_combined_low_mass}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_VDM_2_1_like2D_combined_high_mass}\n \\caption{Profile likelihoods of parameters in the $\\mathbb{Z}_2$-symmetric vector singlet Higgs portal dark matter model, including constraints from direct and indirect detection, the relic density of dark matter and LHC searches for invisible decays of the Higgs boson, along with various Standard Model, dark matter halo and nuclear uncertainties. \\textit{Left}: the low-mass resonance region. \\textit{Right}: the full mass range. Grey shading indicates the area that fails the unitarity cut (Eq.\\ \\ref{eq:vec_unitarity}). Orange annotations indicate the edge of the allowed parameter space along which the model reproduces the entire cosmological abundance of dark matter. Contours show 1 and 2$\\sigma$ confidence regions. White stars indicate the location of the best-fit point. 
From \\protect\\cite{HP}.}\n\t\\label{fig:vector}\n\\end{figure}\n\n\n\\subsubsection{$\\mathbb{Z}_2$-symmetric vector singlet}\n\nIf DM is a $\\mathbb{Z}_2$-symmetric vector singlet $V_\\mu$ interacting with the SM via the Higgs portal, its effective Lagrangian takes the form\n\\begin{equation}\n\\mathcal{L_V} = -\\frac{1}{4} W_{\\mu\\nu} W^{\\mu\\nu} + \\frac{1}{2} \\mu_V^2 V_\\mu V^\\mu - \\frac{1}{4!} \\lambda_{V} (V_\\mu V^\\mu)^2 + \\frac{1}{2} \\lambda_{hV} V_\\mu V^\\mu H^\\dagger H.\n\\label{eq:Lag_V}\n\\end{equation}\nHere $W_{\\mu\\nu} \\equiv \\partial_\\mu V_\\nu - \\partial_\\nu V_\\mu$ is the field strength tensor for the new vector. The tree-level DM mass has exactly the same form as Eq.\\ \\ref{ms}. Although all terms here are dimension 4, the theory is not renormalisable, as it possesses an explicit mass term for $V_\\mu$. Perturbative unitarity is violated at energies above this mass. In the \\textsf{GAMBIT}\\xspace analysis \\cite{HP}, this issue was avoided by excluding the region of parameter space\n\\begin{equation}\n0 \\le \\lambda_{hV} \\le 2m_V^2\/v_0^2\n\\label{eq:vec_unitarity}\n\\end{equation}\nfrom the analysis.\n\nThe phenomenology of the vector model is very similar to that of the $\\mathbb{Z}_2$-symmetric scalar variant, with the only major difference being the absence of the intermediate-mass solution due to the unitarity requirement (Fig.\\ \\ref{fig:vector}; the region excluded from the analysis due to the unitarity condition is shown in grey). The Bayesian analysis once again prefers the high-mass region due to the fine-tuning of nuisance parameters required in the resonance region.\n\n\\subsubsection{$\\mathbb{Z}_2$-symmetric Dirac \\& Majorana fermionic singlets}\n\nThe Lagrangians of the fermionic singlet Higgs portal models are\n\\begin{align}\n \\mathcal{L}_{\\chi} &= \\frac{1}{2} \\overline{\\chi} (i\\slashed{\\partial} - \\mu_\\chi) \\chi - \\frac{1}{2}\\frac{\\lambda_{h\\chi}}{\\Lambda_\\chi} \\Big(\\cos\\theta \\, \\overline{\\chi}\\chi + \\sin\\theta \\, \\overline{\\chi}i\\gamma_5 \\chi \\Big) H^\\dagger H,\n \\label{eq:Lag_chi}\\\\\n \\mathcal{L}_{\\psi} &= \\overline{\\psi} (i \\slashed{\\partial} - \\mu_\\psi) \\psi \\nonumber - \\frac{\\lambda_{h\\psi}}{\\Lambda_\\psi} \\Big(\\cos\\theta \\, \\overline{\\psi}\\psi + \\sin\\theta \\, \\overline{\\psi}i\\gamma_5 \\psi \\Big) H^\\dagger H,\n \\label{eq:Lag_psi}\n\\end{align}\nwith the Majorana variant denoted $\\chi$ and the Dirac variant $\\psi$. These noticeably possess dimension-5 effective portal operators suppressed by the scale of new physics $\\Lambda$, with both scalar ($CP$-even) and pseudoscalar ($CP$-odd) couplings. The degree to which the portal interaction violates $CP$ is dictated by the mixing angle $\\theta$, where $\\theta = 0$ corresponds to pure $CP$ conservation and $\\theta = \\frac\\pi2$ to maximal $CP$ violation.\n\nAs in the scalar and vector models, the portal interaction produces terms quadratic in the DM field following electroweak symmetry breaking. The pseudoscalar coupling leads to an imaginary mass term, which must be rotated away with the field transformation $X \\rightarrow e^{i\\gamma_5 \\alpha\/2} X$ for $X \\in \\{\\chi, \\psi\\}$, in order to arrive at the physical (real) mass. This introduces a new parameter $\\alpha$. 
The physical masses are then\n\\begin{equation}\nm_{X}^2 = \\left(\\mu_{X} + \\frac{1}{2}\\frac{\\lambda_{hX}}{\\Lambda_{X}} v_0^2 \\cos\\theta \\right)^2 + \\left(\\frac{1}{2}\\frac{\\lambda_{hX}}{\\Lambda_{X}}v_0^2 \\sin\\theta \\right)^2.\n\\end{equation}\nThe rotation parameter $\\alpha$ is fixed by the requirement that the mass be real, so all phenomenology can be described by three parameters: $m_X$, $\\lambda_{hX}\/\\Lambda_X$ and $\\xi \\equiv \\theta + \\alpha$. Notably, the pure $CP$-conserving theory ($\\theta = 0$) remains $CP$-conserving after electroweak symmetry breaking ($\\xi = 0$), but maximal $CP$ violation before electroweak symmetry breaking does not correspond to maximal violation after the symmetry is broken (i.e. $\\theta = \\frac\\pi2 \\notimplies \\xi = \\frac\\pi2$).\n\nWhilst the $CP$-even Higgs portal coupling leads to the familiar velocity and momentum-independent nuclear scattering cross-section, the $CP$-odd coupling gives rise to an interaction suppressed by $q^2$, the square of the momentum exchanged in the scattering event. This leads to an overall suppression of direct detection signals and corresponding constraints for $\\xi \\rightarrow \\frac\\pi2$. Conversely, the $CP$-odd coupling produces a velocity and momentum-independent annihilation cross-section, whereas the $CP$-even coupling gives rise to a velocity-suppressed annihilation cross-section.\n\n\\begin{figure}[tbp]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_2_1_like2D_combined_low_mass}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_2_1_like2D_combined_high_mass}\\\\\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_2_3_like2D_combined}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_3_1_like2D_combined}\n \\caption{Profile likelihoods of parameters in the $\\mathbb{Z}_2$-symmetric Majorana fermion singlet Higgs portal dark matter model, including constraints from direct and indirect detection, the relic density of dark matter and LHC searches for invisible decays of the Higgs boson, along with various Standard Model, dark matter halo and nuclear uncertainties. The upper-left panel shows a zoomed-in view of the low-mass resonance region. Grey shading indicates the area that fails the unitarity cut (Eq.\\ \\ref{eq:fermion_unitarity}). Orange annotations indicate the edge of the allowed parameter space along which the model reproduces the entire cosmological abundance of dark matter. Contours show 1 and 2$\\sigma$ confidence regions. White stars indicate the location of the best-fit point. From \\protect\\cite{HP}.}\n\t\\label{fig:fermion1}\n\\end{figure}\n\n\\begin{figure}[tbp]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_2_1_post2D_xi_free_TWalk_low_mass}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_2_1_post2D_xi_free_TWalk_high_mass}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_2_3_post2D_xi_free_TWalk}\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_3_1_post2D_xi_free_TWalk}\n \\caption{Posterior probability densities from a Bayesian analysis of the $\\mathbb{Z}_2$-symmetric Majorana fermion singlet Higgs portal dark matter model, using the same likelihood functions as Fig.\\ \\protect\\ref{fig:fermion1}. White bullets indicate posterior means; other annotations are as in Fig.\\ \\protect\\ref{fig:fermion1}. 
From \\protect\\cite{HP}.}\n\t\\label{fig:fermion2}\n\\end{figure}\n\nProfile likelihoods from the global fit to the Majorana fermion model are shown in Fig.\\ \\ref{fig:fermion1}. Results for Dirac fermion dark matter are broadly very similar, and differ from the Majorana case only in the exact location of the border of the allowed parameter space, reflecting the essentially inconsequential nature of the relative factors of 2 between the two Lagrangians. Grey regions correspond to the regime\n\\begin{equation}\n\\lambda_{hX}\/\\Lambda_{X} \\geq 2\\pi\/m_X,\n\\label{eq:fermion_unitarity}\n\\end{equation}\nwhere the validity of the EFT becomes questionable. Further discussion on this issue can be found in Ref.\\ \\cite{HP}; it would also be possible to unitarise the theory, and draw further constraints in this region, using the $K$-matrix formalism \\cite{Bell:2016obu,Balaji:2018qyo}.\n\nThe preferred regions in the mass-coupling plane (upper panels of Fig.\\ \\ref{fig:fermion1}) include the now-familiar resonance and high-mass regions. However, unlike the vector and scalar models, these are fully connected by valid models at all masses, with the preferred region bounded from below mostly by the relic density constraint, supported by indirect detection. This is because profiling over $\\xi$ allows for the selection of $CP$-violating couplings in order to avoid constraints from direct detection. The degree of tuning in $\\xi$ required to achieve this is apparent in the lower panels of Fig.\\ \\ref{fig:fermion1}, where it is clear that good fits can be found for any value of $\\xi$ in the resonance region, but that higher masses require some degree of $CP$ violation in order to avoid direct detection. This becomes even clearer in the equivalent Bayesian results shown in Fig.\\ \\ref{fig:fermion2}, where intermediate masses and couplings are disfavoured relative to other regions, due to the need to make $CP$ violation nearly maximal in order to avoid direct detection.\n\n\\begin{figure}[t]\n\t\\centering\n \\includegraphics[width=0.495\\columnwidth]{figures\/plot_MDM_3_post1D_xi_free_TWalk}\n \\caption{Marginalised one-dimensional posterior probability density for the $CP$-mixing parameter $\\xi$ in the $\\mathbb{Z}_2$-symmetric Majorana fermion singlet Higgs portal dark matter model. This result has been extracted from the same analysis as that shown in Fig.\\ \\protect\\ref{fig:fermion2}. The value $\\xi = 0 = \\pi$ corresponds to $CP$ conservation; a clear preference for violation of $CP$ symmetry is evident. The blue bullet indicates the posterior mean value of $\\xi$, and the red star the value of $\\xi$ at the best-fit sample. From \\cite{HP}.}\n\t\\label{fig:fermion3}\n\\end{figure}\n\nIntegrating the posterior over all parameters other than $\\xi$ (Fig.\\ \\ref{fig:fermion3}), there is a clear preference for $CP$ violation. This reflects the fact that the more $CP$ violation permitted, the broader the range of other parameters able to give good fits to the combined data of all experiments. Performing Bayesian model comparison between the full model and its pure $CP$-conserving subspace (i.e.\\ $\\xi = 0$) results in Bayes factors of between 70:1 and 140:1, depending on the adopted priors. This indicates a strong preference for $CP$ violation in fermionic Higgs portal models. 
Bayesian model comparison between the scalar, vector and fermionic variants of the Higgs portal DM model reveals essentially equal odds for each of the scalar and fermionic models, but a 6:1 preference for all of these models over the vector variant.\n\n\\subsection{Axions}\n\n\\subsubsection{Axion models and their implementation in \\textsf{GAMBIT}\\xspace}\n\nAxions are an intriguing theoretical possibility due to their ability to solve the strong-$CP$ problem of the SM whilst providing a credible DM candidate~\\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah,1986_turner_axiondensity}. One can also use axion-like particles to reconcile various tensions between astrophysical observations and theory, including the cooling of white dwarfs~\\cite{Isern:1992gia,1205.6180,1211.3389,1512.08108,1605.06458,1605.07668,1708.02111}, and the transparency of the Universe to gamma rays~\\cite{0707.4312,0712.2825,1001.0972,1106.1132,1201.4711,1302.1208}.\n\nThe strong-$CP$ problem is ultimately a fine-tuning problem, arising from the fact that the SM symmetries permit a $CP$-odd term in the SM Lagrangian density of the form:\n\n\\begin{equation}\n\\mrm{\\pazocal{L}}{QCD} \\supset - \\frac{\\mrm{\\alpha}{S}}{8\\pi} \\mrm{\\theta}{QCD} G_{\\mu \\nu}^a \\widetilde{G}^{\\mu \\nu, a} \\, , \\label{eq:QCDLagrangian}\n\\end{equation}\nwhere $G_{\\mu \\nu}^a$ is the gluon field strength tensor, $\\widetilde{G}^{\\mu \\nu, a}$ is its dual (both of which have the $SU(3)$ gauge index $a$ explicitly shown), and $\\mrm{\\alpha}{S}$ is the strong coupling constant. The angle $\\mrm{\\theta}{QCD} \\in [-\\pi, \\pi]$ is a free parameter. In the SM, the term also receives a contribution from the chiral anomaly which, for down- and up-type Yukawa matrices $Y_d$ and $Y_u$, replaces $\\mrm{\\theta}{QCD}$ by the effective angle\n\\begin{equation}\n\\mrm{\\theta}{eff} \\equiv \\mrm{\\theta}{QCD} - \\arg \\left [ \\det (Y_dY_u) \\right ] \\, . \\label{eq:thetaeff}\n\\end{equation}\nA non-zero $\\mrm{\\theta}{eff}$ would result in $CP$-violating effects in strong interactions, which are severely constrained by observed upper limits on the electric dipole moment of the neutron, demanding $|\\mrm{\\theta}{eff}|\\mathrel{\\rlap{\\lower4pt\\hbox{$\\sim$}}\\raise1pt\\hbox{$<$}} \\num{e-10}$ \\cite{1509.04411}. Naively, this can only be avoided in the SM by fine-tuning the value of $\\mrm{\\theta}{QCD}$ to cancel the contribution from the chiral anomaly.\n\nAn alternative solution, first proposed by Peccei and Quinn~\\cite{1977_pq_axion1,1977_pq_axion2}, is to add a new global, axial $U(1)$ symmetry spontaneously broken by the vacuum expectation value $v$ of a complex scalar field. This breaking has an associated pseudoscalar Nambu-Goldstone boson, $a(x)$, which supplements $\\mrm{\\theta}{eff}$ by a new term $Na(x)\/v$, where the non-zero integer $N$ is the colour anomaly of the added symmetry. The Vafa-Witten theorem~\\cite{1984_vafa_vafawitten1,1984_vafa_vafawitten2} can then be used to show that $\\mrm{\\theta}{eff}+Na(x)\/v$ is dynamically driven to zero, solving the strong $CP$ problem.\n\nIn the resulting theory of the QCD axion, the axion is practically massless until the time of the QCD phase transition, due to a shift symmetry of the $U(1)$ phase, which prevents a mass term in the Lagrangian. After this, however, it picks up a small, temperature-dependent mass due to breaking of the continuous shift symmetry by fluctuations of the gluon fields. 
This gives rise to an effective axion potential\n\\begin{equation}\n\tV(a) = f_a^2 \\, m_a^2 \\, \\left[1 - \\cos (a\/f_a) \\right] \\, , \\label{eq:axion_eff_pot}\n\\end{equation}\nwhere $m_a$ is the temperature-dependent axion mass and $f_a \\equiv v\/N$. The zero-temperature axion mass, $m_{a,0}$, can be calculated using next-to-leading order chiral perturbation theory, and it turns out to be inversely proportional to $f_a$ for the QCD axion. At higher temperatures, numerical estimates of the mass are available from lattice QCD results, which can be described to a good approximation by\n\\begin{equation}\nm_a(T) = m_{a,0}\n\\begin{cases}\n\\hfil 1 \\hfil & \\mathrm{if \\; } T \\leq T_\\chi \\\\\n\\left ( \\frac{T_\\chi}{T} \\right )^{\\beta\/2} & \\mathrm{otherwise}\n\\end{cases} \\, . \\label{eq:axionmass}\n\\end{equation}\n$T_\\chi$ and $\\beta$ are in principle calculable, but can be left as nuisance parameters in order to account for systematic uncertainties in the calculations.\n\nIn fact, QCD axions are only one instance of a general class of \\emph{axion-like particles} (ALPs), which could generally result from the breaking of a $U(1)$ symmetry at some scale $f_a$, with mass generation occurring via the explicit breaking of the residual symmetry at some lower scale $\\Lambda$~\\cite{1987_kim_lightpseudoscalars,1002.0329,1801.08127}. It can be shown that in a Friedmann-Robertson-Walker-{Lema\\^itre} universe, a QCD axion or ALP field $\\theta(t)=a(t)\/f_a$ satisfies the equation of motion\n\\begin{equation}\n \\ddot{\\theta} + 3H(t) \\, \\dot{\\theta} + m_a^2(t) \\, \\sin (\\theta) = 0,\n\\label{eq:AxionFieldEq}\n\\end{equation}\nwhere we have assumed the canonical axion potential of\n\\begin{equation}\n V(\\theta) = f_a^2m_a^2\\left[1-\\cos (\\theta)\\right]. \\label{eq:potential}\n\\end{equation}\nThis is subject to the boundary condition $\\theta(\\mrm{t}{i}) = \\mrm{\\theta}{i}$ and $\\dot{\\theta}(\\mrm{t}{i}) = 0$, where $\\mrm{\\theta}{i}$ is called the \\textit{initial misalignment angle}.\n\nThe \\textsf{GAMBIT}\\xspace collaboration completed a comprehensive study of axion and broader ALP theories in 2018~\\cite{Axions}, using an extensive list of experimental constraints. These rely on the interactions of ALPs with SM matter, which can be studied in an effective field theory framework~\\cite{Kaplan:1985dv,1985_srednicki_axioneft,1986_georgi_axioneft}.\n\nThe most general axion\/ALP model in \\textsf{GAMBIT}\\xspace assumes the effective Lagrangian density to take the form\n\\begin{equation}\n\t\\pazocal{L}_a^\\mathrm{int} = -\\frac{f_ag_{a\\gamma\\gamma}}{4} \\theta F_{\\mu\\nu}\\widetilde{F}^{\\mu\\nu} - \\frac{f_ag_{aee}}{2m_e} \\bar{e}\\gamma^\\mu\\gamma_5e\\partial_\\mu \\theta \\, .\\label{eq:ax:lagrange}\n\\end{equation}\nNote that this provides for possible axion-photon and axion-electron interactions, whilst ignoring terms for other interactions that do not currently give rise to interesting experimental observables. The complete family tree of \\textsf{GAMBIT}\\xspace axion\/ALP models is shown in Fig.\\ \\ref{fig:AxionModelTree}, headed by the \\textsf{GeneralALP}\\xspace model, whose parameters have all now been defined. 
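The qualitative behaviour encoded in Eq.~\ref{eq:AxionFieldEq} -- a field frozen by Hubble friction until $3H(t) \sim m_a(t)$, followed by damped oscillations that redshift like cold dark matter -- is straightforward to reproduce numerically. The short sketch below is purely illustrative and is not the \textsf{GAMBIT}\xspace implementation: it assumes radiation domination ($H = 1/2t$), a toy temperature--time relation and arbitrary numerical values, and uses the temperature-dependent mass of Eq.~\ref{eq:axionmass}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy parameters (arbitrary units, chosen only so that the onset of oscillations
# falls inside the integration window); none of these are GAMBIT inputs.
m_a0, T_chi, beta = 20.0, 1.0, 8.0   # zero-T mass, switch-over temperature, exponent
theta_i = 1.0                        # initial misalignment angle

def H(t):                            # Hubble rate, radiation domination assumed
    return 1.0 / (2.0 * t)

def T_of_t(t):                       # toy temperature-time relation, T ~ t^(-1/2)
    return 10.0 / np.sqrt(t)

def m_a(t):                          # temperature-dependent mass, cf. Eq. (eq:axionmass)
    T = T_of_t(t)
    return m_a0 if T <= T_chi else m_a0 * (T_chi / T) ** (beta / 2.0)

def rhs(t, y):                       # y = (theta, dtheta/dt), cf. Eq. (eq:AxionFieldEq)
    theta, dtheta = y
    return [dtheta, -3.0 * H(t) * dtheta - m_a(t) ** 2 * np.sin(theta)]

sol = solve_ivp(rhs, (1.0, 300.0), [theta_i, 0.0], rtol=1e-6, atol=1e-9,
                dense_output=True)

t = np.geomspace(1.0, 300.0, 400)
theta = sol.sol(t)[0]
m_vec = np.array([m_a(ti) for ti in t])
t_osc = t[3.0 * H(t) < m_vec][0]     # rough onset of oscillations, where 3H ~ m_a
print(f"field frozen near theta_i until t ~ {t_osc:.1f}, then oscillates")
print(f"theta at the end of the integration: {theta[-1]:+.3e}")
\end{verbatim}
The late-time oscillations are what allow the realignment mechanism to contribute to (or saturate) the observed relic density, as discussed in Section~\ref{sec:axionL}.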
The \\textsf{GeneralALP}\\xspace model provides a phenomenological description of axion physics that is not constrained to give physical solutions, as the couplings are not inversely proportional to $f_a$.\n\nThe \\textsf{QCDAxion}\\xspace model appears as a child model, and differs from the more general case by having tight constraints on some parameters, arising from the known relationships with the QCD scale. The axion-electron coupling is traded for the model-dependent form factor~$C_{aee}$\n\\begin{equation}\n\tg_{aee} = \\frac{m_e}{f_a} \\; C_{aee}\\, , \\label{eq:qcdaxioncouplings1}\n\\end{equation}\nwhilst the axion-photon coupling is replaced by the model-dependent ratio of the electromagnetic and colour anomalies~$E\/N$\n\\begin{equation}\n\tg_{a\\gamma\\gamma} = \\frac{\\mrm{\\alpha}{EM}}{2\\pi f_a}\\left(\\frac{E}{N} - \\widetilde{C}_{a\\gamma\\gamma}\\right) \\, .\\label{eq:qcdaxioncouplings2}\n\\end{equation}\n$\\widetilde{C}_{a\\gamma\\gamma}$ is a model-independent contribution from axion-pion mixing, which is taken from Ref.~\\cite{1511.02867}, and assigned a nuisance likelihood with an appropriate uncertainty. Note that the ratio $E\/N$ should in principle take discrete values, but it is sampled as a continuous parameter for convenience, since the possible rational values that it can take are closely spaced. The final nuisance parameter of the \\textsf{QCDAxion}\\xspace model is $\\Lambda_\\chi$, which results from replacing the parameter $m_{a,0}$ of the \\textsf{GeneralALP}\\xspace model by an energy scale such that\n\\begin{equation}\n\tm_{a,0} \\equiv \\frac{\\Lambda_\\chi^2}{f_a} \\, .\n\\end{equation}\nThe value of $\\Lambda_\\chi$ is taken from first-principle calculations of the zero-temperature axion mass provided in Ref.~\\cite{1511.02867},\\footnote{This value was later updated in \\cite{Gorghetto:2018ocs}, after the appearance of Ref.\\ \\cite{Axions}.} and it is subject to a Gaussian nuisance likelihood.\n\nThe other models of interest for this review are the \\textsf{KSVZAxion}\\xspace and \\textsf{DFSZAxion}\\xspace model variants, which involve further field content being added to the SM. In \\textsf{KSVZAxion}\\xspace models~\\cite{1979_kim_ksvz,1980_shifman_ksvz}, the SM is supplemented by one or more electrically neutral, heavy quarks, and there are no tree-level interactions between the axion and SM fermions. There is, however, still an axion-photon interaction, which generates an axion-electron interaction at one loop. The \\textsf{GAMBIT}\\xspace study investigated four different \\textsf{KSVZAxion}\\xspace models, distinguished only by the choice of $E\/N$ from the set 0, 2\/3, 5\/3 and 8\/3.\n\n\\textsf{DFSZAxion}\\xspace models supplement the SM by an additional Higgs doublet~\\cite{1980_zhitnitsky_dfsz,1981_dine_dfsz}, which results in direct axion-electron interactions. 
Defining the ratio of the two Higgs vacuum expectation values to be $\\tan (\\beta^\\prime)$, one can write two variants of the \\textsf{DFSZAxion}\\xspace scenario as\n\\begin{align}\n\t\\begin{array}{lll}\n\t\tC_{aee} = \\sin^2 (\\beta^\\prime)\\left\/3\\right., \\quad \\phantom{.} & E\/N = 8\/3 \\quad \\phantom{.}& (\\textsf{DFSZAxion-I}\\xspace) \\, \\\\\n\t\tC_{aee} = \\left[1-\\sin^2 (\\beta^\\prime) \\right]\\left\/3\\right., \\quad \\phantom{.} & E\/N = 2\/3 \\quad \\phantom{.} & (\\textsf{DFSZAxion-II}\\xspace) \\,\n\t\\end{array} \\label{eq:dfsz:caee}.\n\\end{align}\nIt is thus convenient to replace the parameter~$C_{aee}$ in the \\textsf{QCDAxion}\\xspace model by $\\tan (\\beta^\\prime)$.\n\n\\begin{figure}[bt]\n\t\\centering\n\t\\input{include\/tree.tex}\n\t\\caption{Family tree of axion models in \\textsf{GAMBIT}\\xspace. The numbers in brackets refer to the number of model parameters; $(n+m)$ indicates $n$ (largely unconstrained) fundamental parameters of the model and $m$ (typically well-constrained) nuisance parameters. From \\cite{Axions}.}\n\t\\label{fig:AxionModelTree}\n\\end{figure}\n\n\\subsubsection{Experimental constraints on axions}\n\\label{sec:axionL}\nMany experiments are sensitive to the axion theories described here, and current null results place tight constraints on axions for specific combinations of masses and coupling strengths. Here we provide a brief review of those constraints, referring the reader to Ref.~\\cite{Axions} for a detailed description of the experimental likelihoods.\n\n\\begin{itemize}\n\\item \\textbf{Light-shining-through-wall (LSW) experiments: }Photon-axion interactions would allow photons to pass through a wall by becoming an axion, only to convert back to a photon on the other side. LSW experiments attempt to observe this by shining laser light onto an opaque material in the presence of a strong magnetic field. The \\textsf{GAMBIT}\\xspace LSW likelihood uses the results from the ALPS-I experiment, using data for both evacuated and gas-filled magnets~\\cite{1004.1313}.\n\\item \\textbf{Helioscopes: }Axion production in the Sun can be probed by observing the solar disc with a long magnet contained in an opaque casing. Any axions produced in the Sun that made it to Earth would pass through the exterior, and potentially convert to photons within the magnetic field in the interior. The details of solar axion production depend on the solar model, in addition to the axion-photon and axion-electron couplings. The \\textsf{GAMBIT}\\xspace axion studies utilise the AGSS09met solar model~\\cite{Serenelli09,AGSS} and its more recent iteration~\\cite{1611.09867}, and utilise two separate likelihoods for the 2007 and 2017 results of the CAST experiment~\\cite{hep-ex\/0702006,1705.02290}.\n\\item \\textbf{Haloscopes (cavity experiments): }Axion haloscopes aim to detect resonant axion-photon conversion inside a tunable cavity~\\cite{1983_Sikivie, 1985_Sikivie}, with microwave cavities providing the greatest current sensitivity to axions. Unfortunately, the resonant nature of the experiment means that one obtains highly sensitive constraints only within a very narrow mass range. The ability of haloscope experiments to detect axions depends on their cosmological abundance, as well as the galactic DM velocity distribution~\\cite{2011_Hoskins}. 
The \\textsf{GAMBIT}\\xspace study combines separate likelihood terms for the Rochester-Brookhaven-Fermi (RBF)~\\cite{DePanfilis:1987dk,Wuensch:1989sa}, University of Florida (UF)~\\cite{Hagmann:1990tj}, ADMX 1998-2009~\\cite{astro-ph\/9801286,Asztalos:2001tf,astro-ph\/0310042,astro-ph\/0603108,0910.5914} and ADMX 2018~\\cite{1804.05750} datasets.\n\\item \\textbf{Dark matter relic density: }Although axions are not a thermal relic such as those encountered in WIMP models, the relic abundance of axion DM is calculable numerically via the details of the realignment mechanism that follow from the equation of motion given in Eq.~\\ref{eq:AxionFieldEq}. This can be compared with the observed value from the most recent \\emph{Planck} analysis~\\cite{Planck15cosmo}. The \\textsf{GAMBIT}\\xspace axion study applied this as both an upper limit (in which case axions are allowed to provide only a component of DM) and, in separate analyses of each model, a measurement. In the former case, anticipated yields in experiments that rely on the local DM density were scaled accordingly.\n\\item \\textbf{Distortions of gamma-ray spectra: }Axion-photon conversions could occur in strong galactic or inter-galactic magnetic fields, resulting in a distortion of the spectra of distant sources~\\cite{Raffelt:1987im,hep-ph\/0111311,hep-ph\/0204216,0704.3044}. There is a critical energy scale $E_\\mathrm{crit}$ at which photons will efficiently convert into axions, and it can be shown that spectral distortions only occur in real measurements when the critical energy lies within the spectral window of the instrument~\\cite{1205.6428,1305.2114}. This has the effect of localising constraints from spectral distortion measurements to specific ranges of the axion mass. The \\textsf{GAMBIT}\\xspace axion study utilises a likelihood based on H.E.S.S.\\ studies of the active galactic nucleus PKS 2155-304~\\cite{1311.3148}.\n\\item \\textbf{Supernova 1987A: }If axions had been produced in the SN1987A supernova explosion, they could have been converted to photons in the Galactic magnetic field, and detected as a coincident gamma-ray burst by the Solar Maximum Mission~\\cite{Chupp:1989kx}. The absence of this observation has been used to constrain axion properties. The \\textsf{GAMBIT}\\xspace study uses a likelihood based on Ref.~\\cite{1410.3747}.\n\\item \\textbf{Horizontal Branch stars and the R parameter: }The existence of axions would provide an extra mechanism of energy loss for stars, causing them to cool faster~\\cite{Sato:1975vy,Raffelt:1990yz,book_raffelt_laboratories}. This would affect the relative time that stars spend on the Horizontal Branch (HB) and upper Red Giant Branch (RGB), which in turn sets the observed ratio of the numbers of stars on these branches ($R = \\mrm{N}{HB}\/\\mrm{N}{RGB}$). Theory suggests that axions would have the most significant impact on the lifetimes of HB stars, leading to a reduction in $R$. The \\textsf{GAMBIT}\\xspace $R$ parameter likelihood is based on the comparison of a calculation of the $R$ parameter for axion theories~\\cite{1512.08108,1983A&A...128...94B,Raffelt:1989xu,1311.1669,1406.6053} with the observed value of $\\mrm{R}{obs}=1.39 \\pm 0.03$~\\cite{1406.6053}, which is based on a weighted average of cluster count observations \\cite{astro-ph\/0403600}.\n\\item \\textbf{White Dwarf cooling hints: }White dwarfs (WDs) are intriguing axion laboratories for several reasons. 
The first is that energy loss via axion production in WDs can be probed experimentally using measurements of the oscillations of their radii and luminosities. These can be related to their internal structure via asteroseismology, and measurements of the decrease in the oscillation periods can be related to energy loss. The second reason is that WDs have electron-degenerate cores, allowing us to probe the axion-electron coupling rather than the axion-photon coupling. A number of previous studies have calculated the expected period decrease in the presence of axions. The \\textsf{GAMBIT}\\xspace WD cooling likelihood is based on interpolation of the results and uncertainties found in Refs.~\\cite{1205.6180,1211.3389,1605.06458,1605.07668}. Current evidence suggests that WDs actually require an additional cooling mechanism relative to standard models, but this remains controversial due to a number of experimental and theoretical issues. The \\textsf{GAMBIT}\\xspace axion paper thus contains studies generated both with and without WD cooling hints added to the combined likelihood.\n\\end{itemize}\n\n\\subsubsection{\\textsf{GAMBIT}\\xspace results for the \\textsf{QCDAxion}\\xspace model}\n\nAlthough the \\textsf{GAMBIT}\\xspace axion paper contained results for all of the models described above, we will here concentrate on the \\textsf{QCDAxion}\\xspace results in the interests of brevity. The various parameters (including nuisance parameters) are shown in Table~\\ref{tab:priors:QCDAxion}, along with the chosen priors and prior ranges. For each of the nuisance parameters, the prior range is chosen to cover approximately $-5\\sigma$ to $+5\\sigma$ around the known central value, where $\\sigma$ is the known uncertainty. The range of $E\/N$ values is selected to encompass those encountered in a broad range of previous axion model studies, whilst the range on $f_a$ is driven by the requirement that the range of possible axion masses reaches from very small masses to the largest mass allowed by bounds on hot DM. Our choice of a log prior for $f_a$ is motivated by the fact that the scale is unknown. $C_{aee}$ is explored in a generous range around 1, whilst the causal structure of the early Universe motivates our use of a flat prior on the initial misalignment angle $\\mrm{\\theta}{i}$. 
The local DM density $\\rho_0$ is given the same treatment as in previous \\textsf{GAMBIT}\\xspace studies.\n\n\\begin{table}[tbp]\n\t\\caption{Prior choices for \\textsf{QCDAxion}\\xspace models in \\cite{Axions}.\\label{tab:priors:QCDAxion}}\n\t\\footnotesize\n\t\\centering\n\t\\begin{tabular}{@{}lccc}\n\t\t\\toprule\n\t\t\\textbf{Model} & \\multicolumn{2}{l}{\\textbf{Parameter range\/value}} & \\textbf{Prior type} \\\\\n\t\t\\midrule\n\t\t\\textsf{QCDAxion}\\xspace & \\iuo{f_a}{\\text{G\\eV}\\xspace} & \\prrange{e6}{e16} & log \\\\\n\t\t& \\iuo{\\Lambda_\\chi}{\\text{M\\eV}\\xspace}& \\prrange{73}{78} & flat \\\\\n\t\t& $\\widetilde{C}_{a\\gamma\\gamma}$ & \\prrange{1.72}{2.12} & flat \\\\\n\t\t& $E\/N$ & \\prrange{-1.33333}{174.667} & flat \\\\\n\t\t& $C_{aee}$ & \\prrange{e-4}{e4} & log \\\\\n\t\t& $\\mrm{\\theta}{i}$ & \\prrange{-3.14159}{3.14159} & flat \\\\\n\t\t& $\\beta$ & \\prrange{7.7}{8.2} & flat \\\\\n\t\t& \\iuo{T_\\chi}{\\text{M\\eV}\\xspace}& \\prrange{143}{151} & flat \\\\\n\t\t\\midrule\n\t\tLocal DM density & \\iuo{\\rho_0}{\\text{G\\eV}\\xspace\\per\\centi\\metre^3} & \\prrange{0.2}{0.8} & flat \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\n\\begin{figure}[tbp]\n\t\\centering\n\t{\n\t\t\\includegraphics[width=0.49\\linewidth]{figures\/diver\/plot_2_10M1_QCDAxion_100_6_like2D}\n\t\t\\hfill\n\t\t\\includegraphics[width=0.49\\linewidth]{figures\/diver\/plot_2_10M2_QCDAxion_100_6_like2D}\n\t}\n\t{\n\t\t\\includegraphics[width=0.49\\linewidth]{figures\/diver\/plot_2_10M1_QCDAxion_100_108_like2D}\n\t\t\\hfill\n\t\t\\includegraphics[width=0.49\\linewidth]{figures\/diver\/plot_2_10M2_QCDAxion_100_108_like2D}\n\t}\n\t\\caption{Profile likelihoods~(from \\diver) for \\textsf{QCDAxion}\\xspace models with upper limits~(\\textit{left}) and matching condition~(\\textit{right}) for the observed DM relic density. The upper and lower panels show the constraints on the anomaly ratio, $E\/N$, and the absolute value of the initial misalignment angle, $|\\mrm{\\theta}{i}|$, respectively. From \\cite{Axions}. \\label{fig:QCDAxion:frequentist}}\n\\end{figure}\n\nFig.\\ \\ref{fig:QCDAxion:frequentist} shows profile likelihood distributions in various planes of the \\textsf{QCDAxion}\\xspace parameters, obtained without the presence of WD cooling hints in the combined likelihood. The left panels show the result of imposing the relic density constraint as an upper limit. The exclusion of the low-$f_a$ (high mass) region, except at very low values of the axion-photon coupling (which is related to $E\/N$), arises from the $R$ parameter and CAST results. The slight reduction in the profile likelihood for masses lower than approximately \\SI{0.1}{\\micro\\ensuremath{\\text{e}\\mspace{-0.8mu}\\text{V}}\\xspace} also comes from the $R$ parameter likelihood; for such masses, the maximum allowed value for the axion-electron coupling ($C_{aee} \\leq \\num{e4}$) is not large enough to perfectly satisfy the $R$-parameter constraint. If axions are assumed to saturate the relic abundance of DM (right top panel), the high-mass region is excluded entirely due to the fact that the realignment mechanism cannot produce enough DM. The bottom row of Fig.\\ \\ref{fig:QCDAxion:frequentist} shows the allowed values for the initial misalignment angle. 
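The origin of this pattern can be anticipated from the standard analytic estimate for the realignment relic density, which scales roughly as $\Omega_a h^2 \propto \theta_i^2 f_a^{7/6}$. The sketch below is only a rough order-of-magnitude estimate using a commonly quoted approximation (it is not the numerical treatment used in the \textsf{GAMBIT}\xspace study); it solves for the $|\theta_i|$ needed to produce the full observed abundance as a function of $m_{a,0}$.
\begin{verbatim}
import numpy as np

# Rough misalignment estimate (a commonly quoted approximation, NOT the GAMBIT
# calculation):  Omega_a h^2 ~ 0.12 (f_a / 9e11 GeV)^(7/6) theta_i^2,
# together with  m_a0 = Lambda_chi^2 / f_a.
Lambda_chi_GeV = 0.0755              # ~75.5 MeV, cf. Table tab:priors:QCDAxion
omega_obs = 0.12                     # observed cold dark matter abundance

def theta_i_for_full_abundance(m_a0_eV):
    f_a_GeV = Lambda_chi_GeV ** 2 / (m_a0_eV * 1e-9)
    omega_at_theta_one = 0.12 * (f_a_GeV / 9e11) ** (7.0 / 6.0)
    return np.sqrt(omega_obs / omega_at_theta_one)

for m in [1e-7, 1e-6, 1e-5, 1e-4]:   # zero-temperature masses in eV
    th = theta_i_for_full_abundance(m)
    note = "" if th < np.pi else "  (> pi: quadratic estimate breaks down)"
    print(f"m_a0 = {m:.0e} eV  ->  |theta_i| ~ {th:.2f}{note}")
\end{verbatim}
Small masses correspond to large $f_a$, so saturating the observed abundance forces $|\theta_i|$ to small values, whereas large masses push $|\theta_i|$ towards $\pi$, where this simple quadratic estimate breaks down.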
In the case that axions supply all of DM, we recover the familiar result that $\\left|\\mrm{\\theta}{i}\\right| \\ll 1$, for QCD axion masses of $m_{a,0} \\mathrel{\\rlap{\\lower4pt\\hbox{$\\sim$}}\\raise1pt\\hbox{$<$}} \\SI{0.1}{\\micro\\ensuremath{\\text{e}\\mspace{-0.8mu}\\text{V}}\\xspace}$, a fine-tuning that we will discuss further in the context of a Bayesian analysis.\n\nIn the left panel of Fig.\\ \\ref{fig:cooling:QCDAxion:RelicDens:twoD}, we show the marginalised Bayesian posterior in the $\\Omega_a h^2-m_{a,0}$ plane without WD cooling hints, demonstrating that the scan can find viable parts of the parameter space where axions consistent with all current experimental observations can account for a sizable fraction of dark matter. The situation is similar even when WD cooling hints are included (not shown). One can also observe an interesting bound on the axion mass. If the DM relic density constraint is applied as an upper limit, we find $\\SI{0.73}{\\micro\\ensuremath{\\text{e}\\mspace{-0.8mu}\\text{V}}\\xspace} \\le m_{a,0} \\le \\SI{6.1}{\\milli\\ensuremath{\\text{e}\\mspace{-0.8mu}\\text{V}}\\xspace}$ at 95\\% credibility (equal-tailed interval). Meanwhile, if axions must provide all of the observed dark matter, this changes to $\\SI{0.53}{\\micro\\ensuremath{\\text{e}\\mspace{-0.8mu}\\text{V}}\\xspace} \\le m_{a,0} \\le \\SI{0.13}{\\milli\\ensuremath{\\text{e}\\mspace{-0.8mu}\\text{V}}\\xspace}$.\n\n\\begin{figure}\n \\centering\n {\n \\includegraphics[width=0.49\\linewidth]{figures\/twalk\/plot_2_3042_QCDAxion_100_10_post2D}\n \\includegraphics[width=0.49\\linewidth]{figures\/twalk\/plot_2_3141_QCDAxion_100_107_post2D}\n }\n \\caption{Marginalised posterior for \\textsf{QCDAxion}\\xspace models with the DM relic density constraint treated as an upper limit. \\textit{Left}: constraints on the energy density in axions today, $\\Omega_a h^2$, without the inclusion of WD cooling hints. \\textit{Right}: constraints on the absolute value of the axion-photon coupling, $|g_{a\\gamma\\gamma}|$. This panel also includes WD cooling hints, but they have little impact on the result. For comparison, the right panel also shows the region for which QCD axions are not theoretically possible (red line and shading), as well as the frequentist $2\\sigma$ C.L. constraints on more general ALP models (dashed lines). From \\cite{Axions}. \\label{fig:cooling:QCDAxion:RelicDens:twoD}}\n\\end{figure}\n\nIn the right panel of Fig.\\ \\ref{fig:cooling:QCDAxion:RelicDens:twoD}, we show the marginalised posterior in the $|g_{a\\gamma\\gamma}|-m_{a,0}$ plane with the DM relic density constraint applied as an upper limit, and with WD cooling hints included. Also shown are the {na\\\"ive} bounds on the parameter space that result from phenomenological constraints on \\textsf{GeneralALP}\\xspace models and the maximum value of $E\/N$. The shape of the preferred region is partly formed by the effect of fine-tuning. At low axion masses, this is required to avoid dark matter overproduction, whilst at large axion masses it is required to achieve low values of $|g_{a\\gamma\\gamma}|$ through cancellations between $E\/N$ and $\\tilde{C}_{a\\gamma\\gamma}$. 
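The degree of cancellation required can be made explicit with Eq.~\ref{eq:qcdaxioncouplings2} and $m_{a,0} = \Lambda_\chi^2/f_a$: at a given mass, $|g_{a\gamma\gamma}|$ can only be pushed far below its natural size $\alpha_{\rm EM}/(2\pi f_a)$ by tuning $E/N$ towards $\widetilde{C}_{a\gamma\gamma}$. A minimal sketch (illustrative values only; $\alpha_{\rm EM}\simeq 1/137$ and $\widetilde{C}_{a\gamma\gamma}\simeq 1.92$ are assumed here, not taken from the scan):
\begin{verbatim}
import numpy as np

# Axion-photon coupling from Eq. (eq:qcdaxioncouplings2), with m_a0 = Lambda_chi^2/f_a.
# Illustrative only: alpha_EM and C_tilde below are assumed round numbers.
alpha_EM = 1.0 / 137.0
C_tilde = 1.92                        # approximate axion-pion mixing contribution
Lambda_chi_GeV = 0.0755               # ~75.5 MeV

def g_agamma(m_a0_eV, E_over_N):
    f_a = Lambda_chi_GeV ** 2 / (m_a0_eV * 1e-9)                  # f_a in GeV
    return alpha_EM / (2.0 * np.pi * f_a) * (E_over_N - C_tilde)  # GeV^-1

m = 1e-2                              # a "large" QCD-axion mass of 10 meV
for EoverN in [0.0, 8.0 / 3.0, 1.9, 1.92]:
    print(f"E/N = {EoverN:5.3f}:  |g_agg| = {abs(g_agamma(m, EoverN)):.2e} GeV^-1")
\end{verbatim}
Since ever finer tuning of $E/N$ is needed to keep $|g_{a\gamma\gamma}|$ small as $f_a$ decreases, such points occupy little prior volume, which is the fine-tuning penalty reflected in the posterior.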
The preferred parameter region is localised within a few orders of magnitude in mass around $m_{a,0} \\sim \\SI{100}{\\micro\\ensuremath{\\text{e}\\mspace{-0.8mu}\\text{V}}\\xspace}$ and $g_{a\\gamma\\gamma} \\sim \\SI{e-12}{\\text{G\\eV}\\xspace^{-1}}$.\n\n\\begin{table}[tbp]\n\t\\caption{Prior choices for \\textsf{DFSZAxion-I}\\xspace, \\textsf{DFSZAxion-II}\\xspace and \\textsf{KSVZAxion}\\xspace models in \\cite{Axions}. Note that the priors listed in the first section of the table apply to all three models.\\label{tab:priors:DFSZvsKSVZ}}\n\t\\footnotesize\n\t\\centering\n\t\\begin{tabular}{lcccl}\n\t\t\\toprule\n\t\t\\textbf{Model} & \\multicolumn{2}{l}{\\textbf{Parameter range\/value}} & \\textbf{Prior type} & \\textbf{Comments}\\\\\n\t\t\\midrule\n\t\t& \\iuo{f_a}{\\text{G\\eV}\\xspace} & \\prrange{e6}{e16} & log & Applies to all\\\\\n\t\t& \\iuo{\\Lambda_\\chi}{\\text{M\\eV}\\xspace}& \\prrange{73}{78} & flat & Applies to all\\\\\n\t\t& $\\widetilde{C}_{a\\gamma\\gamma}$ & \\prrange{1.72}{2.12} & flat & Applies to all\\\\\n\t\t& $\\mrm{\\theta}{i}$ & \\prrange{-3.14159}{3.14159} & flat & Applies to all\\\\\n\t\t& $\\beta$ & \\prrange{7.7}{8.2} & flat & Applies to all\\\\\n\t\t& \\iuo{T_\\chi}{\\text{M\\eV}\\xspace}& \\prrange{143}{151} & flat & Applies to all\\\\\n\t\t\\midrule\n\t\t\\textsf{DFSZAxion-I}\\xspace & $E\/N$ & $8\/3$ & delta & \\\\\n\t\t& $\\tan(\\beta')$ & \\prrange{0.28}{140.0} & log & \\\\\n\t\t\\textsf{DFSZAxion-II}\\xspace & $E\/N$ & $2\/3$ & delta & \\\\\n\t\t& $\\tan(\\beta^\\prime)$ & \\prrange{0.28}{140.0} & log & \\\\\n\t\t\\textsf{KSVZAxion}\\xspace & $E\/N$ & $0$, $2\/3$, $5\/3$, $8\/3$ & delta & Discrete\\\\\n\t\t\\midrule\n\t\tLocal DM density & \\iuo{\\rho_0}{\\text{G\\eV}\\xspace\\per\\centi\\metre^3} & \\prrange{0.2}{0.8} & flat \\\\\n\t\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\nThe \\textsf{GAMBIT}\\xspace Bayesian analysis of axion models also includes a model comparison of the \\textsf{QCDAxion}\\xspace, \\textsf{DFSZAxion-I}\\xspace, \\textsf{DFSZAxion-II}\\xspace and \\textsf{KSVZAxion}\\xspace models, based on scans of the latter models that use the priors defined in Table~\\ref{tab:priors:DFSZvsKSVZ}. Bayesian evidence values $\\mathbb{Z}(\\mathcal{M})$ for each model $\\mathcal{M}$ were calculated using the \\MultiNest nested sampling package, before constructing the Bayes factor \\cite{Jeffreys:1939xee,10.2307\/2291091,10.2307\/4356165}\n\\begin{equation}\n\t\\mathcal{B} \\equiv \\frac{\\mathbb{Z}(\\mathcal{M}_1)}{\\mathbb{Z}(\\mathcal{M}_2)} \\equiv \\frac{\\int \\! \\mathcal{L}\\left(\\text{data}\\left| \\right. \\boldsymbol{\\theta_1} \\right) \\pi_1(\\boldsymbol{\\theta_1}) \\, \\mathrm{d} \\boldsymbol{\\theta_1}}{\\int \\! \\mathcal{L}\\left(\\text{data}\\left| \\right. \\boldsymbol{\\theta_2} \\right) \\pi_2(\\boldsymbol{\\theta_2}) \\, \\mathrm{d} \\boldsymbol{\\theta_2}} \\, ,\n\\end{equation}\nwhich relates two models $\\mathcal{M}_1$ and $\\mathcal{M}_2$ with parameters $\\boldsymbol{\\theta_1}$ and~$\\boldsymbol{\\theta_2}$. $\\pi_1$ and $\\pi_2$ are the priors on the parameters of the two models, and $\\mathcal{L}$ is the likelihood. 
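Each evidence $\mathbb{Z}(\mathcal{M})$ is simply the likelihood averaged over the prior, which \MultiNest estimates by nested sampling. Purely as an illustration of the quantity being computed (a toy one-dimensional model, entirely unrelated to the axion analysis), the same ratio can be obtained by brute-force Monte Carlo:
\begin{verbatim}
import numpy as np

# Toy evidence ratio: Z(M) = integral of L(data|theta) * prior(theta) d(theta),
# estimated here by averaging the likelihood over samples drawn from the prior.
# A stand-in for nested sampling, purely for illustration.
rng = np.random.default_rng(1)
data, sigma = 0.7, 0.1                          # one toy "measurement"

def like(theta):                                # Gaussian likelihood around the data
    return np.exp(-0.5 * ((theta - data) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def evidence(prior_width, n=200_000):
    """Average the likelihood over a flat prior theta ~ U(0, prior_width)."""
    return like(rng.uniform(0.0, prior_width, n)).mean()

B = evidence(1.0) / evidence(100.0)             # narrow-prior model vs wide-prior model
print(f"Bayes factor (narrow vs wide prior): {B:.0f} : 1")   # ~100:1, an Occam factor
\end{verbatim}
A model (or parameter region) that fits the data only in a small fraction of its prior volume is penalised by exactly this kind of Occam factor, which is the origin of the fine-tuning penalties discussed throughout this section.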
The Bayes factor is connected to the ratio of posterior probabilities of the models being correct\n\\begin{equation}\n\t\\frac{\\mathcal{P}\\left(\\mathcal{M}_1 \\left| \\text{data} \\right.\\right)}{\\mathcal{P}\\left(\\mathcal{M}_2 \\left| \\text{data} \\right.\\right)} = \\mathcal{B} \\; \\frac{\\pi(\\mathcal{M}_1)}{\\pi(\\mathcal{M}_2)},\n\\end{equation}\nwhere the prior probabilities of the models themselves being correct are given by $\\pi(\\mathcal{M}_1)$ and $\\pi(\\mathcal{M}_2)$. In the following, it is assumed that $\\pi(\\mathcal{M}_1)=\\pi(\\mathcal{M}_2)$, causing the the posterior odds ratio to be equal to the Bayes factor.\n\nWithout cooling hints, the odds ratios for pairs of models provide insufficient evidence to favour any particular scenario. However, if it is demanded that axions solve the DM and WD cooling problems simultaneously, there is a positive preference for the \\textsf{QCDAxion}\\xspace model over the DFSZ- and KSVZ-type models, at a level of about 5:1. This results from the larger $C_{aee}$ values allowed in the \\textsf{QCDAxion}\\xspace model, which peaks at $C_{aee}\\approx 100$ in the one-dimensional marginalised posterior. Such a large coupling may cause a problem for model building. A frequentist analysis of the same scenario allows both the \\textsf{DFSZAxion}\\xspace and \\textsf{KSVZAxion}\\xspace models to be rejected with respect to the \\textsf{QCDAxion}\\xspace model with better than 99\\% confidence; if DM is instead allowed to consist only partially of axions, only \\textsf{KSVZAxion}\\xspace models can be rejected in this way.\n\n\\subsection{Right-handed neutrinos}\n\\subsubsection{Model definition}\nThe addition of right-handed ``sterile'' neutrinos to the SM has been proposed to explain the existence of neutrino flavour oscillations, which imply a non-zero neutrino mass. They also serve an aesthetic theoretical purpose, as neutrinos are the only elementary fermions in the SM to not have both left- and right-handed incarnations. Moreover, sterile neutrinos are permitted to have both a Dirac mass term $\\bar{\\nu_L}M_D\\nu_R$ and a Majorana mass term $\\bar{\\nu_R}M_M\\nu_R^c$, and specific choices of the latter allow sterile neutrinos to solve cosmological problems such as the baryon asymmetry of the Universe~\\cite{Fukugita:1986hr,Akhmedov:1998qx,Asaka:2005pn}, and the DM problem~\\cite{Dodelson:1993je, Shi:1998km}.\n\nA convenient parameterisation of a right-handed neutrino sector is the Casas-Ibarra parametrisation, amended to include 1-loop corrections to the left-handed neutrino mass matrix~\\cite{Casas:2001sr,Lopez-Pavon:2015cga}. This involves writing a matrix that encodes the mixing among left-handed neutrinos (LHNs) and right-handed neutrinos (RHNs) as\n\\begin{align}\n\\Theta = iU_{\\nu}\\sqrt{m_{\\nu}^\\text{diag}}\\mathcal{R}\\sqrt{\\tilde{M}^\\text{diag}}^{-1},\n\\label{CItheta}\n\\end{align}\nwhere $U_{\\nu}$ is the PMNS matrix, $m_{\\nu}^\\text{diag}$ is a diagonalised, one-loop-corrected LHN mass matrix and $\\tilde{M}^\\text{diag}$ is the analogous RHN mass matrix. 
$\\mathcal{R}$ is a complex, orthogonal matrix written as the product\n\\begin{align}\n\\mathcal{R} = \\mathcal{R}^{23}\\mathcal{R}^{13}\\mathcal{R}^{12}\\;,\n\\label{Rorder}\n\\end{align}\nwhere the $\\mathcal{R}^{ij}$ can, in turn, be parameterised by complex angles $\\omega_{ij}$ with\n\\begin{align}\n\\mathcal{R}^{ij}_{ii} &= \\mathcal{R}^{ij}_{jj} = \\cos\\omega_{ij}, \\\\\n\\mathcal{R}^{ij}_{ij} &= -\\mathcal{R}^{ij}_{ji} = \\sin\\omega_{ij}, \\\\\n\\mathcal{R}^{ij}_{kk} &= 1; k \\neq i,j\\;.\n\\end{align}\n\nWorking in the flavour basis in which the Yukawa couplings of the charged leptons are diagonal by construction, the PMNS matrix $U_\\nu$ can be written as\n\\begin{align}\\label{UnuParameterisation}\n U_{\\nu} = V^{23}U_{\\delta}V^{13}U_{-\\delta}V^{12}\\mathrm{diag}(e^{i\\alpha_1\/2},e^{i\\alpha_2\/2},1)\\;,\n\\end{align}\nwhere $U_{\\pm\\delta} = \\mathrm{diag}(e^{{\\mp}i\\delta\/2},1,e^{{\\pm}i\\delta\/2})$\nand $V^{ij}$ is parameterised by the LHN mixing angles $\\theta_{ij}$. The non-zero elements of $V^{ij}$ are analogous to those of $\\mathcal{R}$. $\\alpha_1$, $\\alpha_2$ and $\\delta$ are $CP$-violating phases that are not excluded \\emph{a priori}.\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{l}\n \\toprule\n \\textbf{Parameter} \\\\\n \\midrule\n Active neutrino parameters\\\\\n \\quad$\\theta_{12}$ [rad] \\\\\n \\quad$\\theta_{23}$ [rad] \\\\\n \\quad$\\theta_{13}$ [rad] \\\\\n \\quad$m_{\\nu_0}$ [eV] \\\\\n \\quad$\\Delta m^2_{21}$ $[10^{-5}\\,\\text{eV}^2]$ \\\\\n \\quad$\\Delta m^2_{3l}$ $[10^{-3}\\,\\text{eV}^2]$ \\\\\n \\quad$\\alpha_1$, $\\alpha_2$ [rad] \\\\\n Sterile neutrino parameters\\\\\n \\quad$\\delta$ [rad] \\\\\n \\quad Re $\\omega_{ij}$ [rad] \\\\\n \\quad Im $\\omega_{ij}$ \\\\\n \\quad$M_I$ [GeV] \\\\\n \\quad$R_{\\rm{order}}$ \\\\\n Nuisance parameters \\\\\n \\quad$m_h$ [GeV]\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{The full list of scanned parameters for the \\textsf{GAMBIT}\\xspace right-handed neutrino study \\cite{RHN}.}\n \\label{tab:scanpars}\n\\end{table}\n\n\\begin{figure}[tbp]\n \\centering\n \\includegraphics[width=0.45\\linewidth]{figures\/NH_M_Ue_capped.pdf}\n \\includegraphics[width=0.45\\linewidth]{figures\/IH_M_Ue_capped.pdf}\\\\\n \\includegraphics[width=0.45\\linewidth]{figures\/NH_M_Umu_capped.pdf}\n \\includegraphics[width=0.45\\linewidth]{figures\/IH_M_Umu_capped.pdf}\\\\\n \\includegraphics[width=0.45\\linewidth]{figures\/NH_M_Utau_capped.pdf}\n \\includegraphics[width=0.45\\linewidth]{figures\/IH_M_Utau_capped.pdf}\n \\caption{Profile likelihoods of right-handed neutrino models in the $M_I$ vs $U_{eI}^2$ (top), $M_I$ vs $U_{\\mu I}^2$ (middle) and $M_I$ vs $U_{\\tau I}^2$ (bottom) planes. Results are shown for normal (left) and inverted (right) neutrino mass ordering. From \\cite{RHN}.}\n \\label{fig:M_Ue_capped}\n\\end{figure}\n\n\\begin{figure}[tbp]\n \\centering\n \\includegraphics[width=0.45\\linewidth]{figures\/NH_M_U_capped.pdf}\n \\includegraphics[width=0.45\\linewidth]{figures\/IH_M_U_capped.pdf}\n \\caption{Profile likelihood of right-handed neutrino models in the $M_I$ vs $U_I^2$ plane for normal (left) and inverted (right) neutrino mass ordering. From \\cite{RHN}.}\n \\label{fig:M_U_capped}\n\\end{figure}\n\nA comprehensive, frequentist \\textsf{GAMBIT}\\xspace study of this scenario has recently been completed \\cite{RHN}. The full list of parameters considered in the \\textsf{GAMBIT}\\xspace RHN study is given in Table~\\ref{tab:scanpars}. 
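To make the parameterisation concrete, the sketch below assembles $\Theta$ from Eqs.~\ref{CItheta}--\ref{UnuParameterisation} at tree level. It is an illustration only: all numerical inputs are arbitrary example values, and the one-loop correction to $m_\nu^{\rm diag}$ included in the actual analysis is omitted.
\begin{verbatim}
import numpy as np

def rot(i, j, omega):
    """3x3 rotation in the (i, j) plane by the (possibly complex) angle omega."""
    R = np.eye(3, dtype=complex)
    c, s = np.cos(omega), np.sin(omega)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

# Active-neutrino inputs (normal ordering; illustrative values only)
th12, th13, th23 = 0.59, 0.15, 0.84              # PMNS mixing angles [rad]
delta, alpha1, alpha2 = 1.2, 0.0, 0.0            # CP phases [rad]
m_nu = np.diag([0.0, 8.6e-12, 5.0e-11])          # light masses [GeV] (0, ~8.6 meV, ~50 meV)

U_pd = np.diag([np.exp(-0.5j*delta), 1.0, np.exp(0.5j*delta)])   # U_{+delta}
U_md = np.conj(U_pd)                                             # U_{-delta}
U_nu = (rot(1,2,th23) @ U_pd @ rot(0,2,th13) @ U_md @ rot(0,1,th12)
        @ np.diag([np.exp(0.5j*alpha1), np.exp(0.5j*alpha2), 1.0]))

# Sterile-sector inputs (illustrative values only)
M = np.diag([1.0, 2.0, 5.0])                     # heavy masses M_I [GeV]
w12, w13, w23 = 0.3 + 1.5j, 0.1 + 0.2j, 0.4 + 0.1j
R = rot(1,2,w23) @ rot(0,2,w13) @ rot(0,1,w12)   # R = R^23 R^13 R^12 (Eq. Rorder)

# Eq. (CItheta) at tree level
Theta = 1j * U_nu @ np.sqrt(m_nu) @ R @ np.diag(1.0 / np.sqrt(np.diag(M)))
U2 = np.abs(Theta) ** 2                          # couplings U^2_{alpha I}
print("U^2_I (summed over flavours):", U2.sum(axis=0))
\end{verbatim}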
Separate scans were done for the cases of a normal and an inverted mass hierarchy, and the scanning strategy used a number of carefully targeted scans to ensure that the regions near the various experimental bounds were convergently sampled. The scans used the full range of likelihoods implemented in \\textsf{NeutrinoBit}\\xspace, documented in Section~\\ref{sec:neutrinobit}, in addition to the extra routines described in \\textsf{FlavBit}\\xspace, \\textsf{DecayBit}\\xspace and \\textsf{PrecisionBit}\\xspace. The main analysis used a capped likelihood, in which each point is forced to have a likelihood equal to or worse than that of the SM ($\\mathcal{L} = \\min[\\mathcal{L}_{\\rm{SM}}, \\mathcal{L}_{\\rm{RHN}}]$). This is due to a number of excesses in individual experimental observations that --- although combining to give a small overall significance --- would bias the presentation of exclusion limits on RHN parameters. In this review, we concentrate on the resulting limits on RHNs, and direct the reader to the original study~\\cite{RHN} for a detailed discussion of the excesses.\n\nThe 1-loop Casas-Ibarra parameterisation used in the \\textsf{GAMBIT}\\xspace analysis \\cite{RHN} is valid for seesaw scenarios where the active-sterile mixing, $|\\Theta|^2$, is small. In principle, additional $|\\Theta|^4$ corrections could be expected in low-scale seesaw scenarios, and could only be captured by an exact expansion (e.g.\\ Schechter-Valle \\cite{Schechter:1980gr}). Nevertheless, the loss of generality in the Casas-Ibarra approximation is outweighed by its numerical and computational benefits. This parameterisation allows one to explicitly choose the masses of both active and sterile neutrinos, and is automatically consistent with oscillation data, allowing oscillation parameters to be easily treated as Gaussian nuisance parameters. Alternative parameterisations would constitute a different effective prior on both the sterile and active neutrino parameters. Because the \\textsf{GAMBIT}\\xspace analysis is based on profile likelihoods, which are by construction prior-independent, switching parameterisation would only have the impact of making sampling less efficient, rather than causing any physical or statistical effect.\n\n\\subsubsection{RHN results}\n\nIn Fig.\\ \\ref{fig:M_Ue_capped}, we show, as functions of the heavy neutrino masses $M_I$, the constraints on the couplings $U^2_{\\alpha I}$ to the active neutrino flavours $\\alpha = (e,\\mu,\\tau)$; in Fig.\\ \\ref{fig:M_U_capped} we also show the overall constraints on their combination $U^2_I=\\sum_\\alpha U^2_{\\alpha I}$. The index $I$ can refer to any of the heavy neutrino flavours $I=(1,2,3)$, as their labelling has no physical significance. The profile likelihood is mostly flat at low values of the couplings, but exhibits characteristic drop-offs at higher values that result from specific experimental observations. The dominant constraint varies with the RHN mass.\n\nAbove the masses of the weak gauge bosons, direct searches at colliders are not relevant, and the leading constraints on the RHN properties come from electroweak precision observables, CKM measurements and searches for lepton flavour violation (LFV). 
The upper limits on the $\\tau$ couplings are much larger than those on the $e$ and $\\mu$ couplings, due to the fact that the EWPO and LFV limits are stronger for the $e$ and $\\mu$ flavours.\n\nWhen $M_I$ is between the $D$ meson and $W$ boson masses, direct search experiments dominate, as RHNs are efficiently produced via the $s$-channel exchange of on-shell $W$ bosons. The DELPHI and CMS results compete to impose the strongest bound in this region.\n\nBelow the $D$ meson mass, the dominant constraints come from direct searches at beam-dump experiments, in particular CHARM and NuTeV (above the kaon mass), PS-191 and E949 (between the pion and the kaon mass), and pion decay experiments at even lower mass. In the case of the $\\tau$ couplings, the direct search constraints are much weaker, and the most significant constraint instead comes from DELPHI searches for long-lived particles.\n\nFor $M_I$ values below 0.3\\,GeV, the global constraints are stronger than the sum of the individual contributions, due to an interplay between the lower bound from BBN, the upper bounds from direct searches and the constraints on RHN mixing from neutrino oscillation data (which disfavour large hierarchies amongst the couplings to individual SM flavours). The BBN lifetime constraint does not have an observable effect on the individual couplings, but it does force their combination to be greater than a certain value (as seen in Fig.\\ \\ref{fig:M_U_capped}).\n\nFinally, we point out that although this analysis included the active neutrino oscillation likelihoods contained in \\textsf{NeutrinoBit}, based on the results of \\textsf{NuFit} \\cite{NuFit15}, the results would be essentially unchanged if these were replaced with nuisance likelihoods from either of the other main 3-flavour neutrino fitting groups \\cite{deSalas:2017kay,Capozzi:2018ubv}. This is because the results of all three groups are highly consistent, and the preferred parameter region is where the approximate $B-L$ symmetry holds and the oscillation constraints are essentially irrelevant. As the fits allow $m_{\\nu_0}\\to 0$, there is no lower limit implied on $M_I$ from oscillation data, but rather only from BBN (the effects of which were modelled under the massless neutrino approximation).\n\n\n\\section{Summary}\n\\label{sec:summary}\n\n\\textsf{GAMBIT}\\xspace is an open-source software framework for combining all relevant experimental constraints on theories for physics Beyond the Standard Model. It includes extensive libraries of theory and likelihood routines for dark matter, flavour, collider, neutrino and precision observables, along with spectrum and decay calculations, all for a range of popular and highly plausible theories of new physics. It features an array of different statistical samplers, a hierarchical model database, an automated engine for building calculations based on graph theory, and the ability to connect to external physics calculators with ease.\n\nIn the two years since its release, \\textsf{GAMBIT}\\xspace has produced seven global analyses of leading theories for physics beyond the Standard Model. In supersymmetry, this includes analyses of the CMSSM, NUHM1, NUHM2, a 7-parameter weak-scale MSSM, and an electroweakino effective field theory known as the EWMSSM. Results indicate a $3.3\\sigma$ combined preference in LHC searches for weak production of light charginos and neutralinos. 
In Higgs portal dark matter, \\textsf{GAMBIT}\\xspace results cover $\\mathbb{Z}_2$ and $\\mathbb{Z}_3$-symmetric scalar singlet models, as well as $\\mathbb{Z}_2$-symmetric vector and fermion models. All can provide good fits to experimental constraints, but fermionic models strongly prefer to violate $CP$. Scalar models can not only solve the dark matter problem, but can also stabilise the vacuum of the standard model if and only if they possess TeV-scale masses and respect a $\\mathbb{Z}_2$ symmetry. GAMBIT studies indicate that QCD axions are most likely to constitute a fraction of dark matter rather than the entire amount, and to reside in a mass window between about $10^{-7}$ and $10^{-3}$\\,eV. Right-handed neutrinos are constrained by a wide array of different searches at different masses; interactions with electrons and muons are the most strongly constrained, with constraints on couplings to tau leptons somewhat weaker.\n\n\\textsf{GAMBIT}\\xspace is a powerful tool for testing theories of new physics. The code can be obtained from \\url{https:\/\/gambit.hepforge.org}. All samples, input files and benchmark points resulting from the \\textsf{GAMBIT}\\xspace physics studies discussed in this review can also be obtained from \\textsf{Zenodo}, by visiting \\url{https:\/\/zenodo.org\/communities\/gambit-official}.\n\n\\section*{Acknowledgements}\nWe thank our collaborators within the \\textsf{GAMBIT}\\xspace Community for their essential and extensive contributions to the work reviewed in this article, and Tom\\'as Gonzalo in particular for comments on right-handed neutrinos. The majority of plots presented in the review were produced with \\pippi \\cite{pippi}. We acknowledge PRACE for awarding us access to Marconi at CINECA, Italy.\n\n\\bibliographystyle{JHEP_pat}\n\n\n\\section{INTRODUCTION}\nVisual place recognition in changing environments is the problem of finding matchings between two sets of observations, database and query, despite severe appearance changes.\nIt is required for loop closure detection in SLAM (Simultaneous Localization and Mapping) and for candidate selection in image-based 6D localization systems.\nThe common pipeline for visual place recognition in changing environments is shown in Fig.~\\ref{fig:intro} (top):\nGiven the two sets of images, database (the ``map'') and query (later or current run), a descriptor is computed for each image.\nSubsequently, each descriptor of the database is compared pairwise to each descriptor of the query to get a similarity matrix $\\hat{S}^{DB\\times Q}$ that can finally be used to determine potential place matches.\nAs illustrated in Fig.~\\ref{fig:intro} (bottom), there are various additional approaches to improve the place recognition pipeline either by preprocessing the descriptors, e.g., with feature standardization, or by postprocessing the similarity matrix, e.g., with sequence search in the similarity matrix.\nIn addition to these pre- and postprocessing steps, additional information can be exploited like database image poses or the so far rarely used intra-database and intra-query similarities.\nHowever, there are only few methods that exploit such additional knowledge in order to further improve the place recognition performance.\n\nIn this paper, we address the problem of systematically exploiting additional information by proposing a versatile, expandable framework for different kinds of prior knowledge from or about database and query.\nSpecifically, 
we\n\\begin{itemize}\n \\item propose a versatile, expandable graph-based framework that formulates place recognition as a sparse non-linear least squares optimization problem in a factor graph (Sec.~\\ref{sec:algo})\n \\item discuss different sources of information in place recognition problems and propose a loop-rule and an exclusion-rule to exploit inherent structural properties of the place recognition problem (Sec.~\\ref{sec:binary_factor})\n \\item demonstrate how this framework can be used to integrate the different sources of information, e.g., we provide implementations of the loop- and exclusion-rules in terms of cost functions for factors. These either exploit intra-set descriptor similarities within database and\/or query, or, if available, additional knowledge about database image poses (Sec.~\\ref{sec:binary_factor})\n \\item demonstrate the versatility of the framework by proposing an n-ary factor that mimics the sequence processing approach of SeqSLAM \\cite{seqSLAM} (Sec.~\\ref{sec:nary_factor})\n \\item present the optimization using standard non-linear least squares optimization tools (Sec.~\\ref{sec:optim})\n \\item provide comprehensive experimental evaluation on a variety of datasets, configurations and existing methods from the literature (Sec.~\\ref{sec:results})\n\\end{itemize}\n\nThe paper closes with a discussion of the current implementation and potential extensions in Sec.~\\ref{sec:discussion}.\n\n\\begin{figure}[tb]\n \\centering \n \\includegraphics[width=1\\linewidth]{images\/pr_pipeline.pdf}\n \\vspace{-0.7cm}\n \\caption{The basic place recognition pipeline (above the horizontal dashed line) can be extended with additional information (below this line). Established approaches are standardization of descriptors and sequence processing. We propose a graph-based framework to integrate various sources of additional information in a common optimization formulation. 
An example for so far rarely used information are intra-set similarities.} \n \\vspace{-0.4cm}\n \\label{fig:intro}\n\\end{figure}\n\n\n\\section{RELATED WORK}\n\\label{sec:related_work}\nVisual place recognition in changing environments is a subject of active research.\nAn overview of existing methods is given in \\cite{Lowry2016}.\nIn the present paper, the basic source of information to match places are image descriptors.\nThe authors of \\cite{Suenderhauf2015} demonstrated that intermediate CNN-layer outputs like the \\textit{conv3}-layer from AlexNet \\cite{alexnet} trained for image classification can serve as holistic image descriptors to match places across condition changes between database and query.\nMoreover, there are CNNs especially trained for place recognition that return either holistic image descriptors like NetVLAD \\cite{netvlad} or local features like DELF \\cite{delf}.\nThe performance of holistic descriptors can be further improved by additional pre- and postprocessing steps.\nTo improve the performance of holistic descriptors, feature standardization and unsupervised learning techniques like PCA- and clustering-based methods can be used \\cite{Schubert2020}.\nIn \\cite{Neubert2019} it is shown how a neurologically-inspired neural network can be used to combine several descriptors along a sequence to a new descriptor for each place.\n\\cite{Schubert2019} extended this approach to encode additional odometry information in the new descriptors.\nGiven the final pairwise similarities between descriptors from database and query, sequence-based methods \\cite{seqSLAM}\\cite{hmm} for postprocessing can be used to improve the place recognition performance further.\n\nIn this paper, we propose a graph-based approach to optimize the descriptor similarities by incorporating prior knowledge.\nIn \\cite{hmm} Hansen and Browning used a hidden Markov model (HMM) to formulate a graph-based sequence search method in the similarity matrix.\nNaseer et al. \\cite{Naseer14} used a flow network with edges defined for sequence search to improve place recognition results.\nIn \\cite{Zhang2019} a graph is used to prevent place matches between adjacent places and to distribute high matching scores to neighboring places.\nIn contrast to these approaches, our approach exploits not only sequence information but also integrates other additional knowledge, e.g., about intra-set similarities in the database set or the query set.\nThe literature provides several approaches for localization where the database is known in advance.\nFor example, given the images from the database (or another representative training set), FabMap \\cite{fabmap} learns statistics about feature occurrences in order to determine the most descriptive features.\nMcManus et al. 
\\cite{McManus2014} train offline condition-invariant broad-region detectors from beforehand collected images with a variety of appearances at particular locations.\nIn \\cite{Neubert2015} intra-database descriptor comparisons are used to reduce the number of required image comparisons during the query run.\nVysotska and Stachniss \\cite{Vysotska2017} use binary intra-database place matches to perform jumps within the similarity matrix during sequence search.\nIn contrast, in our presented approach we use potentially continuous intra-database similarities to optimize the place recognition result instead of just accelerating it.\nMoreover, we do not only use this information to find loops but in addition to potentially inhibit wrong loop closures.\n\nGraphical models, and in particular factor-graphs, are a well established tool in mobile robotics \\cite{Dellaert2017}, e.g., in the form of robust pose graph SLAM \\cite{Sunderhauf2012}.\nSimilar to the here proposed approach, pose graph SLAM represents each place as a node and the edges (factors) model constraints between these places. \nHowever, a significant difference is that pose graph SLAM deals with spatial information, i.e., the places are represented by pose coordinates in the world and the factors are spatial transformations between these poses (e.g., odometry or detected loop closures). \nIn contrast, our presented approach is intended to be used \\textit{before} SLAM to establish loop closures. \nIn particular, we do not optimize metric errors between spatial constraints but errors in the mutual consistency of descriptor similarities.\n\n\n\\section{ARCHITECTURE OF THE GRAPHICAL MODEL}\\label{sec:algo}\nA graphical model serves as a structured representation of prior knowledge in terms of dependencies, rules, and available information.\nGiven the variable nodes in the graph with their initial values together with dependencies between nodes based on additional knowledge, optimization algorithms can be used to modify the variables in order to resolve inconsistencies between nodes.\nWe are going to exploit this mechanism and present a graph-based framework for visual place recognition that optimizes the initial similarity values $\\hat{S}^{DB\\times Q}$ from pairwise image descriptor comparisons.\n\nThe graph-based framework is expressed as factor graph.\nFactor graphs are graphical representations of least squares problems -- for a detailed introduction to factor graphs please refer to \\cite{Dellaert2017}.\nThe graph's basic architecture consists of nodes with unary factors that penalize deviations from the initial similarity values.\nDepending on additional knowledge, we can add different factors in the graph to introduce connections (i.e. dependencies) between nodes.\nTwo architectures with nodes, unary factors, and additional factors are shown in Fig.~\\ref{fig:graph_intra_set}. \nEach factor defines a quadratic cost function based on the variables it connects. 
\nThe resulting combined optimization problem is defined as a weighted sum of the individual cost functions.\nThe optimization is subject of Sec.~\\ref{sec:optim}.\nHere, we first define the basic architecture of the graph with corresponding nodes and unary factors.\nNext, we propose two ways to exploit prior knowledge with binary and n-ary factors.\nWe will structure the explanation of each factor by the \\textit{prior knowledge} that can be exploited, a corresponding \\textit{rule} that expresses the resulting dependency between nodes, a proposed related \\textit{cost function} $f(\\cdot)$ that punishes a violation of this rule, and the factor's \\textit{usage} in the graph.\nExcept for the unary factor, each factor is optional and depends on the available knowledge.\nAccordingly, the proposed framework can be extended in future work with additional factors.\n\n\\subsection{Graph nodes}\\label{sec:algo:nodes}\nThe graph-based framework is designed to optimize the initial pairwise descriptor similaritiy matrix $\\hat{S}^{DB\\times Q} \\in \\mathbb{R}^{M\\times N}$.\n$M$ is the number of database images and $N$ the number of query images.\nAccordingly, we define $M\\times N$ nodes where each node $s_{ij} \\in S^{DB\\times Q}$ is a variable version of its initial value $\\hat{s}_{ij} \\in \\hat{S}^{DB\\times Q}$ with $s_{ij},\\hat{s}_{ij} \\in [0,1]$.\n\n\\subsection{Unary factor} \\label{sec:unary_factor}\n\\subsubsection{Prior Knowledge}\nDescriptor similarities $\\hat{s}_{ij}$ are the primary source of information for place recognition.\nBeyond initializing the variables to these similarities, we have to prevent too large deviations of $s_{ij}$ from $\\hat{s}_{ij}$, in particular to prevent trivial solutions during optimization.\n\n\\subsubsection{Rule ''prior``}\n\\begin{align}\\label{eq:rule_unary}\n s_{ij} \\approx \\hat{s}_{ij}\n\\end{align}\n\n\\subsubsection{Cost function ''prior``}\n\\begin{align}\\label{eq:cost_unary}\n f_\\text{prior}(\\cdot) = (s_{ij} - \\hat{s}_{ij})^2\n\\end{align}\n\n\\subsubsection{Usage}\nEach node $s_{ij}$ is connected with a single unary factor to its initial value $\\hat{s}_{ij}$ as shown in Fig.~\\ref{fig:graph_intra_set}.\nThus, $M\\times N$ unary factors are used in a graph.\n\n\\subsection{Binary factors for the exploitation of intra-database or intra-query similarities from poses or descriptor-comparisons}\\label{sec:binary_factor}\n\n\\subsubsection{Prior Knowledge -- intra-database similarities from poses}\nIn some applications, the database is created with a more advanced sensor setup than the query.\nDue to the missing position information for the query images, direct position-based matching cannot be conducted.\nNevertheless, available poses for database images can be used to create binary intra-database similarities $\\hat{s}^{DB}_{ij} \\in \\hat{S}^{DB}$ that encode whether two database images $i$ and $j$ show the same place ($\\hat{s}^{DB}_{ij}:=1$) or different places ($\\hat{s}^{DB}_{ij}:=0$).\n\n\\subsubsection{Prior Knowledge -- intra-set similarities from image descriptors}\nEven if position data is not available, we can acquire similar information directly from image comparisons within the database and also within the query to get intra-database similarities $\\hat{s}^{DB}_{ij} \\in \\hat{S}^{DB}$ and intra-query similarities $\\hat{s}^Q_{ij} \\in \\hat{S}^Q$ with $\\hat{s}^{DB}_{ij}, \\hat{s}^Q_{ij} \\in [0,1]$.\nThese intra-set similarities are an inherent and almost always available source of additional information, which has not 
been used often yet.\nThe computation of intra-set similarities can usually be done more reliably than the computation of inter-set similarities, because the condition \\textit{within} database or query is potentially more constant than between both. \nThe intra-set image comparisons could be done with methods like FabMap~\\cite{fabmap} that are more suited for place recognition under constant condition.\nFig.~\\ref{fig:s_matrices} shows how the intra-set similarities $\\hat{S}^{DB}$, $\\hat{S}^Q$ are related to the inter-set similarities $\\hat{S}^{DB\\times Q}$ and how they can reveal loops and zero-velocity stages.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{images\/S_matrices.pdf}\n \\vspace{-0.2cm}\n \\caption{\n The information of the similarity matrix between database and query (bottom right) can be extended with similarity information of images \\textit{within} the query set (top right) and\/or \\textit{within} the database set (bottom left).}\n \\vspace{-0.2cm}\n \\label{fig:s_matrices}\n\\end{figure}\n\nWe define separate binary factors for both intra-set similarities, either from poses or from image descriptors.\nEach binary factor is a combination of two complementing rules and cost functions $f^Q_\\text{loop}(\\cdot)+f^Q_\\text{exclusion}(\\cdot)$ (Eq.~(\\ref{eq:cost_sim_q}), (\\ref{eq:cost_dissim_q})) and $f^{DB}_\\text{loop}(\\cdot)+f^{DB}_\\text{exclusion}(\\cdot)$ (Eq.~(\\ref{eq:cost_sim_db}), (\\ref{eq:cost_dissim_db})), respectively.\n\n\\subsubsection{Rule ''loop``}\nIf $\\hat{s}^{Q}_{jl}$ is high (indicated by ''$\\hat{s}^{Q}_{jl}\\uparrow$``), the $j$-th and $l$-th query image likely show the same place.\nIf the $i$-th database image is compared to these two query images, the following ternary relation can be derived:\n\\begin{align}\\label{eq:rule_sim2}\n s_{ij} \\approx s_{il} &\\text{, iff } \\hat{s}^Q_{jl}\\uparrow\n\\end{align}\n\nThe rule expresses that ''$s_{ij}$ should be equal to $s_{il}$ iff the query images $j$, $l$ show the same place``, because if both query images $j$, $l$ show the same place, database image $i$ can either be equal to both or to none of both.\nAccordingly, this rule exploits loops within the intra-query similarities.\nThis rule is inherent to the place recognition problem and valid as well for intra-database similarities:\n\\begin{align}\\label{eq:rule_sim1}\n s_{ij} \\approx s_{kj} &\\text{, iff } \\hat{s}^{DB}_{ik}\\uparrow\n\\end{align}\n\n\\subsubsection{Cost function ''loop``}\nFor equation (\\ref{eq:rule_sim2}) and (\\ref{eq:rule_sim1}), cost functions similar to (\\ref{eq:cost_unary}) can be formulated:\n\\begin{align}\n f^Q_\\text{loop}(\\cdot) &= \\hat{s}^Q_{jl} \\cdot (s_{ij} - s_{il})^2 \\label{eq:cost_sim_q} \\\\\n f^{DB}_\\text{loop}(\\cdot) &= \\hat{s}^{DB}_{ik} \\cdot (s_{ij} - s_{kj})^2 \\label{eq:cost_sim_db} \n\\end{align}\nThe quadratic error term is multiplied by the intra-set similarity $\\hat{s}^Q_{jl}$ or $\\hat{s}^{DB}_{ik}$ to apply weighting in case of non-binary intra-set similarities from image descriptors or to turn it on and off for binary intra-set similarities.\n\n\\subsubsection{Rule ''exclusion``}\nIf $\\hat{s}^{Q}_{jl}$ is low (indicated by ''$\\hat{s}^{Q}_{jl}\\downarrow$``), the $j$-th and $l$-th query image probably show different places.\nIf the $i$-th database image is compared to these two query images, the following ternary relation can be derived:\n\\begin{align}\\label{eq:rule_dissim2}\n \\neg(s_{ij}\\uparrow \\land s_{il}\\uparrow) \n &\\text{, iff } 
\\hat{s}^Q_{jl}\\downarrow\n\\end{align}\nThis rule expresses that ''not both similarity measurements $s_{ij}$ AND $s_{il}$ can be high iff the query images $j$, $l$ show different places``, i.e., the rule excludes one or both similarities $s_{ij}$, $s_{il}$ from being high; otherwise a single database image $i$ would show two different places concurrently.\nAgain, this rule is inherent to the place recognition problem and is supposed to add valuable information that can be exploited.\nAs before, the same rule is valid for intra-database similarities:\n\\begin{align}\\label{eq:rule_dissim1}\n \\neg(s_{ij}\\uparrow \\land s_{kj}\\uparrow) \n &\\text{, iff } \\hat{s}^{DB}_{ik}\\downarrow\n\\end{align}\n\n\\subsubsection{Cost function ''exclusion``}\nIt is less natural how to express the \\textit{rule ''exclusion``} in a cost function. This is part of the discussion in Sec.~\\ref{sec:discussion}.\nIn this work, we define the following cost functions for equation (\\ref{eq:rule_dissim2}) and (\\ref{eq:rule_dissim1}):\n\\begin{align}\n f^Q_\\text{exclusion}(\\cdot) &= (1-\\hat{s}^Q_{jl}) \\cdot (s_{ij} \\cdot s_{il})^2 \\label{eq:cost_dissim_q} \\\\\n f^{DB}_\\text{exclusion}(\\cdot) &= (1-\\hat{s}^{DB}_{ik}) \\cdot (s_{ij} \\cdot s_{kj})^2 \\label{eq:cost_dissim_db}\n\\end{align}\nThe quadratic error term is multiplied by the negated intra-set similarity $(1-\\hat{s}^Q_{jl})$ or $(1-\\hat{s}^{DB}_{ik})$ to weight it in case of non-binary intra-set similarities from image descriptors or to turn it on and off for binary intra-set correspondences.\n\n\\subsubsection{Usage}\nFig.~\\ref{fig:graph_intra_set} (left) shows a graphical model with nodes $s_{ij}$, unary factors $f_\\text{prior}(\\cdot)$, and the proposed binary factors.\nTo apply the binary factors for existing intra-database similarities $\\hat{S}^{DB}$, each node $s_{ij}$ has to be connected to every node $s_{kj}$ for all $k = 1,\\ldots,M$ with $k\\neq i$ within each column in $S^{DB\\times Q}$; i.e., $\\binom{M}{2}$ factors per column.\nFor existing intra-query similarities $\\hat{S}^Q$, each node $s_{ij}$ has to be connected to every node $s_{il}$ for all $l = 1,\\ldots,N$ with $l\\neq j$ within each row in $S^{DB\\times Q}$; i.e., $\\binom{N}{2}$ factors per row.\nThe potentially high number of connections is part of the discussion in Sec.~\\ref{sec:discussion}.\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=\\linewidth]{images\/graph.pdf}\n \\caption{Illustration of the graph structure. \\textit{(left)} Unary and binary factors. \\textit{(right)} Unary and n-nary factors, which connect structured local blocks.}\n \\label{fig:graph_intra_set}\n\\end{figure}\n\n\\subsection{N-ary factors for the exploitation of sequences}\\label{sec:nary_factor}\n\n\\subsubsection{Prior Knowledge}\nIf both database and query are recorded as spatio-temporal sequence, sequences also appear in the inter-set similarities $S^{DB\\times Q}$ (see Fig.~\\ref{fig:s_matrices}).\nIn this case, SeqSLAM \\cite{seqSLAM} showed the benefit from a simple combination of similarities of neighboring images.\nOriginally, it was implemented as postprocessing of the similarity matrix $S^{DB\\times Q}$. 
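For intuition, the following minimal Python sketch (our illustration only; the function name, the candidate velocity set and the boundary handling are assumptions, and SeqSLAM's local contrast normalization is omitted) scores a single entry of the similarity matrix by averaging similarities along line segments of different slopes through it:
\\begin{verbatim}
import numpy as np

def sequence_score(S, i, j, L=11, velocities=(0.8, 1.0, 1.25)):
    # S: (M, N) matrix of database-query similarities
    # i, j: database and query indices at the segment centre
    # L: sequence length; velocities: candidate slopes v_p
    M, N = S.shape
    half = L // 2
    best = 0.0
    for v in velocities:
        vals = []
        for step in range(-half, half + 1):
            q = j + step                   # query index along the segment
            d = int(round(i + v * step))   # database index follows slope v
            if 0 <= q < N and 0 <= d < M:
                vals.append(S[d, q])
        if vals:
            best = max(best, float(np.mean(vals)))
    return best
\\end{verbatim}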
\nWe can integrate such sequential information within the graph in a similar fashion using n-ary factors.\n\n\\subsubsection{Rule ''sequence``}\n\\begin{align}\\label{eq:rule_seq}\n s_{ij} \\approx \\frac{1}{L}\\sum_{\\forall \\{k,l\\} \\in \\text{Seq}({i,j};L,v_p)} s_{kl}\n\\end{align}\n$\\text{Seq}({i,j};L,v_p)$ is a function that returns similarities that are part of a sequence along a line segment with slope $v_p$ and sequence length $L$ within $S^{DB\\times Q}$ around $s_{ij}$.\nFor a full explanation how SeqSLAM works, please refer to \\cite{seqSLAM}.\n\n\\subsubsection{Cost function ''sequence``}\n\n\\begin{align}\n f_\\text{seq}(\\cdot) = (s_{ij} - \\frac{1}{L}\\max_{v_p \\in V}{(\\sum_{\\{k,l\\} \\in \\text{Seq}(\\{i,j\\};L,v_p)} s_{kl})})^2\n\\end{align}\nwith $V$ being the set of allowed velocities within $S^{DB\\times Q}$.\n\n\\subsubsection{Usage}\nFig.~\\ref{fig:graph_intra_set} (right) shows a graphical model with nodes $s_{ij}$, unary factors $f_\\text{prior}(\\cdot)$, and the proposed n-ary factors for sequences.\nAs for the unary factors in Sec.~\\ref{sec:unary_factor}, each node in the graph is equipped with one n-ary factor.\nEach n-ary factor connects its node to all nodes that are part of any sequence with length $L$ and slope $v_p\\in V$.\nAccordingly, $M\\times N$ n-ary factors are introduced into a graph if sequences are exploited.\n\n\\section{OPTIMIZATION OF THE GRAPHICAL MODEL}\\label{sec:optim}\nThe objective of the graph-based framework is the formulation of dependencies from prior knowledge for a subsequent optimization of the similarities $s_{ij}$ to incorporate the prior knowledge into the final similarity values, and to potentially resolve contradictory dependencies.\n\nWe use factor graphs as graphical representation of least squares problems and defined every cost function for each type of factor as (weighted) quadratic error function.\nAccordingly, the global error $E$ is defined as a sum over the (weighted) cost functions of all factors in the graph.\n\n\\subsection{Weighting of factor's costs for global error computation in the graph}\nAs usually done in error computation for a graph, costs from different factor-types have to be balanced by weighting them separately.\nTherefore, we normalize the cost of each factor by the number of factors per factor-type, and introduce user-specified weights $w$ for all factors except for the basic unary factor (which gets weight 1).\n\nFor the unary factor of our basic architecture with cost function $f_\\text{prior}(\\cdot)$, we define the partial global error $E_\\text{prior}$ with\n\\begin{align}\n E_\\text{prior} = \\frac{1}{MN} \\sum_{i=1}^M \\sum_{j=1}^N f_\\text{prior}(s_{ij})\n\\end{align}\n\nIn case of dependencies in the graph that are introduced by available intra-database or intra-query similarities, we define a partial global error $E^{DB}_{loop,exclusion}$ or $E^Q_{loop,exclusion}$ for the weighted summation over all binary factors (Sec.~\\ref{sec:binary_factor}): \n\\begin{align}\n E^{DB}_\\text{loop,exclusion} = &\\frac{1}{N\\binom{M}{2}} \\sum_{j=1}^N \\sum_{i=1}^{M-1} \\sum_{k=i+1}^M w^{DB}_\\text{loop} \\cdot f^{DB}_\\text{loop}(s_{ij}, s_{kj}) \\nonumber\\\\\n &+ w^{DB}_\\text{exclusion} \\cdot f^{DB}_\\text{exclusion}(s_{ij}, s_{kj})\n \\\\\n E^Q_\\text{loop,exclusion} = &\\frac{1}{M\\binom{N}{2}} \\sum_{i=1}^M \\sum_{j=1}^{N-1} \\sum_{l=j+1}^N w^Q_\\text{loop} \\cdot f^Q_\\text{loop}(s_{ij}, s_{il}) \\nonumber\\\\\n &+ w^Q_\\text{exclusion} \\cdot f^Q_\\text{exclusion}(s_{ij}, 
s_{il})\n\\end{align}\n\nFor the n-ary factors (Sec.~\\ref{sec:nary_factor}) in case of sequence exploitation, the partial global error $E_\\text{seq}$ is defined with\n\\begin{align}\n E_\\text{seq} = \\frac{1}{MN} \\sum_{i=1}^M \\sum_{j=1}^N w_\\text{seq} \\cdot f_\\text{seq}(s_{ij}, \\text{Seq}(\\{i,j\\};L,v^*_{ij})\n\\end{align}\nwith $v^*_{ij}$ being the optimal velocity (slope) with the highest mean of connected similarities around $s_{ij}$.\n\nFinally, summation over all partial global errors $E_i$ that occur in the graph yields the global error $E$:\n\\begin{align}\n E = \\sum_{\\forall i} E_i\n\\end{align}\n\n\n\\subsection{Implementation of the optimization}\nError $E$ is a sum solely over quadratic cost functions.\nThus, many tools for non-linear least squares (NLSQ) optimization can be used (e.g., scipy's \\textit{least\\_squares}-function in Python).\nFor an easier formulation of the optimization problem, factor graph tools like g2o \\cite{g2o} can be used.\nHowever, one should be aware that depending on the introduced factors, the optimization problem can get quite huge.\nThus, efficient implementations should be preferred that perform a quick and memory efficient optimization.\nSec.~\\ref{sec:seq} reports our achieved runtimes that are presumably sufficient for many applications.\nSec.~\\ref{sec:discussion} provides some more discussion on alternative optimization methods and approximation techniques for runtime improvements.\n\n\\section{EXPERIMENTAL RESULTS}\\label{sec:results}\nWe present experiments that investigate the performance gains achieved by the graph-based framework with three different extensions that exploit 1)~intra-database similarities; 2)~intra-database and intra-query similarities; 3)~intra-database similarities, intra-query similarities and sequences.\nIn order to evaluate the potential benefit beyond available pre- and postprocessing methods, we repeat these experiments with the descriptor standardization approach from \\cite{Schubert2020} for preprocessing and SeqSLAM for postprocessing.\nIn a final experiment, we compare our graph-based method with sequence-based approaches from the literature.\n\n\\subsection{Experimental Setup}\n\\subsubsection{Image descriptor}\nNetVLAD \\cite{netvlad} is used as CNN-image descriptor in all experiments.\nWe use the author's implementation trained on the Pitts30k dataset with VGG-16 and whitening.\n\n\\subsubsection{Metric}\nThe performance is measured with average precision which is the area under the precision-recall curve.\n\n\\subsubsection{Datasets}\nAll experiments are based on the five different datasets Nordland~\\cite{nordland}, StLucia (Various Times of the Day)~\\cite{stlucia}, CMU (Visual Localization)~\\cite{cmu}, GardensPoint Walking~\\cite{gardenspoint} and Oxford RobotCar~\\cite{robotcar} with different characteristics regarding the environment, appearance changes, in-sequence loops, stops, or viewpoint changes.\nWe use the datasets as described in our previous work~\\cite{Neubert2019}.\nImages from StLucia, CMU and Oxford were sampled with one frame per second, which preserves varying camera speeds, stops and loops in the datasets, and leads to translation and orientation changes during revisits. 
GardensPoint contains lateral viewpoint changes.\n\nFor the binary intra-database similarities $\\hat{S}^{DB}_{pose}$ from poses, we use the GPS from the datasets or a main diagonal for \\textit{GardensPoint Walking} and \\textit{Nordland}.\nFor the query sequence, we assume that no GPS is available.\n\n\\subsubsection{Implementation}\\label{sec:implementation}\nGraph creation and optimization were implemented in Python with scipy's \\textit{least\\_squares}-optimization function; the \\textit{Trust Region Reflective} algorithm was used for minimization.\nDue to the huge number of factors within a graph in case of the usage of intra-database and intra-query similarities, we divided $S^{DB\\times Q}$ into equally sized patches with height and width $\\leq 500$, and optimized each patch separately.\nNo information is shared between patches, and the n-ary factors are truncated at borders.\nA full optimization on $S^{DB\\times Q}$ without patches was performed if only intra-database similarities $\\hat{S}^{DB}$ were used.\nThe variables in the optimization are initialized with $\\hat{S}^{DB\\times Q}$ from the pairwise descriptor comparisons.\n\n\\subsubsection{Parameters}\nIn all experiments, we used a fixed parameter set that was determined from a toy-example and a small real-world dataset.\nWe used $w^{DB}_\\text{loop}=4, w^{DB}_\\text{exclusion}=40$ for intra-set similarities from GPS, $w^{DB}_\\text{loop}=w^Q_\\text{loop}=1, w^{DB}_\\text{exclusion}=w^Q_\\text{exclusion}=20$ for intra-set similarities from descriptors, and $w_\\text{seq}=10$, $L=11$ for sequences.\n\n\\subsection{Contributions of information sources and rules}\\label{sec:exp1}\nIn Sec.~\\ref{sec:algo} we identified four rules: \\textit{''prior``}, \\textit{''loop``}, \\textit{''exclusion``} and \\textit{''sequence``}.\nIn the following, we are going to evaluate the influence of the rules when they are successively added and exploited in the graph.\nNote that rule \\textit{''prior``} alone would merely return the initial values $\\hat{S}^{DB\\times Q}$.\nInput to the graph are the pairwise descriptor similarities from the raw NetVLAD descriptors that serve as baseline as well (termed ''pairwise``).\nAll results are summarized in Table~\\ref{tab:no_std_seq}.\n\\input{tables\/res.tex}\n\n\n\\subsubsection{Exploitation of intra-database similarities from poses or descriptors (rules \\textit{''loop``} and \\textit{''exclusion``})}\\label{sec:exp2}\nIntra-database similarities $\\hat{S}^{DB}$ can be used in most place recognition setups as they can be acquired either from the pairwise image comparisons within the database or from poses, e.g., from GPS or SLAM.\n\nTable~\\ref{tab:no_std_seq} shows the results (indicated by $\\hat{S}^{DB}_{pose}$ and $\\hat{S}^{DB}_{desc}$) when intra-database similarities are used either from poses or descriptor comparisons.\nIn most cases, the pairwise performance is significantly improved and never gets worse.\nIf intra-database similarities from poses (here: GPS) are used, the performance gain is slightly better, presumably because place matchings and distinctions from poses are binary and less error prone.\nHowever, even when the intra-database similarities from descriptor comparisons are used, the performance can be improved significantly.\n\nThe results support that most existing place recognition pipelines could be improved with the proposed graph-based framework together with intra-database similarities.\nMoreover, since it is not necessary to know the query sequence in advance, the 
proposed graph-based framework can be employed in an online fashion.\n\n\n\\subsubsection{Exploitation of intra-database and intra-query similarities from poses or descriptors (rules \\textit{''loop``} and \\textit{''exclusion``})}\nIn addition to the previous experiment, supplementary intra-query similarities could be used to model dependencies within the graph not only between database images but also between query images.\nWe used intra-query similarities solely from descriptor comparisons since we do not assume global pose information during the query run; otherwise, place matchings could be received directly from pose-comparisons.\n\nTable~\\ref{tab:no_std_seq} shows the results (indicated by $\\hat{S}^{DB}_{pose}{+}\\hat{S}^Q_{desc}$ and $\\hat{S}^{DB}_{desc}{+}\\hat{S}^Q_{desc}$).\nFor intra-database similarities from poses, the performance is improved only for a few sequence-combinations compared to the performance with intra-database but without intra-query similarities.\nWhen using intra-query similarities in addition to intra-database similarities from descriptors, the performance could be improved at least by additional $5\\%$ for $50\\%$ of all datasets.\n\nThe results indicate that additional data from intra-query similarities within the proposed graph-based framework can be used to improve the place recognition performance further.\nIt is important to note that the system again never performs worse in comparison to a graph with intra-database but without intra-query similarity exploitation.\nThe result is interesting, since intra-query similarities from descriptors could always be collected for a subsequent postprocessing.\n\n\\subsubsection{Additional exploitation of sequences within the graph (rules \\textit{''loop``}, \\textit{''exclusion``} and \\textit{''sequence``})}\nIn this experiment, we used sequence information within the graph in addition to the intra-database and intra-query similarities.\nTable~\\ref{tab:no_std_seq} shows the results (indicated by $\\hat{S}^{DB}_{pose}{+}\\hat{S}^Q_{desc}{+}\\text{Seq}$ and $\\hat{S}^{DB}_{desc}{+}\\hat{S}^Q_{desc}{+}\\text{Seq}$).\nWith sequence information the graph can again significantly improve the place recognition performance compared to the previous experiments without this additional assumption no matter if the intra-database similarities come from poses or descriptor comparisons.\nMoreover, the full setup of the graph with all proposed factors clearly outperforms the baseline.\n\n\\subsection{Combination with preprocessed descriptors}\\label{sec:exp2_std}\nThe place recognition performance can be improved if the descriptors are preprocessed \\cite{Schubert2020}.\nTo investigate the influence of preprocessing on all configurations of the graph from Sec.~\\ref{sec:exp1}, we repeated all previous experiments with standardized image descriptors \\cite{Schubert2020}.\nResults are shown in Table~\\ref{tab:std_seq} (left).\nSince the preprocessing requires complete knowledge of the query descriptors, we do not provide results for the online configuration of the graph.\nThe full configuration of the graph with intra-set similarities and sequences can again show significant performance improvements for the majority of the sequence-combinations.\nThe results demonstrate that the graph-based framework benefits from better performing descriptors.\n\\input{tables\/res_cat_STD_seq.tex}\n\n\\subsection{Combination with sequence-based postprocessing}\\label{sec:exp_postproc}\nIn the next experiment, a modified version of 
SeqSLAM~\\cite{seqSLAM} without local contrast normalization and the single matching constraint is used; otherwise SeqSLAM would fail on datasets with in-sequence loops.\nIt postprocesses the inter-set similarities $S^{DB\\times Q}$ either from the pairwise descriptor comparison (baseline) or from all configurations of the graph from Sec.~\\ref{sec:exp1}.\n\nResults are shown in Table~\\ref{tab:std_seq} (right);\nnote that descriptor comparisons from the raw NetVLAD descriptors \\textit{without} sequence-based postprocessing are used as inputs to the graphs.\nCompared to the baseline performance without postprocessing (Table~\\ref{tab:no_std_seq}), the baseline performance after postprocessing is already comparatively high.\nAccordingly, it is hard to achieve large performance improvements.\nNonetheless, the graph-based framework could improve the results in almost $50\\%$ of the cases, often by more than $10\\%$ -- however, intra-database similarities (whether from poses or from descriptors) seem to be sufficient, since additional information did not lead to further performance improvements on most datasets.\n\n\\subsection{Combination with pre- and postprocessing}\nWe also conducted experiments with preprocessing from Sec.~\\ref{sec:exp2_std} \\textit{and} postprocessing from Sec.~\\ref{sec:exp_postproc}.\nThe baseline performance was already almost perfect for most of the datasets, so the graph-based approach could only improve performance slightly, by less than $5\\%$.\nAgain, the performance was never worse than the baseline.\n\n\\subsection{Comparison with state-of-the-art sequence-based methods}\\label{sec:seq}\nTo compare the performance and runtime of our method with approaches from the literature, we conducted additional experiments with the sequence-based methods SeqSLAM~\\cite{seqSLAM}, MCN~\\cite{Neubert2019}, VPR~\\cite{Vysotska2017} and ABLE~\\cite{able}.\nThe sequence length was $L=11$ where required.\nThe experiments were performed twice, without and with feature standardization~\\cite{Schubert2020} for descriptor preprocessing.\n\nTable~\\ref{tab:seq} shows the achieved performances.\nOur method clearly outperformed the compared approaches and achieved the best performance on most datasets.\nOnly on Nordland did SeqSLAM and ABLE achieve better performance, since they benefit from the constant camera speed in database and query.\nWith feature standardization, ABLE additionally achieved the best performance on GardensPoint, and MCN performed best on three Oxford datasets.\n\nRuntimes were measured on an Intel i7-7700K CPU with 64GB RAM.\nThe maximum runtimes per query for all methods are shown in Table~\\ref{tab:seq} (bottom).\nOur method required approx.
5.7sec per query on Oxford\\#2 with 3413 database images, while all other approaches needed less than 500msec.\nAll reported runtimes are presumably sufficient for applications like loop closure detection in SLAM, since loops need not to be detected with the full frame rate of a camera.\nSec.~\\ref{sec:discussion} provides some more discussion on the computational efficiency and potential improvements.\n\n\\input{tables\/res_seq.tex}\n\n\\section{DISCUSSION AND CONCLUSION}\\label{sec:discussion}\nThe previous sections presented our approach to use a graphical model as a flexible framework to model different kinds of additional information available in place recognition.\nThe experiments demonstrated that the presented method can considerably improve place recognition results in various configurations in terms of available data (e.g., poses of database images), the subset of applied rules (e.g., using sequences or not), or restriction to online place recognition.\nThe representation and optimization using graphical models offers a high degree of flexibility.\nIn the remainder of this last section, we want to discuss aspects of the proposed system and some particularly interesting possible extensions.\n\nThe graph-based optimization is performed on the pairwise descriptor similarities $S^{DB\\times Q}$.\nThis makes it relatively independent of the actually chosen image descriptor, which may require only a slight parameter adjustment.\nThis property even allows an optimization of place descriptor similarities from different sensor modalities like LiDAR.\n\nIn this paper, we defined several factor-types to express prior knowledge (``rules'') about place recognition problems.\nEach factor implements a cost function for a rule. \nOften, there are alternative formulations of a cost function for a rule.\nFor example, the cost function in case of no loop in query ($\\hat{s}^Q_{ij}\\downarrow$; Eq.~(\\ref{eq:cost_dissim_q})) or database ($\\hat{s}^{DB}_{ij}\\downarrow$; Eq.~(\\ref{eq:cost_dissim_db})) is defined as multiplication $(s_1\\cdot s_2)^2$.\nEspecially for this rule, alternative formulations may apply like $\\min(s_1, s_2)^2$ or $\\max(s_1+s_2-1, 0)^2$; both are piecewise linear which could be beneficial.\n\nFactor graphs are often (but not exclusively) used in combination with probabilities. 
Presumably, a more probabilistic view on the proposed graph-based framework could provide additional insights.\nFor instance, the chosen cost functions (\\ref{eq:cost_unary}), (\\ref{eq:cost_sim_db}) and (\\ref{eq:cost_sim_q}) with structure $(s_1-s_2)^2$ can be considered as the negative log-likelihood of a single Gaussian:\n\\vspace{-0.02cm}\n\\begin{align}\n (s_1-s_2)^2 \\Leftrightarrow -\\ln(e^{-(s_1-s_2)^2})\n\\end{align}\nMoreover, a piecewise linear formulation of the discussed factor above could help to formulate the proposed graph-based framework in a more probabilistic way; for instance, a cost function $\\min(s_1, s_2)^2$ corresponds to the negative log-likelihood of a maximum of two Gaussians:\n\\begin{align}\n \\min(s_1, s_2)^2 \\Leftrightarrow -\\ln(\\max(e^{-(s_1-0)^2}, e^{-(s_2-0)^2}))\n\\end{align}\n\nThese cost functions, however, solely work on the descriptor similarities.\nIn the related work (Sec.~\\ref{sec:related_work}), we already mentioned the important difference to pose graph SLAM, which solely works on spatial poses (and not their similarities).\nAn interesting question for future work is whether the proposed approach can be extended to also work directly on descriptors instead of their similarities (i.e., the variables would be descriptor vectors, not their scalar similarities).\nThis significantly increases the complexity of the optimization problem, but could allow the simultaneous optimization of spatial poses and descriptors for a potentially tightly coupled loop closure detection and SLAM.\n\nEven without such an extension, as indicated in Sec.~\\ref{sec:algo} and Sec.~\\ref{sec:optim}, the number of factors or connections between nodes can become very large, and grows cubically if intra-set similarities are used.\nIn our experiments, we addressed the problem by dividing $S^{DB\\times Q}$ into patches if both intra-set similarities were used (Sec.~\\ref{sec:implementation}).\nPresumably, using more efficient implementations, e.g., C++ implementations in Ceres \\cite{ceres}, can improve memory consumption and computational efficiency.\nAnother promising direction is the use of approximation techniques such as a systematic removal of low-relevance connections in the graph.\nFinally, different optimization techniques could be used: in earlier work on graph optimization, minimization techniques based on hill-climbing algorithms like ICM (iterated conditional modes) were used \\cite[p.599]{Koller2009}; these may allow a different and more compact representation and optimization of the graph.\n\n\\input{root.bbl}\n\n\\end{document}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} {"text":"\\section{Introduction}\nThe ubiquity of opportunity offered by the Internet of Things (IoT) is providing new ways to embed computation and storage into everyday devices. Reductions in the cost of hardware, coupled with technological advancements, are enabling a constant stream of innovative new products and services that rely upon IoT architectures. Such developments are particularly evident in the `wearable' and healthcare industries, with parallel innovations occurring in the Industrial Internet of Things (IIoT) \\cite{bessis2013,hill2017a}.\n\nAlong with the ability to sense environments, conditions and processes, comes the requirement to measure, monitor, analyse and evaluate performance. As such, Business Intelligence (BI) is undergoing a resurgence as consumer demand moves beyond traditional dashboards through forecasting, towards prescriptive analytics.
Of late, vendors have delivered BI often as a flexible, on-demand service, making use of the underlying elastic cloud platforms that enterprise software applications are typically deployed upon \\cite{alaqrabi2012,alaqrabi2013,alaqrabi2015,alaqrabi2018}.\n\nAdvancements in technology have increased the ability to connect such a variety of embedded devices to larger pools of resources such as clouds. Integrating embedded devices and cloud servers raises an important discussion regarding the nature of data generated or transmitted by IoT devices. These approaches must be secure and provide the necessary privacy controls for users. At present, the security and privacy concerns created by these devices play a central role in the successful integration of these two technologies \\cite{kalra2015}.\n\nThe heterogeneous nature of IoT environments makes it much harder to detect the insider and outsider attacks in such universal platforms \\cite{Alrawais2017}.\n\nExperiences with clouds, especially those that have public or hybrid architectures, illustrates that cloud services and applications require multi-layered approaches to external and internal threats to security. This is most pertinent in the IIoT environment, where business systems contain valuable Intellectual Property (IP) that can only be protected by retaining tight security over working practices.\n\nThis article describes a model for delivering analytics services to and between components such as those encountered in IoT architectures. A number of adversarial attacks are simulated to demonstrate the effectiveness of a cloud-inspired multi-layer security model in the IoT domain.\n\\section{Cloud Computing Model}\nThe core principles of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity and service metering (NIST Special Publication 800-145) \\cite{hill2013}, make utility computing an attractive proposition for environments that contain distributed resources.\nThe NIST definition \\cite{nist2011} describes five options for the deployment of cloud computing as follows: public clouds, private clouds hosted onsite, private clouds hosted off-site, onsite community clouds and offsite community clouds.\n\nAbstraction is one of the most compelling motivations for system architects; this enables myriad heterogeneous resources to be viewed as a homogeneous whole. Cloud architectures of any of the five types defined by NIST are modelled using a framework that consists of seven layers. The physical infrastructure foundation is layer one, upon which a resource abstraction (virtualisation) second layer resides. Layer three is the resource composition layer, and layer four refers to the Infrastructure as a Service model (IaaS). Platform as a Service (PaaS) exists on layer five, Software as a Service (SaaS) on layer seven and finally the cloud tenants' applications sit within layer seven \\cite{demchenko,hill2013}.\n\nWe have considered the application of analytics services in the context of an enterprise Business Intelligence application that is likely to be delivered across a heterogeneous network of devices. 
In a cloud environment this is typified by off-site private or community clouds, whereby multiple disparate business organisations can host their own enterprise software services and data repositories in what is, to all intents and purposes, a set of hardware resources that is dedicated to each tenant \\cite{demchenko2012,hill2013}.\nThe tenants access their dedicated resources via a VPN, thus maintaining clear separation between corporate services and their respective analytics implementations. However, should any of the businesses wish to share services to promote operational efficiencies, this can be enabled via community-based agreements \\cite{demchenko2012}.\n\nWe have used the cloud model as inspiration to propose a multi-layer service-oriented framework that can deliver secure services (such as BI analytics) across distributed infrastructure. Security and privacy are key concerns for tenants of shared cloud infrastructure such as that found in off-site private and community clouds. The sharing of services such as malware detection and inoculation has significant benefits for organisations who may not otherwise have strategies or the resources to maintain their own defences. We shall now briefly review services dedicated to security and privacy within clouds in the following section.\n\n\n\\section{Secure Service Delivery}\nIn keeping with the service orientation models described so far, security as a service can be considered as a multi-layer framework in itself \\cite{panian2008}, protecting each layer of the cloud computing model \\cite{carvakho2011}. Such a service lends itself to a utility offering in the same way that clouds are rapidly provisioned and expanded on demand, for a cost that is quantifiable and chargeable to the consumer.\n\nAs described above, resource abstraction through virtualisation is a key principle for a cloud-inspired architecture, and therefore any security and privacy services need to be made available to all relevant Virtual Machines (VM) for each client \\cite{kumar2011,luo2011}. \n\nTherefore, security and privacy control services must be orchestrated via appropriate service interfaces between each cloud and its respective tenants that are accessing the services \\cite{kumar2011}. This control will be managed by the relevant virtualisation security manager for each VM, and as a consequence the control must be located within the virtualisation layer.\n\nPrivacy is managed via policy resources that need to be retained securely in a protected space such as a digital vault for encryption keys or digital certificates and such like \\cite{pearson2009,diaz2013}, all contained within a tier above other layers such as authentication and client metadata layers \\cite{pervez2012,chadwick2012}.\n\nAny instances whereby session packets do not have the requisite authority to invoke a particular service interaction will result in the session being terminated, and is a key security feature that is embedded within the framework.\n\n\\section{Multi-layered Security Model}\nOur proposed model is illustrated in Figure \\ref{fig:multi} and describes the cloud-inspired architecture containing multiple layers. 
Each layer incorporates security and privacy services that make use of firewalls as gateways for session traffic from each of the respective clients.\n\nWe have implemented the model within Opnet, the description of which is as follows.\nEach firewall is represented by a Cisco PIX 535, which operates across the various layers including network, transport and application. Access control lists within the firewalls enable the governance of traffic based on IP addresses, protocols and ports for common and bespoke applications.\n\nThe firewalls are configured at the application layer to filter traffic from different URLs, encrypted sessions (HTTPS) and various clients such as Java, and are able to utilise Internetwork Key Exchange (IKE) to encrypt all sessions via DES, 3DES or AES.\n\nAs an example we have defined four separate LANs to represent different clients\/cloud tenants, and they each access the cloud through separate firewalls.\n\nTo illustrate an adversarial scenario, we have incorporated a simulated distributed attack from three malicious parties, who each are attempting to access the network through independent firewalls. Such an attack is a key concern for early adopters of IoT technologies, as the pervasive use of wireless communications presents many opportunities for business vulnerabilities to be exposed.\n\n\\begin{figure}[tb]\n \\includegraphics[width=\\linewidth]{.\/figures\/Multilayer_hierarchical_Inter_Cloud_architecture.JPG}\n \\caption{Multilayer hierarchical inter-cloud architecture.}\n \\label{fig:multi}\n\\end{figure}\nAll of the multiple layers of the cloud are illustrated in Figure \\ref{fig:cloudlay}. We have embraced the cloud concept of resource abstraction and exploited this in the multi-layer model by representing each of the layers as separate clouds. Within each cloud layer exists an array of computing hardware to maximise performance and therefore minimise response times to service requests. It is an imperative that the model must not introduce excessive overheads into the normal functionality of the system.\n\nThe cloud layers are arranged such that inbound traffic from clients are filtered by the firewalls and then passed through successive layers until the system is satisfied that the requests can be delivered to the analytics functionality residing on the cloud applications layer. We now describe each of the layers and the role that they play within the model in turn:\n\n\\begin{itemize}\n\\item {\\bf Tenant firewalls}. This cloud layer consists of a number of databases that hold authentication data for each of the tenants, in order for them to be permitted access to the VM that have been assigned to them as part of their subscription. This is the gateway for each session, $S$ to be invoked.\n\n\\item {\\bf Tenant metadata}. Beyond the authentication criteria that is marshalled by the tenant firewalls layer, there exists further metadata for each client tenant. This metadata describes the credentials to authorise access to specific instances of repositories, applications and services. The detail is embedded within the session packets and is used to verify whether the session can continue or not.\n\\item {\\bf Digital vaults}. The vaults are a secure place to retain digital signatures\/certificates and decryption keys so that only the requisite authorities can access their own content and services. Public key encryption ensures that legitimate session requests are honoured and bogus requests are terminated.\n\\item {\\bf Intrusion Prevention System}. 
The occurrence of adversarial attacks is not limited to the authentication checks at the start of a session. Intrusion detection prevents attacks on sessions that are in progress, such as SQL injection in web forms. This might be considered a route into the DB\\textsubscript{META} or DB\\textsubscript{VAULT} repositories from the perspective of an adversary. This cloud layer prevents such activity from continuing.\n\n\\item {\\bf Malware protection}. Similar to the IPS cloud layer, an anti-malware layer continuously monitors for trojan activity, where malicious exploits are embedded and concealed within session packets, only to be executed at the application layer. Records of the monitoring and detection history are retained within DB\\textsubscript{ANTIMAL}.\n\\item {\\bf Tenant applications}. This layer hosts the tenant's applications themselves, in this case the analytics functionality of enterprise BI. The preceding layers ensure that only a marshalled session $S$ can access this layer, which in the case of BI potentially provides access to confidential business operations and performance data. For some functionality, there will be a requirement for further authentication from a user, such as an account and password.\n\n\\end{itemize}\n\\subsection{Tenant repositories}\nThe final cloud layer hosts the back-end repositories that serve myriad tenant applications. For an analytics application, this would be the databases\/data warehouses\/data lakes that retain the underlying business data, together with any processed data objects for reporting and analysis. These objects are typically accessed by tenant users through reporting dashboards and data visualisation suites, abstracting users away from the complexities of database organisation.\n\\begin{figure}[h]\n \\includegraphics[width=\\linewidth]{.\/figures\/Cloud_layers_modeled_in_this_research.jpg}\n \\caption{Cloud layers modelled in this work.}\n \\label{fig:cloudlay}\n\\end{figure}\n\\subsection{Simulation design}\nWe have elected to model and simulate the proposed system (Figure \\ref{fig:cloudlay}) so that we can examine its operational characteristics in terms of performance, and its ability to provide a collection of services that are resilient towards malicious attacks.\n\n\n\nFigure \\ref{fig:virmac} illustrates how the cloud layers have been mapped to individual profiles. Each profile consists of a collection of VMs that host the contents of the model as described in the previous section, namely: security and privacy services, tenant applications and repositories.\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.85\\linewidth]{.\/figures\/Virtual_Machines_with_applications_packaged.jpg}\n\\caption{Virtual Machines with applications packaged.}\n \\label{fig:virmac}\n\\end{figure}\n\\section{Modelling adversarial attacks}\nIf we now consider an adversarial attack upon the model, we can see how the system protects against such a scenario.\nFigure \\ref{fig:attack} shows the situation where a malicious agent attempts to infiltrate the system to obtain access to an authorised tenant's VM. Whilst the cloud service requires verification to be able to subscribe and access the remote resources, what appears to be a legitimate tenant could actually be an adversary that is masquerading as a valid client, who has the objective of entering the cloud and then attacking other tenant VMs from within the same cloud.\n\nTools such as Metasploit can be employed to automate the delivery of exploits in a rapid fashion.
This could enable a bogus tenant to create surreptitious means of exposing sensitive data, unbeknown to any other party.\n\n\\begin{figure}[tb]\n\\centering\n \\includegraphics[width=0.9\\linewidth]{.\/figures\/An_attack_scenario_showing_the_problem.jpg}\n \\caption{An attack scenario showing the problem.}\n \\label{fig:attack}\n\\end{figure}\n\nPosing as a legitimate tenant, the adversarial agent would in this case be attacking from a VM that is authorised and hosted within the cloud. As such, the conventional cloud security processes would not be able to detect such activity.\nIn effect, the activity is obscured by the sheer volume of VMs that exist within a cloud environment.\nThis is a significant challenge for cloud service providers, particularly as service orientation through Microservice Architectures becomes more prevalent \\cite{hill2017}. If we extend this to the IoT domain, there is a stronger desire to package functionality into services, to be hosted on distributed, connected hardware. Therefore, the ability to successfully address this issue is a key feature of this work.\nWe propose a solution whereby the collection of VMs are organised into a hierarchy, as Figure \\ref{fig:ahier} shows.\nIf an adversarial agent enters the cloud via a subscription, and is assigned VM2, it has the potential to employ cross-channel attacks against VM1 and VM3, thereby exploiting the presence of virtual links between differing VMs. Typically, cloud security controls are deployed to prevent external attacks rather than insider attacking.\n\\begin{figure}[tb]\n \\includegraphics[width=\\linewidth]{.\/figures\/A_hierarchical_framework_presented_as_a_solution_to_the_attack_scenario_in_Figure_4.jpg}\n \\caption{A hierarchical framework presented as a solution to the attack scenario in Figure \\ref{fig:attack}.}\n \\label{fig:ahier}\n\\end{figure}\nThis architecture limits the opportunity to commit further exploits since the attacker is prevented from moving to the next control using virtual links. The only option is to proceed using a real network link by requesting a session in Control A, to communicate with VMs 4,5 and 6. Since there now exists Control A, the attacker has to successfully satisfy {\\tt Tenant Metadata Inspection} in order to proceed with the penetration.\nOf course, an orchestrated and sustained attempt to commit an attack will mean that we should anticipate an adversarial agent will also have obtained valid credentials, either by posing as a legitimate tenant or otherwise.\nIn this case, Control B will have required that the malicious agent would need to navigate the entire stack of cloud layers before access could be gained to the analytics interface.\n\nIf the attacker has satisfied the cloud validation and metadata inspection layers, the only way forward now is to plant exploits in the hope that these will lie undetected. However, the Intrusion Protection layer, and the Anti-Malware layer both offer protection for subversive, covert attacks from the inside.\nThe result is that our proposed model prevents data breaches, even when adversarial attacks are launched from what appears to be genuine service subscribers.\nWe can see in Figure \\ref{fig:control} a sequence in which various security controls might be instantiated. The security policies of the host system (or systems) will inform the order in which controls are implemented, to suit the goals desired by the infrastructure provider. 
It is also evident how a tenant's session is routed through the various VMs in order to access the relevant analytics services.\n\\begin{figure}[tb\n \\includegraphics[width=\\linewidth]{.\/figures\/The_controls_positioned_on_each_layer_of_the_hierarchy.jpg}\n \\caption{The position of the controls at each layer of the hierarchy.}\n \\label{fig:control}\n\\end{figure}\n\\section{Session packet inspection}\nIn this section we shall describe a detailed walkthrough of the security model and explain the use of session packet inspection.\n\nReferring back to Figure \\ref{fig:ahier}, VM1 will host a client that ultimately will access the enterprise analytics application hosted in VM7. A malicious agent that has access to either VM2 or VM3 (or both) can only attack the client of the analytics application, rather than the application itself.\nVM2 will be used to test the validity of the VM1 client using a VM identification number, and assuming that all is well, will launch a form via VM3 to request details from the tenant.\nThe details requested from the tenant will vary each time that VM3 is executed, but they will always refer to some personal details that can be used to help identify the correct tenant.\nOnce the form on VM3 has been completed, VM2 will use the responses to verify the details against those held in the {\\tt MetaDB} repository. Once the metadata has been verified, the session can continue. If the malicious tenant has the intention of acquiring metadata about other tenants, it would need to deliver an exploit into {\\tt MetaDB}. Whilst it might be expected that the system is fully patched, it is still conceivable that a vulnerability exists, enabling an adversary to progress to the next cloud layer.\nSince a session cannot be interrupted, any evidence left by malicious exploits will still remain as the agent will not be able to remove the incriminating evidence of the exploit. As such, the attacker can only proceed to the next layer by exposing that an exploit has been used to obtain entry. If the malicious agent has obtained private keys, they still cannot progress without exposing details of the exploit within the session.\nLayers 5 and 6 both have the capability to detect malware, which of course is reliant upon adequate, proactive security maintenance to ensure that all exploit databases are current.\n\nIt is feasible that an attacker could compromise VMs 3 and 4 with a fresh exploit that has yet to be discovered and documented, in which case the anti-malware cloud layer will be unaware of this exploit as well. Whilst this situation may foil malware detection, there still exists Intrusion Prevention within the cloud layer, which by its nature monitors and reports upon anomalies.\n\nOur model enables system architects the ability to add or subtract security controls as required for a given set of policies, merely by specifying additional cloud layers for the hierarchy. When a session is authenticated as satisfying the requirements of each layer, it can progress to subsequent layers until the destination application layer is reached.\nOne advantage of the use of multiple VMs is that much of the computation can be performed in parallel, and therefore the majority of packet inspection incurs a minimal overhead in VMs 2,3 and 4. However, session packet inspection across cloud layers, especially anything that involves Intrusion Prevention or anti-malware detection will result in an additional overhead. 
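To make this walkthrough concrete, the following minimal Python sketch traces a session through the successive checkpoints. The repository contents, field names and matching rules are purely illustrative stand-ins for DB\\textsubscript{FW}, DB\\textsubscript{META}, DB\\textsubscript{VAULT}, DB\\textsubscript{IPS} and DB\\textsubscript{ANTIMAL}; in the modelled system each check is performed by a separate cloud layer rather than by a single function:
\\begin{verbatim}
# Hypothetical in-memory stand-ins for the repositories.
DB_FW      = {'vm-001', 'vm-002'}             # authorised VM IDs
DB_META    = {'vm-001': {'dept': 'finance'}}  # tenant metadata on record
DB_VAULT   = {'vm-001': 'priv-key-abc'}       # private keys per VM ID
DB_IPS     = {'or 1=1 --', 'drop table'}      # known intrusion patterns
DB_ANTIMAL = {'trojan-sig-42'}                # known malware signatures

def inspect_session(packet):
    # Returns True (permit) only if every checkpoint is satisfied.
    vm = packet['vm_id']
    if vm not in DB_FW:                          # firewall check of VM ID
        return False
    if DB_META.get(vm) != packet['metadata']:    # tenant metadata check
        return False
    if DB_VAULT.get(vm) != packet['key']:        # digital vault key check
        return False
    payload = packet['payload'].lower()
    if any(sig in payload for sig in DB_IPS):    # intrusion prevention scan
        return False
    if any(sig in payload for sig in DB_ANTIMAL):  # anti-malware scan
        return False
    return True                # session may proceed to the application layer

print(inspect_session({'vm_id': 'vm-001',
                       'metadata': {'dept': 'finance'},
                       'key': 'priv-key-abc',
                       'payload': 'select report for q3'}))  # True
\\end{verbatim}
A session that fails any single checkpoint is terminated, mirroring the inspection sequence that is formalised in Algorithm 1 below.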
\n\subsection{Mapping controls to the seven-layer model}\nWith reference back to the NIST seven-layer model \cite{nist2011}, the controls of our proposed model can be mapped to the PaaS layer as per Figure \ref{fig:mapp}, with the exception of the firewalls, which naturally reside in the IaaS layer \cite{hill2013,carvakho2011}.\nTenant VMs are hosted in layers 1--3 of the NIST model; users interact directly with clients that do not store data.\n\nSessions commence within tenant VMs, before moving to the application in layer 7 via several layers of verification and authentication in layers 4 and 5. If the tenant has a SaaS platform, this will be made available through layer 6; otherwise all bespoke applications are resident in layer 7.\n\subsection{Session flow}\nThe proposed architecture is located on layers 4 (IaaS) and 5 (PaaS) of the NIST seven-layer model. All firewalls are categorised as IaaS due to the functionality of verifying VM instance IDs, and also relating these checks to the authorisation data provided by a tenant to secure access to the cloud.\nWhilst VM IDs are assigned in layers 2 and 3, access controls are assigned at layer 4. Subsequent controls are assigned outside of the VM layer and are primarily concerned with session packet inspection for a given session $S$.\nInformation is requested directly from the tenant to satisfy the DB\textsubscript{META} and DB\textsubscript{VAULT} checkpoint controls, and is supplemented by DB\textsubscript{IPS} and DB\textsubscript{ANTIMAL} controls that perform the session packet inspection function. DB\textsubscript{META}, DB\textsubscript{VAULT}, DB\textsubscript{IPS} and DB\textsubscript{ANTIMAL} are therefore categorised as PaaS controls.\nIt is likely that controls will also exist at the application layer. For instance, a SaaS instance will require user authentication, as will a custom enterprise application \cite{ouf2011}. User role profiles are useful in such scenarios to manage different levels of system access within an enterprise application, reflecting the role, responsibilities and authority of a particular stakeholder.\n\n\begin{figure*}[htb]\n\centering\n\includegraphics[width=0.9\textwidth]{.\/figures\/mapping.jpg}\n \caption{Mapping of the proposed architecture onto the NIST seven-layer cloud model \cite{nist2011}.}\n \label{fig:mapp}\n\end{figure*}\n\section{Algorithm design}\nAlgorithm 1 describes the sequence of inspections that are performed within the proposed multi-layer security model. This algorithm represents the core functionality and can be augmented as additional cloud layers are defined in response to the security needs of an organisation. In the case of a single enterprise the model may tend towards fewer augmentations.\nHowever, enterprises that collaborate, or who choose to adopt shared services across distributed platforms, will no doubt adopt additional layers in order to securely manage service access.
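As a brief sketch of this extensibility (our own illustration; any layer beyond the five controls above is hypothetical), the inspection sequence can be held as an ordered configuration that additional cloud layers simply extend, leaving the inspection loop unchanged:
\\begin{verbatim}
# Ordered checkpoint configuration; the inspection loop iterates over it in
# sequence, so an organisation can append layers without altering the logic.
CHECKPOINT_LAYERS = [
    ("DB_FW",      "firewall / VM ID verification"),    # IaaS
    ("DB_META",    "tenant metadata inspection"),       # PaaS
    ("DB_VAULT",   "tenant vault / private key check"), # PaaS
    ("DB_IPS",     "intrusion prevention signatures"),  # PaaS
    ("DB_ANTIMAL", "anti-malware signatures"),          # PaaS
]

# A collaborating enterprise might, for example, append a further layer:
CHECKPOINT_LAYERS.append(("DB_GEO", "geo-fencing policy check"))  # hypothetical
\\end{verbatim}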
\n\begin{algorithm}\n\SetAlgoLined\nInput: $S$, $P_{CT}$, ($DB_{FW}$, $DB_{META}$, $DB_{VAULT}$, $DB_{IPS}$, $DB_{ANTIMAL}$)\\\\\nTenant session: $S$\\\\\nContents of session packets: $P_{CT}$\\\\\nContents of FW: $DB_{FW}$\\\\\nContents of $TENANT_{META}$: $DB_{META}$\\\\\nContents of $TENANT_{VAULT}$: $DB_{VAULT}$\\\\\nContents of $IPS$: $DB_{IPS}$\\\\\nContents of $ANTIMALWARE$: $DB_{ANTIMAL}$\\\\\nFlags: 1=permit, 0=deny\\\\\n \n Initialise $S$\;\n Set $S=1$, Match$(P_{CT})$;\\\\\n \ForEach{\{$DB_{FW}$, $DB_{META}$, $DB_{VAULT}$, $DB_{IPS}$, $DB_{ANTIMAL}$\}} {%\n \eIf{$P_{CT}\in$ \{$DB_{FW}$, $DB_{META}$, $DB_{VAULT}$\} AND $P_{CT}\notin$ \{$DB_{IPS}$, $DB_{ANTIMAL}$\}}{\n set $S=1$; AuthoriseTenantAccess()\; \/\/tenant access authorised\;}{\n set $S=0$; DenyTenantAccess()\; \/\/tenant access denied\;\n }\n}\n Output: $S$\n \caption{Multi-layer hierarchical packet inspection}\n\end{algorithm}\n\subsection{Security model logic}\nOur proposed model seeks to address security concerns in distributed service applications across heterogeneous hardware resources by enabling session packet inspection to take place at a number of checkpoints. Each session packet is scrutinised and compared with a number of repositories, such as DB\textsubscript{META}, DB\textsubscript{VAULT}, etc. Each inspection stage shall now be described in turn.\n\begin{enumerate}\n\item {\bf DB\textsubscript{FW}:} For each instance, a client initiates a session, which is then inspected. This session will possess a VM ID together with authentication and authorisation verification data.\nIn addition, the DB\textsubscript{FW} must have an entry that relates to the VM ID, otherwise the session is terminated.\n\item {\bf DB\textsubscript{META}:} A further inspection is then performed to confirm tenant metadata as requested by the cloud host administrators.\nOnce the metadata has been provided, it is added to the session packets. The session can only continue if the session metadata matches that which exists in the DB\textsubscript{META} repository.\n\item {\bf DB\textsubscript{VAULT}:} At this stage the VM ID within the session is inspected to verify that a private key exists within the DB\textsubscript{VAULT}, so that encrypted databases can be accessed.\n\item {\bf DB\textsubscript{IPS}:} On reaching this stage, the session itself is regarded as being fully authenticated and is authorised to progress to subsequent layers. The next stage is to inspect the sessions for evidence of potential exploits that match those held in the DB\textsubscript{IPS} repository. If a match occurs, the session is terminated.\n\item {\bf DB\textsubscript{ANTIMAL}:} This stage performs an anti-malware check, which, in conjunction with the DB\textsubscript{IPS} inspection, prevents fully authenticated VM sessions from penetrating the upper layers of the security model by masquerading as legitimate cloud tenants.\n\end{enumerate}\nThe algorithm ensures that after a session is initiated, it is inspected at each layer, where each layer is represented by a separate cloud. A key principle is that session packet data must match the firewall data, tenant metadata and vault data before a session can be authorised. Furthermore, the session cannot be granted access to application layers until it has been successfully screened against the IPS and anti-malware repositories.\n\subsection{Implementing the model}\nFor our example scenario, the LAN contains 500 clients.
Each of the clients is assigned three VMs, with an assigned destination being the tenant client's metadata repositories rather than the eventual analytics application. This was a conscious decision to prohibit any tenant sessions from attempting to subvert the security and privacy controls of the cloud.\nFor a physical network, this control would most likely be enacted by a firewall in an application layer, or as part of a setting in a virtual network controller. For consistency we have replicated this by ensuring that the destinations of the metadata servers are directed towards the tenant vaults; this is a faithful representation of the intentions of the algorithm, in that it governs the marshalling of each session packet by enforcing checkpoints in a particular sequence. In the case of a session packet not containing a tenant key that matches a corresponding entry in DB\textsubscript{VAULT}, the encryption key is not assigned and the packets are dropped \cite{diaz2013}. We have represented the adversarial agents as clients who have the authorisation of a valid tenant, in order to simulate an attack from within.\n\section{Results}\nFigure \ref{fig:dbsess} shows the sessions that were hosted by the tenant LAN in the simulation. We observed that sessions only existed within the tenant LAN for tenants that had matching metadata in DB\textsubscript{META}. It follows that the sessions that were invoked within the tenant LAN had consistency between tenant metadata (DB\textsubscript{META}), tenant vault (DB\textsubscript{VAULT}), and the associated VM ID from the initial authentication.\nThe simulation demonstrated that these validations were maintained throughout the experiment for all network hops, illustrating that VMs were complying with their specified destination configurations, and there was no evidence of VMs bypassing authentication layers. As such, all security and privacy control rules were enforced by the model.\n\n\begin{figure}[tb]\n\centering\n \includegraphics[width=0.9\linewidth]{.\/figures\/Client_DB_sessions_on_Tenants__LAN.jpg}\n\caption{Client DB sessions on Tenants' LAN.}\n \label{fig:dbsess}\n\end{figure}\nVMs initiated by malicious agents have separate profiles from those of legitimate tenants, even though a malicious agent may appear legitimate at the outset. This means that adversaries had distinct metadata and decryption keys in their vault repositories.\nWe can see in Figure \ref{fig:ipdropped} the instances where a malicious agent's packets have been dropped as a result of packet inspection through either the IPS or anti-malware layers, preventing deeper penetration into the system. As discussed earlier, an internal attack (an agent with legitimate DB\textsubscript{META} data) would need to compromise DB\textsubscript{VAULT}, DB\textsubscript{IPS} and DB\textsubscript{ANTIMAL} in sequence if it is to successfully reach the analytics application layer.\n\begin{figure}[tb]\n \includegraphics[width=\linewidth]{.\/figures\/IP_packets_from_the_hackers__machines_are_dropped.jpg}\n \caption{After initial attempts to penetrate the system, IP packets from the attackers' machines are dropped.}\n \label{fig:ipdropped}\n\end{figure}\nConversely, an authorised tenant may elect to execute DB sessions on their own LAN, since the application profiles can include references to their VMs, as per Figure \ref{fig:auth}. In such cases, the sessions are authorised in the sense that they fulfil the relevant rule in Algorithm 1.
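For concreteness, the permit/deny rule of Algorithm 1 that such sessions satisfy can be expressed as a short Python sketch. The repository contents and token names below are illustrative placeholders rather than the simulated configuration.
\\begin{verbatim}
# Sketch of the Algorithm 1 permit/deny rule; all data shown is illustrative.
def inspect_session(p_ct, db_fw, db_meta, db_vault, db_ips, db_antimal):
    """Return 1 (permit) or 0 (deny) for a session whose packets carry tokens p_ct."""
    # The session must be matched by the firewall, metadata and vault repositories...
    authenticated = (p_ct & db_fw) and (p_ct & db_meta) and (p_ct & db_vault)
    # ...and must not match any IPS or anti-malware signature.
    flagged = (p_ct & db_ips) or (p_ct & db_antimal)
    return 1 if authenticated and not flagged else 0

db_fw, db_meta, db_vault = {"vmid:001"}, {"meta:E-4821"}, {"key:tenant-001"}
db_ips, db_antimal = {"sig:exploit-77"}, {"sig:malware-12"}

clean_session = {"vmid:001", "meta:E-4821", "key:tenant-001"}
infected_session = clean_session | {"sig:exploit-77"}

print(inspect_session(clean_session, db_fw, db_meta, db_vault, db_ips, db_antimal))     # 1
print(inspect_session(infected_session, db_fw, db_meta, db_vault, db_ips, db_antimal))  # 0
\\end{verbatim}
In the simulation itself the same decision is realised through the destination configuration of each VM and the dropping of non-matching packets, rather than by a single function.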
Since we have chosen to model each of the layers as clouds, each of the multiple layers can in fact be serviced by different cloud providers. This architecture thus demonstrates significant flexibility and is attractive to system architects who are proactively designing systems that will rely upon heterogeneous hardware and distributed resources, such as IoT and IIoT environments.\nAn enterprise may adopt this security model so that it can take the opportunity to employ software applications and services that are themselves hosted on offsite private or community clouds. However, the use of these services may increase operational costs, although this is offset by a reduction in capital expenditure. Operational costs may also increase indirectly through the maintenance charges associated with managing the database updates of five security control layers.\n\begin{figure}[tb]\n \includegraphics[width=\linewidth]{.\/figures\/Authorized_tenant_LANs_established_and_ran_DB_sessions_with_the_TENANT_METADATA}\n \caption{Authorised tenant LANs established, DB sessions initiated with the TENANT METADATA.}\n \label{fig:auth}\n\end{figure}\n\section{Conclusions}\nThis article proposes a multi-layer hierarchical inter-cloud security model that is inspired by the NIST seven-layer model of cloud computing. By using sequential session packet inspection techniques, we have demonstrated an architecture that exhibits considerable resilience to both external attacks and more surreptitious internal adversarial behaviour. Whilst VM vulnerabilities are well documented in multi-tenant shared environments, our five layers of packet inspection enable the architecture to identify and compartmentalise malicious activity. Thus, penetration of a firewall is in itself insufficient as a means of attempting to access the application layer, as it is then necessary to create an evidence trail of exploits that cannot be hidden from the IPS and anti-malware packet inspection layers. The proposed model is particularly suited to architectures that have a requirement to remain flexible for future scaling (which is often a driver for the adoption of cloud infrastructure), such as those built upon microservices.\nOur solution is mapped to the NIST model in order to assist cloud and IoT system architects in incorporating this work into their own designs.\n\n\section{Introduction}\n\nCompact groups of galaxies are associations of three to seven galaxies, where the projected distances between them are of the order of their diameters, and where the group\nshows a low velocity dispersion, making compact groups an ideal place to study\ngalaxy interaction and intergalactic star formation (e.g. \cite[Torres-Flores et\nal. 2010]{tor09b}, \cite[de Mello et al. 2008]{dem08b}, \cite[de Mello, Torres-Flores \& Mendes de Oliveira 2008]{dem08a}, \cite[Mendes de Oliveira et al. 2004]{mdeo04}). The main goal of this work is to search for a link between the evolutionary stage of a group and the presence of young intergalactic objects which may have formed during galaxy interactions. For this, we analyze a subsample of seven compact groups (HCG 2, 7, 22, 23, 92, 100 and NGC 92) which span a wide range of evolutionary stages, from HI-rich groups to strongly interacting groups, where the galaxies show tidal tail features and a deficiency in neutral HI gas.
In order to analyze the evolutionary stage of each group, we used new Fabry-Perot velocity maps, GALEX\/UV data and optical R-band images. The velocity fields and rotation curves help constrain the evolutionary stage of each compact group, while ultraviolet light contains important information regarding the age of the young stellar population that may be present in the intragroup medium.\n\n\section{UV analysis}\n\nWe searched for ultraviolet emitting regions in the vicinity of all seven targets, using the SExtractor software (SE, \cite[Bertin \& Arnouts 1996]{ber96}) on the FUV, NUV and R sky-subtracted images of our compact group sample. We compared the field density of regions detected in each compact group with that of a control sample outside the group. HCG 92 and HCG 22 show the highest field densities in this study. No excess was found in HCG 2, HCG 7, HCG 23, HCG 100 and NGC 92 (\cite[Torres-Flores et al. 2009]{tor09}).\n\n\section{Fabry-Perot analysis}\n\nIn order to constrain the evolutionary stage of each compact group, we\ninspected the velocity field and rotation curve of each galaxy to search for\ninteraction indicators, in a manner similar to that of \cite[Plana et al. (2003)]{pla03} and \cite[Amram et al. (2003)]{amr03}. NGC 92 shows a prominent tidal tail in its velocity field. At the tip of this tail, there is a tidal dwarf galaxy candidate with an age of about 40 Myr (\cite[Torres-Flores et al. 2009]{tor09}).\n\n\section{Conclusions}\n\nWe used multiwavelength data to study the evolutionary stages of the compact groups of\ngalaxies HCG 2, 7, 22, 23, 92, 100 and NGC 92. New Fabry-Perot velocity fields, rotation curves and GALEX NUV\/FUV images were analyzed for four and seven of these groups, respectively. Groups HCG 7 and 23 are in an early stage of interaction, whereas\nHCG 2 and 22 show limited interaction features, and HCG 92, 100 and NGC 92 are in a\nlate stage of evolution, having HI gas in the intragroup medium, galaxies with peculiar velocity fields and several young star-forming regions in the intergalactic medium.\n\n\acknowledgments\n\nS. T--F. acknowledges the financial support of FAPESP through a Doctoral fellowship, under contract 2007\/07973-3, and an Eiffel scholarship.