diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpldx" "b/data_all_eng_slimpj/shuffled/split2/finalzzpldx" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpldx" @@ -0,0 +1,5 @@ +{"text":"\\section{Appendix}\n\n\\subsection{Low Rank Approximation}\n\nWe first state the following Lemma from~\\cite{parlett1980symmetric} without proof and then prove our Theorem \\ref{the:error} following~\\cite{simon2000low}.\n\n\\begin{lemma}\nLet $A \\in \\mathbb{R}^{N \\times N}$ be symmetric and $v$ an arbitrary vector. \nDefine Krylov subspace $\\mathcal{K}_{m} \\equiv \\text{span} \\{v, Av, \\dots, A^{m-1}v\\}$.\nLet $A = U \\Lambda U^{\\top}$ be the eigendecomposition of $A$ with $\\Lambda_{i,i} = \\lambda_i$ and $\\lambda_1 \\geq \\dots \\geq \\lambda_n$.\nDenoting $U = [u_1, \\dots, u_N]$ and $\\mathcal{U}_j = \\mathrm{span}\\{u_1, \\dots, u_j\\}$, then\n\\begin{align}\n\\tan{\\left(u_j, \\mathcal{K}_{m}\\right)} \\leq \\frac{\\sin{\\left(v, \\mathcal{U}_j\\right)} \\prod_{k=1}^{j-1} (\\lambda_k - \\lambda_n) \/ (\\lambda_k - \\lambda_j) }{\\cos{\\left(v, u_j\\right)} T_{m-j}(1+2\\gamma) }, \\nonumber\n\\end{align}\nwhere $T_{m-j}(x)$ is the Chebyshev Polynomial of degree $m-j$ and $\\gamma = (\\lambda_{j} - \\lambda_{j+1}) \/ (\\lambda_{j+1} - \\lambda_{N})$.\n\\end{lemma}\n\n\\errortheorem*\n\\begin{proof}\nFrom Lanczos algorithm, we have $SQ = QT$.\nTherefore, \n\\begin{align}\n\\Vert S - Q T Q^{\\top} \\Vert_{F}^{2} & = \\Vert S - S Q Q^{\\top} \\Vert_{F}^{2} \\nonumber \\\\\n& = \\Vert S (I - Q Q^{\\top}) \\Vert_{F}^{2}\n\\end{align}\nLet $P_{Q}^{\\perp} \\equiv I - QQ^{\\top}$, the orthogonal projection onto the \northogonal complement of subspace $\\text{span}\\{Q\\}$.\nRelying on the eigendecomposition, we have,\n\\begin{align}\n\\Vert S - Q T Q^{\\top} \\Vert_{F}^{2} & = \\Vert U \\Lambda U^{\\top} (I - Q Q^{\\top}) \\Vert_{F}^{2} \\nonumber \\\\\n& = \\Vert \\Lambda U^{\\top} (I - Q Q^{\\top}) \\Vert_{F}^{2} \\nonumber \\\\\n& = \\Vert (I - Q Q^{\\top}) U \\Lambda \\Vert_{F}^{2} \\nonumber \\\\\n& = \\Vert \\left[ \\lambda_1 P_{Q}^{\\perp} u_1, \\dots, \\lambda_N P_{Q}^{\\perp} u_N \\right] \\Vert_{F}^{2},\n\\end{align}\nwhere we use the fact that $\\Vert RA \\Vert_F^2 = \\Vert A \\Vert_F^2$ for any orthogonal matrix $R$ and $\\Vert A^{\\top} \\Vert_F^2 = \\Vert A \\Vert_F^2$.\n\nNote that for any $j$ we have,\n\\begin{align}\n\\left \\| \\left[ \\lambda_1 P_{Q}^{\\perp} u_1, \\dots, \\lambda_N P_{Q}^{\\perp} u_N \\right] \\right \\|_{F}^{2} & = \\sum_{i=1}^{N} \\lambda_i^2 \\Vert P_{Q}^{\\perp} u_i \\Vert^2 \\nonumber \\\\\n& \\leq \\sum_{i=1}^{j} \\lambda_i^2 \\Vert P_{Q}^{\\perp} u_i \\Vert^2 + \\sum_{i=j+1}^{N} \\lambda_i^2,\n\\end{align}\nwhere we use the fact that for any $i$, $\\Vert P_{Q}^{\\perp} u_i \\Vert^2 = \\Vert u_i \\Vert^2 - \\Vert u_i - P_{Q}^{\\perp} u_i \\Vert^2 \\leq \\Vert u_i \\Vert^2 = 1$.\n\nNote that we have $\\text{span}\\{Q\\} = \\text{span}\\{v, Sv, \\dots, S^{K-1}v\\} \\equiv \\mathcal{K}_{K}$ from the Lanczos algorithm.\nTherefore, we have,\n\\begin{align}\n \\Vert P_{Q}^{\\perp} u_i \\Vert = \\vert \\sin{\\left(u_i, \\mathcal{K}_{K} \\right)} \\vert \\leq \\vert \\tan{\\left(u_i, \\mathcal{K}_{K} \\right)} \\vert.\n\\end{align}\nApplying the above lemma with $A = S$, we finish the proof.\n\n\\end{proof}\n\n\n\\subsection{Lanczos Algorithm}\n\nUtilizing exact arithmetic, Lanczos vectors are orthogonal to each other. 
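\nFor concreteness, the following is a minimal NumPy sketch of the $K$-step iteration in Alg.~\\ref{alg:lanczos}; it assumes a dense symmetric $S$, uses our own variable names, and performs no re-orthogonalization.\n\\begin{verbatim}\nimport numpy as np\n\ndef lanczos(S, x, K, eps=1e-8):\n    # K-step Lanczos iteration for a symmetric matrix S\n    q = x \/ np.linalg.norm(x)\n    q_prev = np.zeros_like(q)\n    Q, gam, bet = [], [], [0.0]\n    for j in range(K):\n        Q.append(q)\n        z = S @ q  # dominant cost: one matrix-vector product\n        g = q @ z\n        z = z - g * q - bet[-1] * q_prev\n        gam.append(g)\n        b = np.linalg.norm(z)\n        if b < eps:  # early termination\n            break\n        bet.append(b)\n        q_prev, q = q, z \/ b\n    Q = np.stack(Q, axis=1)  # columns q_1, ..., q_K\n    off = np.array(bet[1:len(gam)])  # beta_1, ..., beta_{K-1}\n    T = np.diag(gam) + np.diag(off, 1) + np.diag(off, -1)\n    R, B = np.linalg.eigh(T)  # Ritz values R and T = B diag(R) B^T\n    return Q @ B, R  # V = QB and the Ritz values\n\\end{verbatim}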
\nHowever, in floating point arithmetic, it is well known that round-off error makes the Lanczos vectors lose orthogonality as the iteration proceeds.\nOne could apply a full Gram-Schmidt (GS) process $z = z - \\sum_{i=1}^{j-1} z^{\\top}q_{i}q_{i}$ after line $6$ of Alg.~\\ref{alg:lanczos} to ensure orthogonality.\nPartial or selective re-orthogonalization schemes could also be explored. \nHowever, since we found that the orthogonality issue does not hurt overall performance with a small iteration number, e.g., $K=20$, and the full GS process is computationally expensive, we do not add such a step.\nAlthough customized eigendecomposition methods for tridiagonal matrices exist, e.g., implicit QL~\\cite{press2007numerical}, we leave them for future exploration due to their complicated implementation.\n\n\\subsection{Experiments}\n\nFor ChebyNet, we do not use graph coarsening in any experiment due to its demanding computational cost for large graphs.\nAlso, for small molecule graphs, coarsening generally does not help since it loses information compared to directly stacking another layer on the original graph.\n\n\\paragraph{Citation Networks}\n\n\\begin{table}\n\\centering\n{%\n\\begin{tabular}{@{}l|rrrrrrrr@{}}\n\\hline\n\\toprule\nDataset & \\#Nodes & \\#Edges & \\#Classes & \\#Features & $S_0$ & $S_1$ & $S_2$ & $S_3$ \\\\\n\\midrule\nCiteseer & 3,327 & 4,732 & 6 & 3,703 & 3.6\\% & 3\\% & 1\\% & 0.5\\% \\\\\nCora & 2,708 & 5,429 & 7 & 1,433 & 5.2\\% & 1\\% & 0.5\\% & 0.3\\% \\\\\nPubmed & 19,717 & 44,338 & 3 & 500 & 0.3\\% & 0.1\\% & 0.05\\% & 0.03\\% \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\caption{Dataset statistics. $S_0$ is the portion of training examples in the public split. $S_1$ to $S_3$ are the portions for the $3$ random splits generated by us.}\n\\label{tbl:citation_stats}\n\\end{table}\n\nThe statistics of the three citation networks are summarized in Table~\\ref{tbl:citation_stats}.\nWe now report the important hyperparameters chosen via cross-validation for each method.\nAll methods are trained with Adam with learning rate $1.0e^{-2}$ and weight decay $5.0e^{-4}$.\nThe maximum number of epochs is set to $200$.\nEarly stopping with window size $10$ is also adopted.\nWe tune hyperparameters using Cora alone and fix them for Citeseer and Pubmed.\nFor convolution based methods, we found that $2$ layers work best.\nIn GCN-FP, we set the hidden dimension to $64$ and dropout to $0.5$.\nIn GGNN, we set the hidden dimension to $64$, the number of propagation steps to $2$ and the aggregation function to summation.\nIn DCNN, we set the hidden dimension to $64$, dropout to $0.5$ and use diffusion scales $\\{1, 2, 5\\}$.\nIn ChebyNet, we set the polynomial order to $5$, the hidden dimension to $64$ and dropout to $0.5$.\nIn GCN, we set the hidden dimension to $64$ and dropout to $0.5$.\nIn MPNN, we use a GRU as the update function and set the hidden dimension to $64$ and dropout to $0.5$. \nNo edge embedding is used as there is just one edge type.\nIn GraphSAGE, we set the number of sampled neighbors to $500$, the hidden dimension to $64$, dropout to $0.5$ and the aggregation function to average.\nIn GAT, we set the number of heads per layer to $8, 1$, the hidden dimension per head to $8$ and dropout to $0.6$.\nIn {LanczosNet}, we set the short and long diffusion scales to $\\{1, 2, 5, 7\\}$ and $\\{10, 20, 30\\}$ respectively. 
\nThe hidden dimension is $64$ and dropout is $0.5$.\nThe Lanczos step is $20$.\nA 1-layer MLP with $128$ hidden units and ReLU nonlinearity is used as the spectral filter.\nIn {AdaLanczosNet}, we set the short and long diffusion scales to $\\{1, 2, 5\\}$ and $\\{10, 20\\}$ respectively. \nThe hidden dimension is $64$ and dropout is $0.5$.\nThe Lanczos step is $20$.\nA 1-layer MLP with $128$ hidden units and ReLU nonlinearity is used as the spectral filter.\n\n\\paragraph{Quantum Chemistry}\n\nWe now report the important hyperparameters chosen via cross-validation for each method.\nAll methods are trained with Adam with learning rate $1.0e^{-4}$ and no weight decay.\nThe maximum number of epochs is set to $200$.\nEarly stopping with window size $10$ is also adopted.\nFor convolution based methods, we found that $7$ layers work best.\nWe augment all methods with $64$-dimensional node embeddings and add edge types by either feeding a multi-channel graph Laplacian matrix or directly adding a separate message function per edge type.\nFor all methods, no dropout is used since it slightly hurts the performance.\nIn GCN-FP, we set the hidden dimension to $128$.\nIn GGNN, we set the hidden dimension to $128$, the number of propagation steps to $15$ and the aggregation function to average.\nIn DCNN, we set the hidden dimension to $128$ and use diffusion scales $\\{3, 5, 7, 10, 20, 30\\}$.\nIn ChebyNet, we set the polynomial order to $5$ and the hidden dimension to $128$.\nIn GCN, we set the hidden dimension to $128$.\nIn MPNN, we use a GRU as the update function, set the number of propagation steps to $7$, set the hidden dimension to $128$, use a 1-layer MLP with $1024$ hidden units and ReLU nonlinearity as the message function and set the number of unroll steps of \\textit{Set2Vec} to $10$.\nIn GraphSAGE, we set the number of sampled neighbors to $40$, the hidden dimension to $128$ and the aggregation function to average.\nIn GAT, we set the number of heads of all $7$ layers to $8$ and the hidden dimension per head to $16$.\nIn {LanczosNet}, we do not use short diffusion scales and set the long ones to $\\{1, 2, 3, 5, 7, 10, 20, 30\\}$. \nThe hidden dimension is $128$.\nThe Lanczos step is $20$.\nA 1-layer MLP with $128$ hidden units and ReLU nonlinearity is used as the spectral filter.\nIn {AdaLanczosNet}, we set the short and long diffusion scales to $\\{1, 2, 3\\}$ and $\\{5, 7, 10, 20, 30\\}$ respectively. \nThe hidden dimension is $128$.\nThe Lanczos step is $20$.\nA 3-layer MLP with $4096$ hidden units and ReLU nonlinearity is used as the spectral filter.\n\\section{Background}\n\nIn this section, we introduce some background material.\nA graph $\\mathcal{G}$ with $N$ nodes is denoted as $\\mathcal{G} = (\\mathcal{V}, \\mathcal{E}, A)$, where $A \\in \\mathbb{R}^{N \\times N}$ is an adjacency matrix which can be either binary or real valued.\n$X \\in \\mathbb{R}^{ N \\times F}$ is the compact representation of node features (or the graph signal in the GSP literature).\nFor any node $v \\in \\mathcal{V}$, we denote its feature as a row vector $X_{v, :} \\in \\mathbb{R}^{1 \\times F}$. 
\nWe use $X_{:, i}$ to denote the $i$-th column of $X$.\n\n\\input{graph_convolution}\n\n\n\\section{Conclusion}\nIn this paper, we propose {LanczosNet} which leverages the Lanczos algorithm to construct a low rank approximation of the graph Laplacian.\nIt not only provides an efficient way to gather multi-scale information for graph convolution but also enables learning spectral filters.\nAdditionally, we propose a model variant {AdaLanczosNet} which facilitates graph kernel and node embedding learning.\nWe show that our model has a close relationship with graph based manifold learning, especially diffusion map.\nExperimental results demonstrate that our model outperforms a range of other graph networks, on challenging graph problems.\nWe are currently exploring customized eigen-decomposition methods for tridiagonal matrices, which will potentially further improve our {AdaLanczosNet}. \nOverall, work in this direction holds promise for allowing deep learning to scale up to very large graph problems.\n\\section{Lanczos Network and Diffusion Maps}\\label{sec:relation}\n\nIn this section, we highlight the relationship between {LanczosNet} and an important example of graph based manifold learning algorithms, diffusion maps~\\cite{coifman2006diffusion}.\n\n\\paragraph{Diffusion Maps}\nIn diffusion maps, the weights in the adjacency matrix define a discrete random walk over the graph, where the Markov transition matrix $ P = D^{-1} A $ shows the transition probability in a single time step. \nTherefore, $P^t_{i, j}$ sums the probability of all paths of length $t$ that start at node $i$ and end at node $j$. \nIt is shown in~\\cite{coifman2006diffusion} that $P$ can be used to define an inner product in a Hilbert space. \nSpecifically, we use the eigenvalues and right eigenvectors $\\{ \\lambda_l, \\psi_l \\}_{l = 1}^N$ of $P$ to define a diffusion mapping $\\Phi_t$ as,\n\\begin{align}\n\\label{eq:dm}\n \\Phi_{t}(i) = \\left(\\lambda_{1}^{t} \\psi_{1}(i), \\lambda_{2}^{t} \\psi_{2}(i), \\dots, \\lambda_{N}^{t} \\psi_{N}(i) \\right),\n\\end{align}\nwhere $\\psi_{l}(i)$ is the $i$-th entry of the eigenvector $\\psi_l$. Since the row stochastic matrix $P$ is similar to $S$, i.e., $P = D^{-1\/2} S D^{1\/2}$, we have $\\psi_{l} = D^{-1\/2} u_l$. The mapping $\\Phi_t$ satisfies $\\sum_{k = 1}^N P^t_{i,k} P^t_{j, k} \/D_{k, k} = \\left \\langle \\Phi_t(i), \\Phi_t(j) \\right \\rangle$, where $\\langle \\cdot, \\cdot \\rangle$ is the inner product over Euclidean space.\nThe diffusion distance between $i$ and $j$, $d^2_{\\mathrm{DM}, t}(i, j) =\\left \\| \\Phi_t(i) - \\Phi_t(j) \\right \\|^2 = \\sum_{k = 1}^{N} {( P^t_{i, k} - P^t_{j, k})^2} \/ {D_{k,k}}$,\nis the weighted-$l_2$ proximity between the probability clouds of random walkers starting at $i$ and ending at $j$ after $t$ steps. \nSince all eigenvalues of $S$ reside in the interval $[-1, 1]$, for some large $t$, $\\lambda_l^{t}$ in Eq.~(\\ref{eq:dm}) is close to zero, and $d_{\\mathrm{DM}, t}$ can be well approximated by using only a few largest eigenvalues and their eigenvectors. 
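\nAs an illustration, the mapping in Eq.~(\\ref{eq:dm}) can be computed through the symmetric matrix $S = D^{-1\/2} A D^{-1\/2}$. The following NumPy sketch (assuming dense matrices, a connected graph and an integer scale $t$; variable names are ours) returns a matrix whose $i$-th row is $\\Phi_{t}(i)$:\n\\begin{verbatim}\nimport numpy as np\n\ndef diffusion_map(A, t):\n    # Phi_t via S = D^{-1\/2} A D^{-1\/2}, which is similar to P = D^{-1} A\n    d = A.sum(axis=1)\n    D_inv_sqrt = np.diag(1.0 \/ np.sqrt(d))\n    S = D_inv_sqrt @ A @ D_inv_sqrt\n    lam, U = np.linalg.eigh(S)  # eigenvalues of S equal those of P\n    idx = np.argsort(-lam)  # sort descending\n    lam, U = lam[idx], U[:, idx]\n    Psi = D_inv_sqrt @ U  # right eigenvectors of P: psi_l = D^{-1\/2} u_l\n    return Psi * lam ** t  # column l is lambda_l^t psi_l\n\\end{verbatim}\nTruncating the returned matrix to its first few columns gives the approximation of $d_{\\mathrm{DM}, t}$ discussed above.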
\n\n\n\\paragraph{Connection to Graph Convolution} Apart from using diffusion maps to embed node features $X$ at different time scales, one can use it to compute the frequency representations of $X$ as below,\n\\begin{align}\\label{eq:gft}\n \\hat{X} = \\Lambda^{t} U^{\\top}X,\n\\end{align}\nwhere $U$ are the eigenvectors of $S$ and define the graph Fourier transform.\nThe frequency representation $\\hat{X}$ is weighted by the powers of the eigenvalues $\\lambda_l^t$, suppressing entries with small magnitude of eigenvalues.\nRecall that in the convolution layer Eq.~(\\ref{eq:learnable_filter_fix}) of {LanczosNet}, we use multiple such frequency representations with different scales $t$ and replace the eigenvalues $\\Lambda$ in Eq.~(\\ref{eq:gft}) with their approximation, i.e., Ritz values.\nTherefore, in {LanczosNet}, spectral filters are actually applied to the frequency representations which are obtained by projecting the node features $X$ onto multiple diffusion maps with different scales.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Experiments}\n\nIn this section, we compare our two model variants against $9$ recent graph networks, including graph convolution networks for fingerprint (GCN-FP)~\\cite{duvenaud2015convolutional}, gated graph neural networks (GGNN)~\\cite{li2015gated}, diffusion convolutional neural networks (DCNN)~\\cite{atwood2016diffusion}, Chebyshev networks (ChebyNet)~\\cite{defferrard2016convolutional}, graph convolutional networks (GCN)~\\cite{kipf2016semi}, message passing neural networks (MPNN)~\\cite{Gilmer17}, graph sample and aggregate (GraphSAGE)~\\cite{hamilton2017inductive}, graph partition neural networks (GPNN)~\\cite{liao2018graph}, graph attention networks (GAT)~\\cite{velickovic2017graph}. \nWe test them on two sets of tasks: (1) semi-supervised document classification on $3$ citation networks~\\cite{yang2016revisiting},\n(2) supervised regression of molecule property on QM8 quantum chemistry dataset~\\cite{ramakrishnan2015electronic}.\nFor fair comparison, we only tune model-related hyperparameters in all our experiments and share the others, e.g., using the same batch size.\nWe carefully tune hyperparameters based on cross-validation and report the best performance of each competitor.\nPlease refer to the appendix for more details on hyperparameters.\nWe implement all methods using PyTorch~\\cite{paszke2017automatic} and release the code at \\url{https:\/\/github.com\/lrjconan\/LanczosNetwork}. \n\n\n\\subsection{Citation Networks}\nThree citation networks used in this experiment are: Cora, Citeseer and Pubmed.\nFor each network, nodes are documents and connected based on their citation links.\nEach node is associated with a bag-of-words feature vector.\nWe use the same pre-processing procedure and follow the transductive setting as in~\\cite{yang2016revisiting}.\nIn particular, given a portion of nodes and their labeled content categories, e.g., history, science, the task is to predict the category for other unlabeled nodes within the same graph.\nThe statistics of these datasets are summarized in the appendix.\nAll experiments are repeated $10$ times with different random seeds. 
\nDuring each run, all methods share the same random seed.\nWe first experiment with the public data split and observe severe overfitting for almost all algorithms.\nTo mitigate overfitting and test the robustness of models, we then increase the difficulty of the task by reducing the portion of training examples to several levels and randomly split data.\n\nExperimental results and exact portions of training examples are shown in Table.~\\ref{table:citation_net}.\nWe use the reported best hyperparameters when available for public split and do cross-validation otherwise.\nHyperparameters are reported in the appendix.\nFrom the table, we see that for random splits with different portion of training examples, since each run of experiment uses a separate random split, the overall variance is larger than its public counterpart. We see that GAT achieves the best performance on the public split but performs poorly on random splits with different portions of training examples.\nThis is partly due to the fact that GAT uses multiple dropout throughout the model which helps only if there is overfitting.\nWe can see that either {LanczosNet} or {AdaLanczosNet} achieves state-of-the-art accuracy on random difficult splits and performs closely with respect to GAT on public splits.\nThis may be attributed to the fact that with fewer training examples, the model requires longer scale schemes to spread supervised information over the graph.\nOur model provides an efficient way of leveraging such long scale information.\n\n\\begin{table}[t]\n\\centering\n\\resizebox{\\linewidth}{!}{%\n\\setlength{\\tabcolsep}{3pt}\n\\begin{tabular}{@{}c|cccccccccc@{}}\n\\hline\n\\toprule\nCora & GCN-FP & GGNN & DCNN & ChebyNet & GCN & MPNN & GraphSAGE & GAT & LNet & AdaLNet \\\\\n\\midrule\n\\midrule\nPublic & 74.6 $\\pm$ 0.7 & 77.6 $\\pm$ 1.7 & 79.7 $\\pm$ 0.8 & 78.0 $\\pm$ 1.2 & 80.5 $\\pm$ 0.8 & 78.0 $\\pm$ 1.1 & 74.5 $\\pm$ 0.8 & \\textbf{82.6 $\\pm$ 0.7} & 79.5 $\\pm$ 1.8 & 80.4 $\\pm$ 1.1 \\\\\n3\\% & 71.7 $\\pm$ 2.4 & 73.1 $\\pm$ 2.3 & 76.7 $\\pm$ 2.5 & 62.1 $\\pm$ 6.7 & 74.0 $\\pm$ 2.8 & 72.0 $\\pm$ 4.6 & 64.2 $\\pm$ 4.0 & 56.8 $\\pm$ 7.9 & 76.3 $\\pm$ 2.3 & \\textbf{77.7 $\\pm$ 2.4} \\\\\n1\\% & 59.6 $\\pm$ 6.5 & 60.5 $\\pm$ 7.1 & 66.4 $\\pm$ 8.2 & 44.2 $\\pm$ 5.6 & 61.0 $\\pm$ 7.2 & 56.7 $\\pm$ 5.9 & 49.0 $\\pm$ 5.8 & 48.6 $\\pm$ 8.0 & 66.1 $\\pm$ 8.2 & \\textbf{67.5 $\\pm$ 8.7} \\\\\n0.5\\% & 50.5 $\\pm$ 6.0 & 48.2 $\\pm$ 5.7 & 59.0 $\\pm$ 10.7 & 33.9 $\\pm$ 5.0 & 52.9 $\\pm$ 7.4 & 46.5 $\\pm$ 7.5 & 37.5 $\\pm$ 5.4 & 41.4 $\\pm$ 6.9 & 58.1 $\\pm$ 8.2 & \\textbf{60.8 $\\pm$ 9.0} \\\\\n\\midrule\nCiteseer & GCN-FP & GGNN & DCNN & ChebyNet & GCN & MPNN & GraphSAGE & GAT & LNet & AdaLNet \\\\\n\\midrule\n\\midrule\nPublic & 61.5 $\\pm$ 0.9 & 64.6 $\\pm$ 1.3 & 69.4 $\\pm$ 1.3 & 70.1 $\\pm$ 0.8 & 68.1 $\\pm$ 1.3 & 64.0 $\\pm$ 1.9 & 67.2 $\\pm$ 1.0 & \\textbf{72.2 $\\pm$ 0.9} & 66.2 $\\pm$ 1.9 & 68.7 $\\pm$ 1.0 \\\\\n1\\% & 54.3 $\\pm$ 4.4 & 56.0 $\\pm$ 3.4 & 62.2 $\\pm$ 2.5 & 59.4 $\\pm$ 5.4 & 58.3 $\\pm$ 4.0 & 54.3 $\\pm$ 3.5 & 51.0 $\\pm$ 5.7 & 46.5 $\\pm$ 9.3 & 61.3 $\\pm$ 3.9 & \\textbf{63.3 $\\pm$ 1.8} \\\\\n0.5\\% & 43.9 $\\pm$ 4.2 & 44.3 $\\pm$ 3.8 & 53.1 $\\pm$ 4.4 & 45.3 $\\pm$ 6.6 & 47.7 $\\pm$ 4.4 & 41.8 $\\pm$ 5.0 & 33.8 $\\pm$ 7.0 & 38.2 $\\pm$ 7.1 & 53.2 $\\pm$ 4.0 & \\textbf{53.8 $\\pm$ 4.7} \\\\\n0.3\\%& 38.4 $\\pm$ 5.8 & 36.5 $\\pm$ 5.1 & 44.3 $\\pm$ 5.1 & 39.3 $\\pm$ 4.9 & 39.2 $\\pm$ 6.3 & 36.0 $\\pm$ 6.1 & 25.7 $\\pm$ 6.1 & 30.9 $\\pm$ 6.9 & 44.4 $\\pm$ 4.5 & \\textbf{46.7 $\\pm$ 5.6} 
\\\\ \n\\midrule\nPubmed & GCN-FP & GGNN & DCNN & ChebyNet & GCN & MPNN & GraphSAGE & GAT & LNet & AdaLNet \\\\\n\\midrule\n\\midrule\nPublic & 76.0 $\\pm$ 0.7 & 75.8 $\\pm$ 0.9 & 76.8 $\\pm$ 0.8 & 69.8 $\\pm$ 1.1 & 77.8 $\\pm$ 0.7 & 75.6 $\\pm$ 1.0 & 76.8 $\\pm$ 0.6 & 76.7 $\\pm$ 0.5 & \\textbf{78.3 $\\pm$ 0.3} & 78.1 $\\pm$ 0.4 \\\\\n0.1\\% & 70.3 $\\pm$ 4.7 & 70.4 $\\pm$ 4.5 & 73.1 $\\pm$ 4.7 & 55.2 $\\pm$ 6.8 & 73.0 $\\pm$ 5.5 & 67.3 $\\pm$ 4.7 & 65.4 $\\pm$ 6.2 & 59.6 $\\pm$ 9.5 & \\textbf{73.4 $\\pm$ 5.1} & 72.8 $\\pm$ 4.6 \\\\\n0.05\\% & 63.2 $\\pm$ 4.7 & 63.3 $\\pm$ 4.0 & 66.7 $\\pm$ 5.3 & 48.2 $\\pm$ 7.4 & 64.6 $\\pm$ 7.5 & 59.6 $\\pm$ 4.0 & 53.0 $\\pm$ 8.0 & 50.4 $\\pm$ 9.7 & \\textbf{68.8 $\\pm$ 5.6} & 66.0 $\\pm$ 4.5 \\\\\n0.03\\% & 56.2 $\\pm$ 7.7 & 55.8 $\\pm$ 7.7 & 60.9 $\\pm$ 8.2 & 45.3 $\\pm$ 4.5 & 57.9 $\\pm$ 8.1 & 53.9 $\\pm$ 6.9 & 45.4 $\\pm$ 5.5 & 50.9 $\\pm$ 8.8 & 60.4 $\\pm$ 8.6 & \\textbf{61.0 $\\pm$ 8.7} \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\caption{Test accuracy over $10$ runs on citation networks. The public splits of Cora, Citeseer and Pubmed contain $5.2\\%$, $3.6\\%$ and $0.3\\%$ training examples, respectively.}\n\\label{table:citation_net}\n\\end{table}\n\n\\subsection{Quantum Chemistry}\n\nWe then benchmark all algorithms on the QM8 quantum chemistry dataset, which comes from a recent study on modeling quantum mechanical calculations of electronic spectra and excited state energy of small molecules~\\cite{ramakrishnan2015electronic}.\nThe setup of QM8 is as follows.\nAtoms are treated as nodes and are connected following the structure of the corresponding molecule.\nEach edge is labeled with a chemical bond.\nNote that two atoms in one molecule can have multiple edges belonging to different chemical bonds.\nTherefore, a molecule is actually modeled as a multigraph.\nWe also use explicit hydrogens in molecule graphs as suggested in~\\cite{Gilmer17}.\nSince some models cannot easily leverage edge features, we use the molecule graph itself as the only input information for all models so that the comparison is fair.\nAs demonstrated in our ablation studies, learning node embeddings for atoms is very helpful.\nTherefore, we augment all competitors and our models with this component.\nThe task is to predict $16$ different quantities of electronic spectra and energy per molecule graph, which boils down to a regression problem.\nThere are $21786$ molecule graphs in total, with average numbers of nodes and edges per graph around $16$ and $21$, respectively.\nThere are $6$ different chemical bonds and $70$ different atoms throughout the dataset.\nWe use the split provided by DeepChem~\\footnote{https:\/\/deepchem.io\/}, which has $17428$, $2179$ and $2179$ graphs for training, validation and testing, respectively.\nFollowing~\\cite{Gilmer17,wu2018moleculenet}, we use mean squared error (MSE) as the loss for training and weighted mean absolute error (MAE) as the evaluation metric.\nWe repeat all experiments $3$ times with different random seeds and report the average performance and standard deviation.\nThe same random seed is shared by all methods per run.\nHyperparameters are reported in the appendix.\nThe validation and test MAEs are shown in Table~\\ref{table:qm8}.\nAs the table shows, {LanczosNet} and {AdaLanczosNet} achieve better performance than all other competitors.\nNote that DCNN also achieves good performance with carefully chosen scale parameters, since it is somewhat similar to our model in terms of leveraging multi-scale 
information.\n\n\n\\begin{table}[t]\n\\centering\n{%\n\\begin{tabular}{@{}c|cc@{}}\n\\hline\n\\toprule\nMethods & Validation MAE ($\\times 1.0e^{-3}$) & Test MAE ($\\times 1.0e^{-3}$) \\\\\n\\midrule\n\\midrule\nGCN-FP~\\cite{duvenaud2015convolutional} & 15.06 $\\pm$ 0.04 & 14.80 $\\pm$ 0.09 \\\\\nGGNN~\\cite{li2015gated} & 12.94 $\\pm$ 0.05 & 12.67 $\\pm$ 0.22 \\\\\nDCNN~\\cite{atwood2016diffusion} & 10.14 $\\pm$ 0.05 & 9.97 $\\pm$ 0.09 \\\\\nChebyNet~\\cite{defferrard2016convolutional} & 10.24 $\\pm$ 0.06 & 10.07 $\\pm$ 0.09 \\\\\nGCN~\\cite{kipf2016semi} & 11.68 $\\pm$ 0.09 & 11.41 $\\pm$ 0.10 \\\\\nMPNN~\\cite{Gilmer17} & 11.16 $\\pm$ 0.13 & 11.08 $\\pm$ 0.11 \\\\\nGraphSAGE~\\cite{hamilton2017inductive} & 13.19 $\\pm$ 0.04 & 12.95 $\\pm$ 0.11 \\\\\nGPNN~\\cite{liao2018graph} & 12.81 $\\pm$ 0.80 & 12.39 $\\pm$ 0.77 \\\\\nGAT~\\cite{velickovic2017graph} & 11.39 $\\pm$ 0.09 & 11.02 $\\pm$ 0.06 \\\\\nLanczosNet & \\textbf{9.65 $\\pm$ 0.19} & \\textbf{9.58 $\\pm$ 0.14} \\\\\nAdaLanczosNet & 10.10 $\\pm$ 0.22 & 9.97 $\\pm$ 0.20 \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\caption{Mean absolute error on QM8 dataset.}\n\\vspace{-0.5cm}\n\\label{table:qm8}\n\\end{table}\n\n\n\\subsection{Ablation Study}\n\nWe also did a thorough ablation study of our modeling components on the validation set of QM8.\n\n\\begin{table}\n\\centering\n\\resizebox{\\linewidth}{!}\n{%\n\\setlength{\\tabcolsep}{3pt}\n\\begin{tabular}{@{}ccccccc|c@{}}\n\\hline\n\\toprule\nModel & \\begin{tabular}{@{}c@{}}Graph \\\\ Kernel\\end{tabular} & \\begin{tabular}{@{}c@{}}Node \\\\ Embedding\\end{tabular} & \\begin{tabular}{@{}c@{}}Spectral \\\\ Filter\\end{tabular} & Short Scales & Long Scales & \\begin{tabular}{@{}c@{}}Lanczos \\\\ Step\\end{tabular} & \\begin{tabular}{@{}c@{}}Validation \\\\ MAE ($\\times 1.0e^{-3}$)\\end{tabular} \\\\\n\\midrule\n\\midrule\nLanczosNet & & one-hot & & \\{1, 2, 3\\} & & & 10.71 \\\\\nLanczosNet & & one-hot & & \\{3, 5, 7\\} & & & 10.60 \\\\\nLanczosNet & & one-hot & & & \\{10, 20, 30\\} & 20 & 10.54 \\\\\nLanczosNet & & one-hot & & \\{3, 5 ,7\\} & \\{10, 20, 30\\} & 20 & \\textbf{10.41} \\\\\n\\midrule\n\\midrule\nLanczosNet & & one-hot & & & \\{10, 20, 30\\} & 5 & 10.49 \\\\\nLanczosNet & & one-hot & & & \\{10, 20, 30\\} & 10 & \\textbf{10.44} \\\\\nLanczosNet & & one-hot & & & \\{10, 20, 30\\} & 20 & 10.54 \\\\\nLanczosNet & & one-hot & & & \\{10, 20, 30\\} & 40 & 10.49 \\\\\n\\midrule\n\\midrule\nLanczosNet & & one-hot & 3-MLP & \\{3, 5 ,7\\} & \\{10, 20, 30\\} & 20 & 10.44 \\\\\nLanczosNet & & one-hot & 5-MLP & \\{3, 5 ,7\\} & \\{10, 20, 30\\} & 20 & 10.54 \\\\\nLanczosNet & & \\checkmark & 3-MLP & \\{3, 5 ,7\\} & \\{10, 20, 30\\} & 20 & 10.26 \\\\\nLanczosNet & & \\checkmark & 3-MLP & & \\{1, 2, 3, 5, 7, 10, 20, 30\\} & 20 & \\textbf{9.56} \\\\\n\\midrule\n\\midrule\nAdaLanczosNet & \\checkmark & one-hot & 3-MLP & \\{3, 5, 7\\} & \\{10, 20, 30\\} & 20 & 10.99 \\\\\nAdaLanczosNet & & \\checkmark & 3-MLP & \\{3, 5, 7\\} & \\{10, 20, 30\\} & 20 & 10.20 \\\\\nAdaLanczosNet & & \\checkmark & 3-MLP & \\{1, 2, 3\\} & \\{5, 7, 10, 20, 30\\} & 20 & \\textbf{9.96} \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\caption{Ablation study on QM8 dataset. Empty cell means that the component is neither used nor applicable. $X$-MLP means a MLP with $X$ hidden layers. `one-hot' means the node embedding is fixed as the one-hot encoding throughout learning and inference.}\n\\label{table:qm8_ablation}\n\\end{table}\n\n\\textbf{Multi-Scale Graph Convolution:} \nWe first study the effect of multi-scale graph convolution. 
\nIn order to rule out the impact of other factors, we use {LanczosNet}, do not employ the learnable spectral filter and use the one-hot encoding as the node embedding.\nThe results are shown in the first row of Table~\\ref{table:qm8_ablation}.\nUsing long scales for graph convolution clearly helps on this task.\nCombining both short and long scales further improves results.\n\n\\textbf{Lanczos Step:}\nWe then investigate the Lanczos step since it will have an impact on the accuracy of the low rank approximation induced by the Lanczos algorithm.\nThe results are shown in the second row of Table~\\ref{table:qm8_ablation}.\nWe can see that the performance is better with a relatively small Lanczos step like $10$ and $20$ which makes sense since the average number of nodes in this dataset is around $16$.\n\n\\textbf{Learning Spectral Filter:}\nWe then study whether learning spectral filter will help improve performance.\nThe results are shown in the third row of Table~\\ref{table:qm8_ablation}.\nAdding a $3$-layer MLP does help reduce the error compared to not using any learnable spectral filter.\nNote that the MLP consists of $128$ hidden units per layer and uses ReLU as the nonlinearity.\nHowever, using a deeper MLP does not seem to be helpful which might be caused by the challenges in optimization.\n\n\\textbf{Graph Kernel\/Node Embedding:}\nAt last, we study the usefulness of adding graph kernel and node embeddings.\nWe first fix the node embedding as one-hot encoding and learn a $3$ layer MLP which is the function $f_{\\theta}$ in Eq.~(\\ref{eq:graph_kernel}).\nNext, we learn the node embeddings directly.\nIntuitively, learning embeddings amounts to learn a separate function $f$ per node whereas our graph kernel learning enforces that $f$ is shared for all nodes, thus being more restrictive.\nAs shown in the $3$-rd and $4$-th rows of the table, learning node embeddings significantly improves the performance for both {LanczosNet} and {AdaLanczosNet} and is more effective than learning graph kernels.\nAlso, tuning the scale parameters further boosts the performance.\n\\subsection{Graph Convolution}\n\n\\paragraph{Graph Fourier Transform}\nGiven input node features $X$, we now discuss how to perform a graph convolution.\nBased on the adjacency matrix $A$, we compute the graph Laplacian $L$ which can be defined in different ways: (1) $L = D - A$; (2) $L = I - D^{-1} A$; (3) $L = I - D^{-\\frac{1}{2}} A D^{-\\frac{1}{2}}$, where $D$ is a diagonal degree matrix and $D_{i,i} = \\sum_{j = 1}^N {A_{i,j}}$.\nThe definition (3) is often used in the GSP literature due to the fact that it is real symmetric, positive semi-definite (PSD) and has eigenvalues lying in $[0, 2]$.\nIn certain applications~\\cite{kipf2016semi}, it was found that adding self-loops, i.e., changing $A$ to $A+I$, and using the affinity matrix $S = D^{-\\frac{1}{2}} A D^{-\\frac{1}{2}}$ instead of $L$ gives better results.\nSince $S$ is real symmetric, based on spectral decomposition, we have $ S = U \\Lambda U^{\\top}$ where $U$ is an orthogonal matrix and its column vectors are the eigenvectors of $S$. \nThe diagonal matrix $\\Lambda$ contains the sorted eigenvalues where $\\Lambda_{i,i}=\\lambda_i$ and $1\\geq \\lambda_1 \\geq \\dots \\geq \\lambda_N \\geq -1$. \nBased on the eigenbasis, we can define the \\textit{graph Fourier transform} $Y = U^{\\top} X$ and its inverse transform $\\hat{X} = U Y$ following~\\cite{shuman2013emerging}. 
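\nAs a concrete example, a minimal NumPy sketch of this construction (dense matrices assumed; variable names are ours) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef graph_fourier(A, X):\n    # S = D^{-1\/2} (A + I) D^{-1\/2} with self-loops, then Y = U^T X\n    A = A + np.eye(A.shape[0])  # add self-loops\n    d = A.sum(axis=1)\n    D_inv_sqrt = np.diag(1.0 \/ np.sqrt(d))\n    S = D_inv_sqrt @ A @ D_inv_sqrt\n    lam, U = np.linalg.eigh(S)\n    idx = np.argsort(-lam)  # lambda_1 >= ... >= lambda_N, all in [-1, 1]\n    lam, U = lam[idx], U[:, idx]\n    Y = U.T @ X  # graph Fourier transform\n    return S, lam, U, Y  # inverse transform: X = U @ Y\n\\end{verbatim}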
\nNote that $L = I - D^{-\\frac{1}{2}} A D^{-\\frac{1}{2}}$ shares the same eigenvectors with $S=D^{-\\frac{1}{2}} A D^{-\\frac{1}{2}}$ and the eigenvalues of $L$ are $\\mu_i = 1 - \\lambda_i$.\nTherefore, $L$ and $S$ share the same \\textit{graph Fourier transform} which justifies the usages of $S$.\nDifferent forms of filters can be further constructed in the spectral domain.\n\n\\paragraph{Localized Polynomial Filter}\nA $\\tau$-localized polynomial filter is typically adopted in GSP literature~\\cite{shuman2013emerging}, $g_{w}(\\Lambda) = \\sum_{t=0}^{\\tau-1} w_{t} \\Lambda^{t}$, where $\\bm{w} = \\left[w_{0}, w_{1}, \\dots, w_{\\tau-1}\\right] \\in \\mathbb{R}^{\\tau \\times 1}$ is the filter coefficient, i.e., learnable parameter.\nThe filter is $\\tau$-localized in the sense that the filtering leverages information from nodes which are at most $\\tau$-hops away. \nOne prominent example of this class is the Chebyshev polynomial filter~\\cite{defferrard2016convolutional}. \nHere the graph Laplacian is modified to $\\tilde{L} = 2L\/\\lambda_{\\text{max}} - I$ such that its eigenvalues fall into $[-1, 1]$. \nThen the Chebyshev polynomial recursion is applied: $\\tilde{X}(t) = 2 \\tilde{L} \\tilde{X}(t-1) - \\tilde{X}(t-2)$ where $\\tilde{X}(0) = X$ and $\\tilde{X}(1) = \\tilde{L} X$. \nFor a pair of input and output channels $(i, j)$, the final filtering becomes, \n$y_{i, j} = [\\tilde{X}(0)_{:, i}, \\dots, \\tilde{X}(\\tau - 1)_{:, i}] \\bm{w}_{i,j}$, where $\\left[ \\cdot \\right]$ means concatenation along columns and $\\bm{w}_{i,j} \\in \\mathbb{R}^{\\tau \\times 1}$.\nChebyshev polynomials provide two benefits: they form an orthogonal basis of $L^2([-1, 1], dy\/\\sqrt{1 - y^2})$ and one avoids the spectral decomposition of $\\tilde{L}$ in the filtering. \nHowever, the functional form of the spectral filter is not learnable, and cannot adapt to the data.\n\nIn this paper, instead of using the modified graph Laplacian $\\tilde{L}$, we use the aforementioned $S$. \nTherefore, we can write the localized polynomial filtering in a more general form as,\n\\begin{align}\\label{eq:poly_filter}\nY = \\sum_{t=0}^{\\tau - 1} g_{t}(S, \\dots, S^{t}, X) W_{t},\n\\end{align}\nwhere $g_{t}$ is a function that takes node features $X$ and powers of the affinity matrices up to the $t$-th order as input and outputs a $N \\times F$ matrix. \n$W_{t} \\in \\mathbb{R}^{F \\times O}$ is the corresponding filter coefficient and $Y \\in \\mathbb{R}^{N \\times O}$ is the output.\nOne can easily verify that in the Chebyshev polynomial filter, any $i$-th column of the corresponding $g_{t}(X, S, \\dots, S^{t})$ lies in the \\textit{Krylov subspace} $\\mathcal{K}_{t+1}(S, X_{:, i}) \\equiv \\mathrm{span} \\{X_{:, i}, S X_{:, i}, \\dots, S^{t}X_{:, i}\\}$.\nThis naturally motivates the usage of Krylov subspace methods, like the Lanczos algorithm~\\cite{lanczos1950iteration}, since it provides an orthonormal basis for the above Krylov subspace, thus making the filter coefficients compact.\n\n\\section{Introduction}\n\nGraph-structured data is ubiquitous in real world applications, social networks, gene expression regulatory networks, protein-protein interactions, and many other physical systems. 
\nHow to model such data using machine learning, especially deep learning, has become a central research question~\\cite{battaglia2018relational}.\nFor supervised and semi-supervised tasks such as graph or node classification and regression, learning based models can be roughly categorized into two classes, formulated either in terms of graph convolutions~\\cite{bruna2013spectral} or recurrent neural networks~\\cite{scarselli2009graph}. \n\nMethods based on recurrent neural networks (RNN), especially graph neural networks (GNN)~\\cite{scarselli2009graph}, repeatedly unroll a message passing process over the graph by exchanging information between the nodes.\nIn theory, a GNN can have as large a model capacity as its convolutional counterpart.\nHowever, due to the instability of RNN dynamics and difficulty of optimization, GNN and its variants are generally slower and harder to train.\n\nIn this paper we focus on graph convolution based methods.\nBuilt on top of the graph signal processing (GSP) approaches~\\cite{ortega2018graph}, these methods extend convolution operators to graphs by leveraging spectral graph theory, graph wavelet theory, etc.\nGraph convolutions can be stacked and combined with nonlinear activation functions to build deep models, just as in regular convolutional neural networks (CNN).\nThey often have large model capacity and achieve promising results.\nAlso, graph convolution can be easily implemented with modern scientific computing libraries.\n\nThere are two main issues with current graph convolution approaches.\nFirst, it is not clear how to efficiently leverage multi-scale information except by directly stacking multiple layers.\nHaving an effective multi-scale scheme is key for enabling the model to be invariant to scale changes, and to capture many intrinsic regularities~\\cite{witkin1983scale,coifman2005geometric}.\nGraph coarsening methods have been proposed to form a hierarchy of multi-scale graphs \\cite{defferrard2016convolutional}, but this coarsening process is fixed during both inference and learning which may cause some bias.\nAlternatively, the graph signal can be multiplied by the exponentiated graph Laplacian, where the exponent indicates the scale of the diffusion process on the graph~\\cite{atwood2016diffusion}.\nUnfortunately, the computation and memory cost increases linearly with the exponent, which prohibits the exploitation of long scale diffusion in practice.\nOther fast methods for computing matrix power such as exponentiating by squaring are very memory intensive, even for moderately large graphs.\nSecond, spectral filters within current graph convolution based models are mostly fixed. \nIn the context of image processing, using a Gaussian kernel along with a spectral filter $f(\\lambda) = 2\\lambda - \\lambda^2$ corresponds to running forward the heat equation (blurring) followed by running it backwards (sharpening)~\\cite{singer2009diffusion}. \nMulti-scale kernels introduced in~\\cite{rabin2018multi} extends the idea of forward-backward diffusion process and can be represented as polynomials of matrices related to a Gaussian kernel.\nLearning the spectral filters is thus beneficial since it learns the stochastic processes on the graph which produce useful representations for particular tasks.\nHowever, how to learn spectral filters which have large model capacity is largely underexplored. 
\n\nIn this paper, we propose the Lanczos network (LanczosNet) to overcome the aforementioned issues.\nFirst, based on the tridiagonal decomposition implied by the Lanczos algorithm, our model exploits the low rank approximation of the graph Laplacian. \nThis approximation facilitates efficient computation of matrix powers thus gathering multi-scale information easily.\nSecond, we design learnable spectral filters based on the approximation which effectively increase model capacity.\nIn scenarios where one wants to learn the graph kernel and\/or node embeddings, we propose another variant, \\emph{i.e}\\onedot} \\def\\Ie{\\emph{I.e}\\onedot, adaptive Lanczos network ({AdaLanczosNet}), which back-propagates through the Lanczos algorithm.\nWe show that our proposed model is closely related to graph based manifold learning approaches such as diffusion maps which could potentially inspire more work from the intersection between deep graph networks and manifold learning.\nWe benchmark against $9$ recent deep graph networks, including both convolutional and RNN based methods, on citation networks and a quantum chemistry graph regression problem, and achieve state-of-the-art results in most tasks.\n\n\\section{Lanczos Networks}\n\nIn this section, we first introduce the Lanczos algorithm which approximates the affinity matrix $S$. \nWe present our first model, called Lanczos network ({LanczosNet}), in which we execute the Lanczos algorithm once per graph and fix the basis throughout inference and learning.\nThen we introduce the adaptive Lanczos network ({AdaLanczosNet}) in which we learn the graph kernel and\/or node embedding by back-propagating through the Lanczos algorithm.\n\n\\begin{minipage}{6.6cm}\n\\begin{algorithm}[H]\n\\caption{: Lanczos Algorithm}\\label{alg:lanczos}\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Input:} $S, x, K, \\epsilon$\n\\STATE \\textbf{Initialization:} $\\beta_{0} = 0$, $q_{0} = 0$, and $q_{1} = x \/ \\Vert x \\Vert$\n\\STATE \\textbf{For} $j = 1, 2, \\dots, K $:\n\\STATE \\qquad $z = Sq_{j}$\n\\STATE \\qquad $\\gamma_{j} = q_{j}^{\\top}z$\n\\STATE \\qquad $z = z - \\gamma_{j}q_{j} - \\beta_{j-1}q_{j-1}$\n\\STATE \\qquad $\\beta_{j} = \\Vert z \\Vert_{2}$\n\\STATE \\qquad \\textbf{If} $\\beta_{j} < \\epsilon$, quit\n\\STATE \\qquad $q_{j+1} = z \/ \\beta_{j}$\n\\STATE\n\\STATE $Q = \\left[q_{1}, q_{2}, \\cdots, q_{K} \\right]$\n\\STATE Construct $T$ following Eq.~(\\ref{eq:tridiagonal})\n\\STATE Eigen decomposition $T = BRB^{\\top}$\n\\STATE Return $V=QB$ and $R$.\n\\end{algorithmic}\n\\end{algorithm}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{7.1cm}\n\\begin{algorithm}[H]\n\\caption{: LanczosNet }\\label{alg:graph_convolution_net}\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Input:} Signal $X$, Lanczos output $V$ and $R$, scale index sets $\\mathcal{S}$ and $\\mathcal{I}$,\n\\STATE \\textbf{Initialization:} $Y_0 = X$\n\\STATE \\textbf{For} $\\ell = 1, 2, \\dots, \\ell_c $:\n\\STATE \\qquad $Z = Y_{\\ell - 1}$, $\\mathcal{Z} = \\{\\emptyset \\}$\n\\STATE \\qquad \\textbf{For} $j = 1, 2, \\dots, \\max(\\mathcal{S})$:\n\\STATE \\qquad \\qquad $Z = SZ$\n\\STATE \\qquad \\qquad \\textbf{If} $j \\in \\mathcal{S}$:\n\\STATE \\qquad \\qquad \\qquad $\\mathcal{Z} = \\mathcal{Z} \\cup Z$\n\\STATE \\qquad \\textbf{For} $i \\in \\mathcal{I}$:\n\\STATE \\qquad \\qquad $\\mathcal{Z} = \\mathcal{Z} \\cup V \\hat{R}(\\mathcal{I}_i) V^{\\top}Y_{\\ell-1}$\n\\STATE \\qquad $Y_\\ell = \\text{concat}(\\mathcal{Z}) W_{\\ell}$\n\\STATE \\qquad \\textbf{If} $\\ell < L$\n\\STATE \\qquad 
\\qquad $Y_\\ell = \\text{Dropout}(\\sigma(Y_\\ell))$\n\\STATE Return $Y_{\\ell_c}$.\n\\end{algorithmic}\n\\end{algorithm}\n\\end{minipage}\n\n\n\\subsection{Lanczos Algorithm}\n\nGiven the aforementioned affinity matrix $S$\\footnote{When faced with a non-symmetric matrix, one can resort to the Arnoldi algorithm.} and node features $x \\in \\mathbb{R}^{N \\times 1}$, the $N$-step Lanczos algorithm computes an orthogonal matrix $Q$ and a symmetric tridiagonal matrix $T$, such that $Q^{\\top}SQ = T$. \nWe denote $Q = \\left[q_{1}, \\cdots, q_{N} \\right]$ where column vector $q_{i}$ is the $i$-th Lanczos vector. \n$T$ is illustrated as below, \n\\begin{align}\\label{eq:tridiagonal}\nT = \\left[ \\begin{array}{cccc} \\gamma_{1} & \\beta_{1} & & \\\\ \\beta_{1} & \\ddots & \\ddots & \\\\ & \\ddots & \\ddots & \\beta_{N-1} \\\\ & & \\beta_{N-1} & \\gamma_{N} \\end{array} \\right].\n\\end{align}\nOne can verify that $Q$ forms an orthonormal basis of the Krylov subspace $\\mathcal{K}_{N}(S, x)$ and the first $K$ columns of $Q$ forms the orthonormal basis of $\\mathcal{K}_{K}(S, x)$.\nThe Lanczos algorithm is shown in detail in Alg.~\\ref{alg:lanczos}.\nIntuitively, if we investigate the $j$-th column of the system $SQ = QT$ and rearrange terms, we obtain $\\beta_{j}q_{j+1} = Sq_{j} - \\beta_{j-1}q_{j-1} - \\gamma_{j}q_{j}$, which clearly explains lines $4$ to $6$ of the pseudocode, i.e., it tries to solve the system in an iterative manner.\nNote that the most expensive operation in the algorithm is the matrix-vector multiplication in line $4$.\nAfter obtaining the tridiagonal matrix $T$, we can compute the Ritz values and Ritz vectors which approximate the eigenvalues and eigenvectors of $S$ by diagonalizing the matrix $T$. \nWe only add this step in {LanczosNet} as we found back-propagating through the eigendecomposition in {AdaLanczosNet} is not numerically stable.\n\n\\subsection{LanczosNet}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{lanczos_net_model.pdf}\n \\vspace{-0.3cm}\n \\caption{The illustration of the model. The learnable spectral filters have different parameters per layer. 
The nonlinear activation function $\\sigma$ could be applied before or after the concatenation.}\n \\vspace{-0.5cm}\n \\label{fig:model}\n\\end{figure}\n\nIn this section, we first show the construction of the localized polynomial filter based on the Lanczos algorithm's output and discuss its limitations.\nThen we explain how to construct the spectral filter using a particular low rank approximation and how to further make the filter learnable.\nAt last, we elaborate how to construct multi-scale graph convolution and build a deep network.\n\n\\paragraph{Localized Polynomial Filter} \nFor the ease of demonstrating the concept of \\textit{Krylov subspace}, we consider a pair of input and output channels $(i,j)$.\nWe denote the input as $X_{:,i} \\in \\mathbb{R}^{N \\times 1}$ and the output as $Y_{:,j} \\in \\mathbb{R}^{N \\times 1}$.\nExecuting the Lanczos algorithm for $K$ steps with the normalized $X_{:,i}$ as the starting vector, one can obtain the orthonormal basis $\\tilde{Q}$ of $\\mathcal{K}_{K}(S, X_{:,i})$ and the corresponding tridiagonal matrix $\\tilde{T}$.\nRecall that in the localized polynomial filtering, given the orthonormal basis of $\\mathcal{K}_{K}(S, X_{:,i})$, one can write the graph convolution as\n\\begin{align}\\label{eq:krylov_filter_fix}\nY_{j} = \\tilde{Q} \\bm{w}_{i,j},\n\\end{align}\nwhere $\\tilde{Q} \\in \\mathbb{R}^{N \\times K}$ depends on the $X_{:,i}$ and $\\bm{w}_{i,j} \\in \\mathbb{R}^{K \\times 1}$ is the learnable parameter.\nThis filter has the benefit that the corresponding learnable coefficients are compact due to the orthonormal basis.\nHowever, if one wants to stack multiple graph convolution layers, the dependency of $\\tilde{Q}$ on $X_{:,i}$ implies that a separate run of Lanczos algorithm is necessary for each graph convolution layer which is computationally demanding.\n\n\\paragraph{Spectral Filter} \nIdeally, we would like to compute Lanczos vectors only once during the inference of a deep graph convolutional network.\nLuckily, this can be achieved if we take an alternative view of Lanczos algorithm.\nIn particular, we can choose a random starting vector with unit norm and treat the $K$ step Lanczos layer's output as the low rank approximation $S \\approx Q T Q^{\\top}$.\nNote that here $Q \\in \\mathbb{R}^{N \\times K}$ has orthonormal columns and does not depend on the node features $X_{i}$ and $T$ is a $K \\times K$ tridiagonal matrix.\nFollowing~\\cite{simon2000low}, we prove the theorem below to bound the approximation error.\n\\begin{restatable}{theorem}{errortheorem}\\label{the:error}\nLet $U \\Lambda U^{\\top}$ be the eigendecomposition of an $N \\times N$ symmetric matrix $S$ with $\\Lambda_{i,i} = \\lambda_i$, $\\lambda_1 \\geq \\dots \\geq \\lambda_N$ and $U = [u_1, \\dots, u_N]$.\nLet $\\mathcal{U}_j \\equiv \\mathrm{span}\\{u_1, \\dots, u_j\\}$.\nAssume $K$-step Lanczos algorithm starts with vector $v$ and outputs the orthogonal $Q \\in \\mathbb{R}^{N \\times K}$ and tridiagonal $T \\in \\mathbb{R}^{K \\times K}$.\nFor any $j$ with $1 < j < N$ and $K > j$, we have,\n\\begin{align}\n \\Vert S - Q T Q^{\\top} \\Vert_{F}^{2} \\leq \\sum_{i=1}^{j} \\lambda_i^2 \\left( \\frac{\\sin{\\left(v, \\mathcal{U}_i\\right)} \\prod_{k=1}^{j-1} (\\lambda_k - \\lambda_N) \/ (\\lambda_k - \\lambda_j) }{\\cos{\\left(v, u_i\\right)} T_{K-i}(1+2\\gamma_i) } \\right)^2 + \\sum_{i=j+1}^{N} \\lambda_i^2, \n \\nonumber\n\\end{align}\nwhere $T_{K-i}(x)$ is the Chebyshev Polynomial of degree $K-i$ and $\\gamma_i = (\\lambda_{i} - \\lambda_{i+1}) \/ 
(\\lambda_{i+1} - \\lambda_{N})$.\n\\end{restatable}\nWe leave the proof to the appendix. \nNote that the term $(\\sum_{i=j+1}^{N} \\lambda_i^2)^{1\/2}$\nis the Frobenius norm of the error between $S$ and the best rank-$j$ approximation $S_j$.\nWe decompose the tridiagonal matrix $T = B R B^{\\top}$, where the $K \\times K$ diagonal matrix $R$ contains the Ritz values and $B \\in \\mathbb{R}^{K \\times K}$ is an orthogonal matrix. We have a low rank approximation of the affinity matrix $S \\approx V R V^{\\top}$, where $V = QB$. \nTherefore, we can rewrite the graph convolution as,\n\\begin{align}\\label{eq:identity_filter_fix}\nY_{j} = [X_i, S X_i, \\dots, S^{K-1} X_i] \\bm{w}_{i,j} \\approx [X_i, V R V^{\\top} X_i, \\dots, V R^{K-1} V^{\\top} X_i] \\bm{w}_{i,j},\n\\end{align}\nThe difference between Eq.~(\\ref{eq:krylov_filter_fix}) and Eq.~(\\ref{eq:identity_filter_fix}) is that the former uses the orthonormal basis while the latter uses the approximation of the direct basis of $\\mathcal{K}_{K}(S, X_{:,i})$. \nSince we explicitly operate on the approximation of spectrum, i.e., Ritz value, it is a spectral filter.\nSuch a filtering form will have significant computational benefits while considering the long range\/scale dependency due to the fact that the $t$-th power of $S$ can be approximated as $S^{t} \\approx V R^{t} V^{\\top}$, where we only need to raise the diagonal entries of $R$ to the power $t$.\n\n\\paragraph{Learning the Spectral Filter} \nFollowing the previous filter, one can naturally design learnable spectral filters. \nDenoting the diagonal entries of $R$ and the column vectors of $V$, \\emph{i.e}\\onedot} \\def\\Ie{\\emph{I.e}\\onedot, Ritz values and vectors, as $\\{(r_i, v_i) \\vert i = 1, \\dots, K\\}$, we perform $i$-th spectral filtering as follows,\n\\begin{align}\n\\hat{L}_{i} = \\sum_{k=1}^{K} f_{i}(r_{k}^{1}, r_{k}^{2}, \\cdots, r_{k}^{K-1}) v_{k} v_{k}^{\\top}\n\\end{align}\nwhere $f_{i}$ is a multi-layer perceptron (MLP).\nTherefore, we have the following graph convolution,\n\\begin{align}\\label{eq:learnable_filter_fix}\nY_{j} = [X_i, \\hat{L}_{1} X_i, \\dots, \\hat{L}_{K-1} X_i] \\bm{w}_{i,j}.\n\\end{align}\nNote that it includes the polynomial filter as a special case.\nWhen positive semi-definiteness is a concern, one can apply an activation function like ReLU to the output of the MLPs.\n\n\\paragraph{Multi-Scale Graph Convolution}\nUsing any above filter, one can construct a deep graph convolutional network which leverages multi-scale information.\nTaking the learnable spectral filter as an example, we can write one graph convolution layer in a compact way as below,\n\\begin{align}\\label{eq:multi_scale_graph_conv_fix}\nY = \\left[L^{\\mathcal{S}_1} X, \\dots, L^{\\mathcal{S}_M} X, \\hat{L}_{1}(\\mathcal{I}) X, \\dots, \\hat{L}_{N}(\\mathcal{I})X \\right] W,\n\\end{align}\nwhere weight $W \\in \\mathbb{R}^{(M+E)D \\times O}$, $\\mathcal{S}$ is a set of $M$ short scale parameters and $\\mathcal{I}$ is a set of $E$ long scale parameters.\nWe consider a non-negative integer as scale parameter, e.g., $\\mathcal{S} = \\{0, 1, \\dots, 5\\}$, $\\mathcal{I} = \\{10, 20, \\dots, 50\\}$.\nHere we slightly abuse the notation and define the $i$-th spectral filtering as \n\\begin{align}\n \\hat{L}_{i}(\\mathcal{I}) = \\sum_{k=1}^{K} f_{i}(r_{k}^{\\mathcal{I}_1}, r_{k}^{\\mathcal{I}_2}, \\cdots, r_{k}^{\\mathcal{I}_{\\vert \\mathcal{I} \\vert}}) v_{k} v_{k}^{\\top}.\n\\end{align}\nNote that the convolution corresponding to short scales is similar 
to~\\cite{atwood2016diffusion} where the number of matrix-vector multiplications is tied to the maximum scale of $\\mathcal{S}$.\nIn contrast, the convolution of long scales decouples the Lanczos step $K$ and scale parameters $\\mathcal{I}$, thus permitting great freedom in tuning scales as hyperparameters.\nOne can choose $K$ properly to balance the computation cost and the accuracy of the low rank approximation.\nIn our experiments, short scales are typically less than $10$ which have reasonable computation cost.\nMoreover, the short scale part could sometimes remedy cases where the low rank approximation is crude.\nWe set the long scale no larger than $100$ in our experiments.\nIf the maximum eigenvalue of $S$ is $1$, we can even raise the power to infinity, which corresponds to the equilibrium state of diffusion process on the graph.\n\nTo build a deep network, we can stack multiple such graph convolution layers where each layer has its own spectral filter weights. \nNonlinear activation functions, e.g., ReLU, and\/or Dropout can be added between layers.\nThe inference algorithm of such a deep network is shown in Alg. \\ref{alg:graph_convolution_net}.\nThe overall computation graph of the model is illustrated in Fig. \\ref{fig:model}.\nWith the top layer representation, one can use softmax to perform classification or a fully connected layer to perform regression.\nThe Lanczos algorithm is run beforehand once per graph to construct the network and will not be invoked during inference and learning.\n\n\n\\subsection{AdaLanczosNet}\nIn this section, we explain another variant which back-propagates through the Lanczos algorithm. \nThis facilitates learning the graph kernel and\/or node embeddings.\n\n\\paragraph{Graph Kernel}\nAssume we are given node features $X$ and a graph $\\mathcal{G}$. \nWe are interested in learning a graph kernel function with the hope that it can capture the intrinsic geometry of node representations. 
\nGiven data points $x_i, x_j \\in \\mathcal{X}$, we define the anisotropic graph kernel $k: \\mathcal{X} \\times \\mathcal{X} \\mapsto \\mathbb{R}$ as,\n\\begin{align}\\label{eq:graph_kernel}\nk(x_{i}, x_{j}) = \\exp\\left( - \\frac{ \\Vert ( f_{\\theta}(x_{i}) - f_{\\theta}(x_{j}) ) \\Vert^{2} } { \\epsilon } \\right),\n\\end{align}\nwhere $f_{\\theta}$ is an MLP.\nThis class of anisotropic kernels is very expressive and includes the self-tuning kernel~\\cite{zelnik2005self} and the Gaussian kernel with Mahalanobis distances~\\cite{weinberger2007metric}.\nMoreover, for different kernel functions, the resulting graph Laplacians will converge to different limiting operators asymptotically.\nFor example, even for isotropic Gaussian kernels, the graph Laplacian can converge pointwise to the Laplace-Beltrami operator, the Fokker-Planck operator and the heat kernel under different normalizations~\\cite{coifman2006diffusion,singer2006graph}.\nIn practice, we notice that choosing $\\epsilon = \\sum_{(p, q) \\in \\mathcal{E}} \\| ( f_\\theta(x_{p}) - f_\\theta(x_{q}) )\\|^2 \/ {\\vert \\mathcal{E} \\vert}$ helps normalize the pairwise distances,\nthus avoiding the vanishing gradient issue caused by the exponential function.\nThis type of learnable anisotropic diffusion is useful in two ways.\nFirst, it increases model capacity, thus potentially yielding better performance.\nSecond, it can better adapt to the non-uniform density of the data points on the manifold or to nonlinear measurements of the underlying data points on a manifold.\nWe can construct an adjacency matrix $A$ such that $A_{i, j} = k(x_{i}, x_{j})$ if $(i, j) \\in \\mathcal{E}$ and $A_{i, j} = 0$ otherwise. \nThen we can obtain the affinity matrix $S = D^{-\\frac{1}{2}} A D^{-\\frac{1}{2}}$.\n\n\\paragraph{Node Embedding}\nIn some applications, we do not observe the node features $X$ but only the graph itself $\\mathcal{G}$, so we may need to learn an embedding vector per node. \nFor example, this scenario applies to quantum chemistry tasks where a node, i.e., an atom within a molecule, rarely has observed features. \nWe can still use the above graph kernel to construct the affinity matrix, which results in the same form except that $f$ is discarded.\nLearning the embedding $X$ naturally amounts to learning the similarities between nodes.\n\n\\paragraph{Tridiagonal Decomposition}\n\nAlthough all operations in {LanczosNet} are differentiable, we empirically observe that backpropagation through the eigendecomposition of the tridiagonal matrix is numerically unstable.\nThe situation is even worse if multiple eigenvalues are numerically close or one takes a large power in Eq.~(\\ref{eq:multi_scale_graph_conv_fix}).\nTherefore, we instead directly leverage the approximate tridiagonal decomposition $S \\approx Q T Q^{\\top}$, which is obtained by running the Lanczos algorithm for $K$ steps.\nThen we can rewrite the spectral filtering as follows,\n\\begin{align}\n \\hat{L}_{i}(\\mathcal{I}) = Q g_{i}(\\text{vec}(T^{\\mathcal{I}_1}), \\text{vec}(T^{\\mathcal{I}_2}), \\cdots, \\text{vec}(T^{\\mathcal{I}_{\\vert \\mathcal{I} \\vert}})) Q^{\\top}, \n\\end{align}\nwhere $\\text{vec}(\\cdot)$ denotes vectorization and $g_i(\\cdot) = f_i(\\cdot) + f_i(\\cdot)^\\top$, with the output of an MLP $f_i$ reshaped to a matrix of the same size as $T$. 
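\nA minimal PyTorch-style sketch of this parameterization (assuming the MLP $f_i$ maps the concatenated vectorized powers to $K^2$ values; names are ours) is:\n\\begin{verbatim}\nimport torch\n\ndef spectral_filter(Q, T, scales, mlp):\n    # hat L_i(I) = Q g_i(vec(T^s) for s in I) Q^T with g_i = f_i + f_i^T\n    K = T.shape[0]\n    powers = [torch.matrix_power(T, s).reshape(-1) for s in scales]\n    f_out = mlp(torch.cat(powers)).reshape(K, K)  # output of the MLP f_i\n    g = f_out + f_out.t()  # g_i = f_i + f_i^T\n    return Q @ g @ Q.t()\n\\end{verbatim}\n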
This ensures that the output is symmetric.\n \n\n\nWith the above parameterization of the graph Laplacian and tridiagonal decomposition, we can back-propagate the loss through the Lanczos algorithm to either the graph kernel parameters $\\theta$ or the node embedding $X$.\nThe overall model is similar to the {LanczosNet} except that the Lanczos algorithm needs to be invoked for each inference pass.\n\n\n\\section*{Acknowledgments}\nRL thanks Roger Grosse for introducing the Lanczos algorithm to him. RL was supported by Connaught International Scholarships. RL, RU and RZ were supported in part by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior\/Interior Business Center (DoI\/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI\/IBC, or the U.S. Government.\n\n\\clearpage\n\n\\bibliographystyle{unsrt}\n\n\\section{Related Work}\n\nWe can roughly categorize the application of machine learning, especially deep learning, to graph structured data into supervised\/semi-supervised and unsupervised scenarios.\nFor the former, a majority of work focuses on node\/graph classification and regression~\\cite{vishwanathan2010graph,bronstein2017geometric,garcia2017few,battaglia2018relational}.\nFor the latter, unsupervised node\/graph embedding learning~\\cite{grover2016node2vec,duran2017learning} is common.\nRecently, generative models for graphs, such as molecule generation, has drawn some attention~\\cite{li2018learning,jin2018junction}.\n\n\\paragraph{Graph Convolution Based Models}\nThe first class of learning models on graphs stems from graph signal processing (GSP)~\\cite{shuman2013emerging,ortega2018graph} which tries to generalize convolution operators from traditional signal processing to graphs.\nRelying on spectral graph theory~\\cite{chung1997spectral} and graph wavelet theory~\\cite{hammond2011}, several definitions of frequency representations of graph signals have been proposed~\\cite{ortega2018graph}.\nAmong these, spectral graph theory based one is popular, where graph Fourier transform and its inverse are defined based on the eigenbasis of the graph Laplacian.\nFollowing this line, many graph convolution based deep network models emerge.\n\\cite{bruna2013spectral,henaff2015deep} are among the first to explore Laplacian based graph convolution within the context of deep networks.\nMeanwhile, \\cite{duvenaud2015convolutional} performs graph convolution directly based on the adjacency matrix to predict molecule fingerprints.\n\\cite{niepert2016learning} proposes a strategy to form same sized local neighborhoods and then apply convolution like regular CNNs. 
\nChebyshev polynomials are exploited by~\\cite{defferrard2016convolutional} to construct localized polynomial filters for graph convolution and are later simplified in graph convolutional networks (GCN)~\\cite{kipf2016semi}.\nFurther accelerations for GCN based on importance sampling and control variate techniques have been proposed by~\\cite{chen2018fastgcn,chen2018stochastic}.\nSeveral attention mechanisms have been introduced in~\\cite{velickovic2017graph,wang2018deep} to learn the weights over edges for GCNs.\nNotably, \\cite{atwood2016diffusion} proposes diffusion convolutional neural networks (DCNN) which uses diffusion operator for graph convolution.\nLanczos method has been explored for graph convolution in~\\cite{susnjara2015accelerated} for the purpose of acceleration.\nSpecifically, they only consider the localized polynomial filter case in our {LanczosNet} variant and do not explore the low rank decomposition, learnable spectral filter and graph kernel\/node embedding learning as we do.\n\n\n\\paragraph{Recurrent Neural Networks based Models}\nThe second class of models dates back to recursive neural networks~\\cite{pollack1990recursive} which recurrently apply neural networks to trees following a particular order.\nGraph neural networks (GNN)~\\cite{scarselli2009graph} generalize recursive neural networks to arbitrary graphs and exploit the synchronous schedule to propagate information on graphs.\n\\cite{li2015gated} later proposes the gated graph neural networks (GGNN) which improves GNN by adding gated recurrent unit and training the network with back-propagation through time.\n\\cite{dai2016discriminative} learns graph embeddings via unrolling variational inference algorithms over a graph as a RNN.\n\\cite{hamilton2017inductive} introduces random subgraph sampling and explores different aggregation functions to scale GNN to large graphs.\n\\cite{liao2018graph} proposes asynchronous propagation schedules based on graph partitions to improve GNN.\nMoreover, many applications have recently emerged for GNNs, including community detection~\\cite{bruna2017community}, situation recognition~\\cite{li2017situation}, RGBD semantic segmentation~\\cite{qi20173d}, few-shot learning~\\cite{garcia2017few}, probabilistic inference~\\cite{yoon2018inference}, continuous control of reinforcement learning~\\cite{wang2018nervenet,sanchez2018graph} and so on.\n\n\\paragraph{Graph based Manifold Learning}\nThe non-linear dimensionality reduction methods, such as locally linear embedding (LLE)~\\cite{roweis2000nonlinear}, ISOMAP~\\cite{tenenbaum2000global}, Hessian LLE~\\cite{donoho2003hessian}, Laplacian eigenmaps~\\cite{belkin2002laplacian}, and diffusion maps~\\cite{coifman2006diffusion}, assume that the high-dimensional data lie on or close to a low dimensional manifold and use the local affinities in the weighted graph to learn the global features of the data. \nThey are invaluable tools for embedding complex data in a low dimensional space and regressing functions over graphs. \nSpectral clustering~\\cite{nadler2006diffusion,von2007tutorial}, semi-supervised learning~\\cite{zhu2006semi}, and out-of-sample extension~\\cite{belkin2006manifold} share the similar geometrical consideration of the associated graphs.\nAnisotropic graph kernels are useful in many applications. For example, \\cite{zelnik2005self} improves the spectral clustering results with a self-tuning diffusion kernel that takes into account the local variance at each node in the Gaussian kernel function. 
\nSimilarly, \\cite{singer2008non} uses the anisotropic Gaussian kernel defined by the local Mahalanobis distances to extract independent components from nonlinear measurements of independent stochastic It{\\^o} processes. \nManifold learning with anisotropic kernel is also useful for data-driven dynamical system analysis, for example, detecting intrinsically slow variable for a stochastic dynamical system~\\cite{singer2009detecting}, filtering dynamical processes~\\cite{talmon2013empirical}, and long range climate forecasting~\\cite{giannakis2015dynamics,zhao2016analog}. \nThe anisotropic diffusion is able to use the local statistics of the measurements to convey the geometric information on the underlying factors rather than the specific realization or measurements at hand~\\cite{lafferty2005diffusion,amari2007methods}. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nRecently the ATLAS and CMS collaborations have announced the discovery of a new boson with a mass of approximately 125 GeV~\\cite{:2012gk,:2012gu}. The next challenge is to determine the nature of this new state, including its quantum numbers and couplings, and whether it is fundamental or composite. Because of the observed decays to $\\gamma\\gamma$, $WW$, and $ZZ$, with strengths seemingly comparable to those expected from the standard model (SM) Higgs, this state is likely a (mostly) $CP$-even spin-zero boson, although $CP$-odd candidates are not currently ruled out~\\cite{Frandsen:2012rj,Barroso:2012wz,Coleppa:2012eh,Chivukula:2012cp,Eichten:2012qb}.\n\nIn technicolor (TC), {\\it common lore} has it that the lightest $CP$-even spin-zero resonance, the analogue of the $\\sigma$ meson or $f_0(500)$ in QCD, cannot be as light as 125 GeV. In fact TC theories are sometimes considered as underlying theories for Higgsless models, despite the fact that in QCD the $\\sigma$ meson is among the lightest states. In this paper we consider the possibility of a light {\\it TC Higgs} arising from mass mixing between relatively heavy scalar singlets. This results in a ``see-saw'' mechanism, with one scalar singlet becoming lighter and one heavier than the corresponding diagonal mass. This is expected to occur in two-scale TC models, {\\it e.g.} low-scale TC~\\cite{Lane:1989ej,Eichten:2011sh} and ultra-minimal TC (UMT)~\\cite{Ryttov:2008xe}\\footnote{In models with fundamental scalars, a scalar see-saw mechanism has also been considered as a way of generating a negative mass squared for the Higgs \\cite{Calmet:2002rf,Calmet:2006hs}.}.\n\nTwo-scale TC theories feature two technifermion species with different representations under a single technicolor gauge group. These lead, for instance, to two different sets of composite scalars. Because of different quantum numbers, scalar multiplets from different representations do not mix through mass terms. However scalar singlets do. Because of the strength of the TC interaction, such a mixing can be sizable, which is the key ingredient in the see-saw mechanism. Moreover, radiative corrections from the top quark may contribute to further reduce the mass of the lightest scalar singlet~\\cite{Foadi:2012bb}.\n\nThis paper is organized as follows. In Sec.~\\ref{sec:seesaw} we briefly review the see-saw mechanism for scalar singlets. In Sec.~\\ref{sec:twoscale} we review the spin-zero sector of two-scale TC, and analyze the properties of the mixing mass term. Then we apply the general results to UMT and low-scale TC. 
\nFinally, in Sec.~\\ref{sec:conclusions} we offer a brief discussion of our findings.\n\\section{Higgs see-saw mechanism}\\label{sec:seesaw}\nConsider a theory featuring two scalar singlets in its spectrum, $H_1$ and $H_2$. Assume these to mix via the mass term:\n\\begin{eqnarray}\n\\mathcal{L}\\supset -\\frac{M_1^2}{2}H_1^2 - \\frac{M_2^2}{2} H_2^2 - \\delta\\ M_1 M_2\\ H_1 H_2 \\ .\n \\label{Eq:basepotential}\n\\end{eqnarray}\nIn the limit $\\delta^2\\to 1$ one eigenstate is massless. It is therefore useful to define the parameter\n\\begin{equation}\n\\varepsilon\\equiv 1-\\delta^2 \\ .\n\\end{equation}\nDiagonalization gives\n\\begin{equation}\n\\left(\\begin{array}{c} H_1 \\\\ H_2 \\end{array}\\right) = \\left(\\begin{array}{cc} \\cos\\beta & \\sin\\beta \\\\ -\\sin\\beta & \\cos\\beta \\end{array}\\right)\n\\left(\\begin{array}{c} H_- \\\\ H_+ \\end{array}\\right)\\ , \\quad\n\\tan2\\beta = \\frac{2 M_1 M_2}{M_2^2-M_1^2}\\delta \\ ,\n\\end{equation}\nwhere $H_-$ and $H_+$ are the light and heavy mass eigenstates, respectively, with masses\n\\begin{equation}\nM_\\pm^2 = \\frac{M_1^2+M_2^2}{2}\\left[1\\pm\\sqrt{1-\\left(\\frac{2 M_1 M_2}{M_1^2+M_2^2}\\right)^2\\varepsilon}\\right] \\ .\n\\end{equation}\nFor $\\varepsilon\\ll 1$, $M_-^2$ becomes\n\\begin{equation}\nM_-^2 = \\frac{M_1^2 M_2^2}{M_1^2+M_2^2}\\varepsilon + {\\cal O}(\\varepsilon^2) \\ll M_1^2,\\ M_2^2 \\ .\n\\end{equation}\nIn Fig.~\\ref{fig:eigenvalues} we show $M_-$ (solid) and $M_+$ (dashed), for the cases $M_1=M_2=1.0$ TeV (black) and $M_1=1.0$ TeV, $M_2=300$ GeV (red), as a function of $|\\delta|$. The dotted horizontal line corresponds to the experimental value of 125 GeV. This trivial exercise illustrates how theories with relatively heavy scalar mass scales, {\\it e.g.} $M_i$ between a few hundred GeV and a few TeV, may still feature a light scalar eigenstate after diagonalization, and thus a candidate for the recently observed 125 GeV boson. This mechanism is simple and general, and it is immediately applicable to TC models with two condensation scales. In the next section we discuss the general properties of singlet scalar mixing in two-scale TC theories, and provide specific examples.\n\\begin{figure}[t!]\n\\begin{center}\n \\includegraphics[width=0.55\\linewidth]{seesawplot.pdf}\n\\caption{$M_-$ (solid) and $M_+$ (dashed) as a function of $|\\delta|$, for the cases $M_1=M_2=1.0$ TeV (black) and $M_1=1.0$ TeV, $M_2=300$ GeV (red). The dotted horizontal line corresponds to the experimental value of 125 GeV.}\n\\label{fig:eigenvalues}\n\\end{center}\n\\end{figure}
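\nAs a quick numerical cross-check of the mass formulas above (our own illustration; the parameter values are those of the black curve in Fig.~\\ref{fig:eigenvalues}), a few lines of Python verify that a 125 GeV eigenstate requires $|\\delta|$ close to one:\n\\begin{verbatim}\nimport numpy as np\n\ndef eigenmasses(M1, M2, delta):\n    # mass-squared matrix of the Lagrangian above\n    M2mat = np.array([[M1 ** 2, delta * M1 * M2],\n                      [delta * M1 * M2, M2 ** 2]])\n    return np.sqrt(np.linalg.eigvalsh(M2mat))  # (M_-, M_+)\n\nM1 = M2 = 1000.0  # GeV\nfor delta in (0.9, 0.984375, 0.999):\n    Mm, Mp = eigenmasses(M1, M2, delta)\n    print(delta, round(Mm, 1), round(Mp, 1))\n# for M1 = M2 = 1 TeV one has M_- = M1*sqrt(1 - delta),\n# so delta ~ 0.984 yields M_- ~ 125 GeV\n\\end{verbatim}\n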
\\section{Two-scale technicolor}\\label{sec:twoscale}\nIn two-scale TC, two dynamical scales arise due to the presence of two Dirac technifermion species $Q_i$, transforming under different representations $R_i$ of a single TC gauge group~\\cite{Lane:1989ej,Ryttov:2009yw}. The TC force causes technifermion bilinears to condense at different scales $\\Lambda_i$. An estimate of $\\Lambda_2\/\\Lambda_1$ can be obtained from the ladder Schwinger-Dyson equation for the techniquark propagator. In this approximation, the critical coupling for chiral symmetry breaking depends on the representation via $\\alpha_c (R_i)=\\pi\/3 C_2(R_i)$, where $C_2(R_i)$ is the Casimir of the representation $R_i$. Taking $C_2(R_1)\\leq C_2(R_2)$, and integrating the one-loop beta-function $\\beta(\\alpha) = -\\beta_0(R) \\alpha^2\/2\\pi$ from $\\Lambda_1$ to $\\Lambda_2$, gives\n\\begin{equation}\n\\frac{\\Lambda_2}{\\Lambda_1} \\simeq \\exp\\left[\\frac{2\\pi}{\\beta_0(R_1)}\\bigg(\\alpha_\\mathrm{c}(R_2)^{-1}-\\alpha_\\mathrm{c}(R_1)^{-1} \\bigg)\\right]\\ .\n\\label{Eq:scaleratio}\n\\end{equation}\nSince $\\Lambda_1 \\leq \\Lambda_2$, or equivalently $\\alpha_c(R_1) \\geq\\alpha_c(R_2)$, the fermions in the representation $R_2$ are effectively decoupled below $\\Lambda_2$. Therefore, only $\\beta_0(R_1)$ appears in the exponent. If $\\beta_0(R_1)$ and $\\alpha_c(R_1)$ are\nsmall, then the scale separation can be sizeable, and the presence of four-fermion operators can contribute to further enhance the scale separation~\\cite{Frandsen:2011kt}. This crude approximation serves to illustrate the appearance of two distinct scales.\n\nNow, let $N_i$ be the number of Dirac techniflavors in the representation $R_i$. The global symmetries of the corresponding fermion sector depend on whether $R_i$ is complex, real, or pseudoreal. To see this we express the Dirac fermions in terms of two Weyl fermions,\n\\begin{equation}\nQ_{im} = \\left(\\begin{array}{c}\\psi^1_{im} \\\\ \\bar{\\psi}^2_{im}\\end{array}\\right)\\ ,\n\\end{equation}\nwhere $m$ is the techniflavor index ($m=1,\\dots, N_i$), and the color index is suppressed. If $\\psi^1_{im}$ transforms under the $R_i$ representation, then $\\psi^2_{im}$ transforms under the conjugate representation $\\overline{R}_i$. For complex $R_i$ this implies that rotations in techniflavor space {\\it cannot} mix $\\psi^1$ and $\\psi^2$ fermions. As a consequence the TC Lagrangian features a global $SU(N_i)_1\\times SU(N_i)_2\\times U(1)$ techniflavor symmetry, where the extra $U(1)$ corresponds to technibaryon-number conservation. This global symmetry is spontaneously broken to the diagonal $SU(N_i)\\times U(1)$ by the condensate. The latter is an $N_i\\times N_i$ complex matrix,\n\\begin{equation}\n\\left(\\phi_i\\right)_{mn} \\sim \\psi^1_{im} \\psi^2_{in} \\ ,\n\\end{equation}\nwhich transforms like the bi-fundamental of $SU(N_i)_1\\times SU(N_i)_2$:\n\\begin{equation}\n\\phi_i\\to u_{i1} \\phi_i u_{i2}^\\dagger\\ ,\\quad u_{iA}\\in SU(N_i)_A \\ .\n\\end{equation}\nIn terms of spin-zero composites $\\phi_i$ reads\n\\begin{equation}\n\\phi_i =\n\\frac{v_i + H_i + i\\Theta_i}{\\sqrt{2N_i}} + \\left(i \\Pi_i^a + \\Sigma_i^a\\right)T_i^a\\ ,\n\\end{equation}\nwhere $T_i^a$ are the $SU(N_i)$ broken generators, normalized according to ${\\rm Tr}\\ T_i^a T_i^b = \\delta^{ab}\/2$. Here $v_i$ is the vacuum expectation value of the condensate, and $H_i$ is a $\\psi_{im}^1\\psi^2_{im}$ scalar singlet.\n\nIf the representation $R_i$ is real, rotations in techniflavor space {\\it can} mix $\\psi^1$ and $\\psi^2$ fermions, as these transform in the same way under the TC gauge group. As a consequence the TC Lagrangian features a global $SU(2N_i)$ techniflavor symmetry, which is spontaneously broken to $SO(2N_i)$ by the condensate. The latter is a $2N_i\\times 2N_i$ complex matrix,\n\\begin{equation}\n\\left(\\Phi_i\\right)_{mn}^{AB} \\sim \\psi^A_{im} \\psi^B_{in} \\ ,\n\\label{eq:condensateReal}\n\\end{equation}\nwhere $A,B=1,2$, and $m,n=1,\\dots, N_i$. 
The spin-zero matrix $\\Phi_i$ transforms as the two-index symmetric representation of $SU(2N_i)$:\n\\begin{equation}\n\\Phi_i\\to u_i \\Phi_i u_i^T\\ ,\\quad u_i\\in SU(2N_i)\\ ,\\quad \\left(\\Phi_i\\right)_{nm}^{BA} = \\left(\\Phi_i\\right)_{mn}^{AB}\\ .\n\\end{equation}\nIn terms of spin-zero composites, $\\Phi_i$ reads\n\\begin{equation}\n\\Phi_i = \\left[\\frac{v_i + H_i + i\\Theta_i}{\\sqrt{4N_i}} + \\left(i \\Pi_i^a + \\Sigma_i^a\\right)X_i^a\\right] E_i\\ ,\n\\label{eq:2N2N}\n\\end{equation}\nwhere $X_i^a$ are the broken generators belonging to the $SU(2N_i)-SO(2N_i)$ algebra, and normalized according to ${\\rm Tr}\\ X_i^a X_i^b = \\delta^{ab}\/2$. The matrix $E_i$ is an $SO(2N_i)$ invariant satisfying\n\\begin{equation}\nE_i^T {X_i^a}^T = X_i^a E_i\\ ,\\quad E_i^T = E_i\\ ,\\quad E_i E_i^\\dagger = 1\\ .\n\\end{equation}\n\nFinally, if the $R_i$ representation is pseudoreal, the global symmetry in techniflavor space is $SU(2N_i)$, and the condensate is as in Eq.~(\\ref{eq:condensateReal}). However the matrix $\\Phi_i$ is now in the two-index antisymmetric representation of $SU(2N_i)$,\n\\begin{equation}\n\\Phi_i\\to u_i \\Phi_i u_i^T\\ ,\\quad u_i\\in SU(2N_i)\\ ,\\quad \\left(\\Phi_i\\right)_{nm}^{BA} = - \\left(\\Phi_i\\right)_{mn}^{AB}\\ ,\n\\end{equation}\nbecause of an extra minus sign introduced by the invariant which contracts the TC indices (not displayed in Eq.~(\\ref{eq:condensateReal})). As a consequence the condensate breaks the global symmetry to $Sp(2N_i)$ rather than $SO(2N_i)$. In terms of spin-zero composites, the $2N_i\\times 2N_i$ $\\Phi_i$ matrix is as in Eq.~(\\ref{eq:2N2N}), where now the $X_i^a$ broken generators belong to the $SU(2N_i)-Sp(2N_i)$ algebra, and $E_i$ is an $Sp(2N_i)$ invariant satisfying\n\\begin{equation}\nE_i^T {X_i^a}^T = - X_i^a E_i\\ ,\\quad E_i^T = - E_i\\ ,\\quad E_i E_i^\\dagger = 1\\ .\n\\end{equation}\n\nWe can unify notation for the complex and real or pseudoreal scenarios by defining, for the case of complex $R_i$, the $2N_i\\times 2N_i$ matrix\n\\begin{equation}\n\\Phi_i \\equiv \\frac{1}{\\sqrt2} \\left(\\begin{array}{cc} 0 & \\phi_i \\\\ \\phi_i^\\dagger & 0 \\end{array}\\right) \\ .\n\\end{equation}\nThen, assuming no violation of techniflavor symmetry, and retaining only terms up to dimension four, the symmetry-breaking potential reads\n\\begin{eqnarray}\nV &=& - \\mu_1^2\\ {\\rm Tr}\\ \\Phi_1 \\Phi_1^\\dagger - \\mu_2^2\\ {\\rm Tr}\\ \\Phi_2 \\Phi_2^\\dagger\n+\\lambda_1^\\prime {\\rm Tr}\\ \\Phi_1 \\Phi_1^\\dagger \\Phi_1 \\Phi_1^\\dagger\n+\\lambda_1^{\\prime\\prime} {\\rm Tr}\\ \\Phi_1 \\Phi_1^\\dagger\\ {\\rm Tr}\\ \\Phi_1 \\Phi_1^\\dagger \\nonumber \\\\\n&+& \\lambda_2^\\prime {\\rm Tr}\\ \\Phi_2 \\Phi_2^\\dagger \\Phi_2 \\Phi_2^\\dagger\n+\\lambda_2^{\\prime\\prime} {\\rm Tr}\\ \\Phi_2 \\Phi_2^\\dagger\\ {\\rm Tr}\\ \\Phi_2 \\Phi_2^\\dagger\n+ 2\\lambda {\\rm Tr}\\ \\Phi_1 \\Phi_1^\\dagger\\ {\\rm Tr}\\ \\Phi_2 \\Phi_2^\\dagger \\ .\n\\label{Eq:twoscalepot}\n\\end{eqnarray}\nA few comments are in order for this potential. First, the pseudoscalars are all massless in Eq.~(\\ref{Eq:twoscalepot}). In particular, the $\\Pi_i^a$ fields are the Nambu-Goldstone bosons (NGBs) associated to the spontaneous breaking of the $SU(N_i)_1\\times SU(N_i)_2$ or $SU(2N_i)$ techniflavor symmetries. Three of these NGBs become the longitudinal components of the SM $W$ and $Z$ boson, once the electroweak interactions are ``switched on''. 
The remaining NGBs receive mass through radiative effects and\/or additional new interactions beyond TC, such as Extended TC \\cite{Eichten:1979ah,Dimopoulos:1979es}. These interactions can be accounted for by adding techniflavor-breaking potential terms to Eq.~(\\ref{Eq:twoscalepot}). The $\\Theta_i$ pseudoscalar singlets are massless in Eq.~(\\ref{Eq:twoscalepot}) because of additional, spontaneously broken $U(1)_i$ symmetries. These states acquire mass from instantons -- in the form of ${\\rm Det}\\ \\Phi_i$ invariants to be added to $V$ -- and\/or ETC interactions. Finally, the scalar multiplets $\\Sigma_i^a$ can be made heavier than the singlets (as expected from scaling up the QCD spectrum) by adjusting the quartic terms, and\/or by including higher order invariant terms in the potential. We shall ignore all these issues, as our goal is to highlight the see-saw mechanism for the scalar singlets. This is fully accounted for in the potential of Eq.~(\\ref{Eq:twoscalepot}).\n\nMinimization of the potential gives\n\\begin{equation}\nv_1^2 = \\displaystyle{\\frac{\\mu_1^2\/\\lambda_1-\\lambda\\mu_2^2\/\\lambda_1\\lambda_2}{1-\\lambda^2\/\\lambda_1\\lambda_2}} \\ ,\\quad\nv_2^2 = \\displaystyle{\\frac{\\mu_2^2\/\\lambda_2-\\lambda\\mu_1^2\/\\lambda_1\\lambda_2}{1-\\lambda^2\/\\lambda_1\\lambda_2}}\\ ,\n\\end{equation}\nwhere\n\\begin{equation}\n\\lambda_i\\equiv \\frac{\\lambda_i^\\prime}{2 N_i} + \\lambda_i^{\\prime\\prime}\\ .\n\\end{equation}\nThe isosinglet mass Lagrangian is as in Eq.~(\\ref{Eq:basepotential}), with\n\\begin{equation}\nM_i^2 = 2\\lambda_i v_i^2\\ ,\n\\end{equation}\nand\n\\begin{equation}\n\\delta^2 = \\frac{\\lambda^2}{\\lambda_1\\lambda_2}\\ .\n\\end{equation}\n\nIn an $SU(N_{\\rm TC})$ theory, in the limit of large\n$N_{\\rm TC}$, the double trace terms, in particular the mass mixing term, are subleading in $1\/N_{\\rm TC}$, see {\\it e.g.} \\cite{Manohar:1998xv}. To see this explicitly, consider that the contributions to $\\lambda_i$ and $\\lambda$ are dominated by the diagrams of Fig.~\\ref{Fig:diagrams} (a) and (b), respectively, where the black disks represent scalar insertions. These introduce a normalization factor $1\/\\sqrt{d(R_i)}$, where $d(R_i)$ is the dimension of the representation $R_i$. Recalling that the TC gauge coupling scales like $1\/\\sqrt{N_{\\rm TC}}$, we find the scaling behaviors\n\\begin{equation}\n\\lambda_i \\sim \\frac{1}{N_i d(R_i)}\\ ,\\quad\n\\lambda\\sim \\frac{T(R_1)T(R_2)d(G)}{d(R_1)d(R_2)N_{\\rm TC}^2}\\ .\n\\end{equation}\nHere $T(R_i)$ is defined by ${\\rm Tr}\\ t_i^a t_i^b=T(R_i)\\delta^{ab}$, where $t_i^a$ are the TC generators in the representation $R_i$, and $d(G)$ is the dimension of the TC group, $N_{\\rm TC}^2-1$. In the large-$N_{\\rm TC}$ limit this gives\n\\begin{equation}\n\\delta^2\\sim\\frac{N_1 N_2 T(R_1)^2 T(R_2)^2}{d(R_1) d(R_2)}\\ .\n\\label{eq:deltascaling}\n\\end{equation}\nFor example, if $R_1$ is the fundamental and $R_2$ the adjoint representation, then $\\delta^2\\sim N_1 N_2\/N_{\\rm TC}$. The fact that $\\delta^2$ decreases with $N_{\\rm TC}$ was to be expected, as $\\lambda$ arises from a three-loop diagram, and is therefore subdominant in the large-$N_{\\rm TC}$ limit. However, for small values of $N_{\\rm TC}$ we expect $\\delta^2$ to be of order one, as the suppression from loop factors is compensated by the large TC coupling. From Eq.~(\\ref{eq:deltascaling}) we also observe that $\\delta^2$ grows with the number of flavors.
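\n\nFor concreteness, a worked substitution (our own, using the standard group-theory factors $T({\\rm fund})=1\/2$, $d({\\rm fund})=N_{\\rm TC}$, $T({\\rm adj})=N_{\\rm TC}$, $d({\\rm adj})=N_{\\rm TC}^2-1$) reproduces the quoted behavior:\n\\begin{equation}\n\\delta^2 \\sim \\frac{N_1 N_2\\, (1\/2)^2\\, N_{\\rm TC}^2}{N_{\\rm TC}\\left(N_{\\rm TC}^2-1\\right)} = \\frac{N_1 N_2\\, N_{\\rm TC}}{4\\left(N_{\\rm TC}^2-1\\right)} \\simeq \\frac{N_1 N_2}{4 N_{\\rm TC}}\\ ,\n\\end{equation}\nwhere the last step holds for $N_{\\rm TC}\\gg 1$, matching the $N_1 N_2\/N_{\\rm TC}$ scaling up to an overall constant.\n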
\n\nIn addition to mass mixing of scalar singlets, we also expect mass mixing of the pseudoscalar and spin-one singlets. These, however, may feature larger diagonal masses than those of the scalar singlets.\nWe will not further address this point here.\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=10.0cm]{diagrams.pdf}\n\\end{center}\n\\caption{Diagrams giving the dominant contributions to (a) the quartic couplings $\\lambda_i$ and (b) the mixing coupling $\\lambda$. The black disks represent scalar insertions.}\n\\label{Fig:diagrams}\n\\end{figure}\n\n\n\\subsection{Ultra-minimal technicolor}\nThe UMT model~\\cite{Ryttov:2008xe} is a two-scale TC model based on an $SU(2)_{\\rm TC}$ gauge theory, with $N_1=2$ Dirac techniflavors in the fundamental representation, $U$ and $D$, and $N_2=1$ Dirac techniflavor in the adjoint representation, $\\lambda$. The two fundamental technifermions are arranged in a doublet with respect to the weak interactions, whereas the adjoint technifermion is not charged under the electroweak interactions.\nUMT features the smallest contribution to the {\\it perturbative}~\\footnote{By $S_{\\rm pert}$ we mean the computation of $S$ from a loop of technifermions $Q$ with a dynamical mass $m_Q \\gg m_Z$, see {\\it e.g.} the discussion in \\cite{Dietrich:2005jn}.} electroweak $S$ parameter, $S_{\\rm pert}\\simeq 1\/3\\pi$, compatible with electroweak symmetry breaking and near-conformal dynamics~\\cite{Ryttov:2008xe,Ryttov:2009yw}.\n\nIn UMT both fermion species are assumed to condense at roughly the same scale, $\\Lambda_1 \\simeq \\Lambda_2$. In fact it is readily found that $C_{2}(R_1)\\simeq C_{2}(R_2)$, whence $\\alpha_c (R_1) \\simeq \\alpha_c (R_2)$ in the ladder approximation. The technifermion-condensate vevs are $\\langle \\overline{U}_R U_L + \\overline{D}_R D_L \\rangle$ and $\\langle \\lambda^1 \\lambda^2 \\rangle$. The former breaks the electroweak symmetry and produces, among others, an isospin triplet of Goldstone bosons, $\\Pi_1^{1,2,3}$. These become the longitudinal modes of the $W$ and $Z$ bosons, which requires $v_1 = v=246$ GeV. Based on the above assumption, we also have $v_2 \\simeq v_1$. The full global symmetry breaking pattern, in the absence of electroweak interactions, is $SU(4)\\times SU(2) \\times U(1) \\to Sp(4) \\times U(1) \\times Z_2$. Notice the extra $U(1)$ symmetry, relative to the general symmetry breaking patterns discussed above. This arises from the fact that a linear combination of the two $U(1)$ symmetries, in $U(4)=SU(4)\\times U(1)$ and $U(2)=SU(2)\\times U(1)$, is anomaly free, whereas in isolation each one of these symmetries is anomalous.\n\nFollowing the above discussion we can describe the scalar sector using a linear realization of the global symmetries in terms of a $4\\times 4$ matrix $\\Phi_1$, and a $2\\times 2$ matrix $\\Phi_2$. Up to dimension-four terms the potential is as in Eq.~(\\ref{Eq:twoscalepot}). This gives nine massless pseudoscalars: $\\Pi_1^{1,2,3,4,5}$ and $\\Pi_2^{1,2}$, corresponding to $SU(4)\\to Sp(4)$ and $SU(2)\\to U(1)$ spontaneous symmetry breaking, respectively, plus the $\\Theta_{1,2}$ pseudoscalar singlets. A linear combination of these is the Goldstone boson corresponding to the $U(1)\\to Z_2$ spontaneous symmetry breaking, whereas the remaining linear combination receives mass from instantons, in the form of higher dimensional terms. 
The one of lowest order has dimension six:\n\\begin{eqnarray}\n\\left( \\det \\Phi_2 \\right)^2 \\text{Pf}\\ \\Phi_1 + \\text{h.c.} \\ .\n\\label{Eq:LUMT}\n\\end{eqnarray}\nIncidentally this provides an additional mass mixing term for the two scalar singlets. The UMT model is thus a prime TC example where the lightest scalar mass eigenstate can be as light as 125 GeV, as exemplified by the black curve in Fig.~\\ref{fig:eigenvalues}.\n\nNote however that before mass mixing the scalar masses in the UMT model might be well below the TeV scale due to the argued walking dynamics of the model. For example the mass of the lightest scalar in the UMT model has been estimated to be as low as 250 GeV in Ref.~\\cite{Doff:2009na}.\nThis model computation does not take into account mass mixing between the scalars, but relies on the assumed near-conformality of the theory.\n\\subsection{Low-scale TC}\nIn the two-scale TC framework proposed early on in Ref.~\\cite{Lane:1989ej} the dynamical assumption is that $\\Lambda_1 \\ll \\Lambda_2$ and such models are also referred to as low-scale TC~\\cite{Eichten:2011sh}. This hierarchy in scales requires choosing the representations such that\nthe quadratic Casimirs satisfy $C_2(R_1) \\ll C_2(R_2)$ and\/or lead to a small $\\beta_0(R_1)$~\\footnote{Achieving this typically requires a large number of technifermions in the representation $R_1$, such as the fundamental, or alternatively four-technifermion operators with large enough coefficients~\\cite{Frandsen:2011kt}.}. The low-scale TC assumption that both sectors have the technifermions arranged in weak doublets implies that the electroweak scale $v$ must be related to $v_i$ via $v=\\sqrt{N_1 v_1^2 + N_2 v_2^2}$ to ensure the correct $W$ and $Z$ masses. For non-large values of $N_1$, $v \\simeq \\sqrt{N_2} v_2$.\n\nAgain we can describe the scalar sector using a linear realization of the global symmetries in terms of appropriate matrices of composite fields $\\Phi_i$. The diagonal mass $M_1$ is by construction relatively light \\cite{Delgado:2010bb}, and mass mixing will further reduce its value. The corresponding scenario is similar to the one depicted by the red curves in Fig.~\\ref{fig:eigenvalues}.\n\\section{Summary and discussion}\\label{sec:conclusions}\nIn this paper we have discussed how a light composite scalar may arise in two-scale TC theories via mass mixing between relatively heavy scalar resonances. We have argued that the mass of the light scalar may be compatible with the recently observed $\\sim 125$ GeV resonance and discussed a concrete minimal two-scale TC model, UMT~\\cite{Ryttov:2009yw}, which provides mass mixing of the required order of magnitude. We have also discussed the mechanism in low-scale TC models~\\cite{Lane:1989ej}, where the lightest scalar resonance is expected to be relatively light compared to the TeV scale, already before including effects of mass-mixing. Finally radiative corrections from the interaction with the top quark can further reduce the mass of the lightest scalar resonance~\\cite{Foadi:2012bb}.\n\n\nIt will be interesting to study the phenomenology of TC models featuring a scalar see-saw mechanism in the light of LHC data, already indicating that the couplings of the scalar resonance to the SM fermions and gauge bosons must be SM Higgs-like.\n\\acknowledgements\nWe thank K. Schmidt-Hoberg and T. Ryttov for comments, discussions and suggestions.\nThe work of R.F. 
is supported by the Marie Curie IIF grant proposal 275012.\nThe work of M.T.F is supported by a Sapere Aude Grant.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCold-atom systems have been regarded as efficient simulators of quantum many-body physics \\cite{BDZ,JSGM}\ndue to their ease of controllability. Research involving ultracold bosonic systems \nhas attracted a lot of interest to the subject \\cite{AMC1,MOTT}.\nExperiments with optical lattices in three dimensions have revealed \nthe superfluid-Mott insulator (MI-SF) phase transition \\cite{MG}.\nSuch phenomena have been studied in the framework of the \nJaynes-Cummings-Hubbard Model (JCHM) \\cite{Na1,Na2,NPA,HASP}.\n\nThe JCHM has been a widely used tool for the investigation of many-body\nsystems describing the interplay between the atom-cavity coupling and the \ninter-cavity hopping of photons \\cite{SB}. In the absence of the hopping term, the model\nreduces to the Jaynes-Cummings Model \\cite{JC,EPJ1}, which can be \nexactly solved within the Rotating Wave Approximation. \nWhen the hopping term is non-zero the solution becomes nontrivial. \nThe difficulty of finding analytical solutions for the model has forced researchers\nto resort to approximate or numerical methods for dealing with such problems \\cite{FWG}.\nRecently, Mering et al. \\cite{AMPK} have proposed an approach in which spin\noperators are mapped to fermionic ones, hence allowing the application of a Fourier \ntransform that decouples the Hamiltonian into independent\nones, each associated with a particular momentum value. \nThe great advantage of this method is the simplicity\nwith which physical quantities are found, such as the energies and the chemical potential. \nThe approach presented by Mering et al. includes all classes of Bravais structures. \nHowever, they only derived results for one-dimensional lattices.\n\nDespite the experiments involving optical structures in three dimensions, \nonly a few theoretical results regarding these lattices in dimensions greater than one are available \nin the current literature \\cite{EPJ2}.\nIn particular, a systematic presentation of the phase diagram for typical Bravais lattices is lacking.\nThis is the purpose of the present paper. Here, we investigate the JCHM for different Bravais lattices in one, two and three\ndimensions and analyse the influence of the topology on the MI-SF phase transition. We use the approach\nintroduced by Mering et al. \\cite{AMPK}. \n\nThe paper is organized as follows. In Section II we introduce the JCHM.\nIn Section III we present the fermionic approximation. Results are presented\nin Section IV. 
Finally, IN Section V, we summarize our\nmain results and conclusions.\n\n\\section{Jaynes-Cummings-Hubbard Model}\nThe JCHM hamiltonian for a lattice of $L$ atoms is given by\n ($\\hbar=1$)\n\\begin{eqnarray}\n\\hat{H} & = \\omega &\\sum_{j}\\hat{a}_{j}^{\\dagger}\\hat{a}_{j}+ \\epsilon \\sum_{j}\n\\hat{\\sigma}_{j}^{\n+}\\hat{\\sigma}_{j}^{-}+g\\sum_{j}(\\hat{a}_{j}^{\\dagger}\\hat{\\sigma}_{j}^{-}+\\hat{a}_{j}\\hat{\\sigma}_{\nj}^{+}) \\nonumber \\\\\n{} & {}\n& - t \\sum_{\\langle ij \\rangle}(\\hat{a}_{i}^{\\dagger}\\hat{a}_{j}+\\hat{a}_{j}^{\\dagger}\\hat{a}_{i}),\n\\label{E1} \n\\end{eqnarray}\nwhere $\\hat{\\sigma}^{\\pm}=\\hat{\\sigma}_{x}\\pm i\\hat{\\sigma}_{y}$ and $\\hat{\\sigma}_{x,y,z}$ are the\nusual Pauli matrices, $\\hat{a}_{j}$ ($\\hat{a}^{\\dagger}_{j}$) is the annihilation (creation) operator\nof the light mode at the $j$th atom, $\\omega$ is the light mode frequency, and \n$\\epsilon$ the atomic transition frequency. The light-atom coupling\nis represented by $g$, $t$ is the hopping integral, and $\\langle ij \\rangle$ denotes pairs\nof nearest-neighbour atoms on the lattice.\n\nWhen $t=0$, the Hamiltonian (\\ref{E1}) is decoupled into $L$\nindependent Jaynes-Cummings model Hamiltonians. In this case the system has well-known eigenstates \\cite{CNA}.\nFor $t \\neq 0$, the atoms becomes coupled thus increasing the complexity of the solution, Since \nwe can not write the eigenstates of the whole system as a direct product of single-cavity eigenstates.\nAs discussed in the introduction, an appropriate approach is the fermionic\napproximation recently introduced by Mering et al. \\cite{AMPK}.\n\n\\section {The Fermionic Approximation} \\label{sec:fapp}\nThe fermionic approximation consists in replacing the spin operators by fermionic ones, i.e.,\n$\\hat{\\sigma}^+$ ($\\hat{\\sigma}^-$) IS replaced by $\\hat{c}^\\dagger$\n($\\hat{c}$). In this framework, we can rewrite Hamiltonian (\\ref{E1}) as\n\\begin{eqnarray}\n\\hat{H} & =\n&\\omega\\sum_{j}\\hat{a}_{j}^{\\dagger}\\hat{a}_{j}+\\epsilon\\sum_{j}\\hat{c}_{j}^{\\dagger}\\hat{c}_{j}\n+g\\sum_{j}(\\hat{a}_{j}^{\\dagger}\\hat{c}_{j}+\\hat{a}_{j}\\hat{c}_{\nj}^{\\dagger}) \\nonumber \\\\\n{} & {}\n& - t \\sum_{\\langle ij \\rangle}(\\hat{a}_{i}^{\\dagger}\\hat{a}_{j}+\\hat{a}_{j}^{\\dagger}\\hat{a}_{i}) .\n \\label{E1a} \n\\end{eqnarray}\nThis approximation allows to solve the model exactly, by means of a Fourier transformation.\nFor $t=0$, spin and fermionic operators are equivalent and then the approximation becomes exact.\nTherefore, for small values of $t$ this approach turns out to be very accurate when dealing\nwith the JCHM \\cite{AMPK}. \n \nNow we apply a Fourier transform to the fermionic and bosonic operators as \n\\begin{equation}\n\\hat{a}_{j}=\\frac{1}{\\sqrt{L}}\\sum_{\\vec{k}}e^{-2\\pi i \\frac{\\vec{k}\\vec{R}_j}{L}}\\hat{a}_{\\vec{k}}, \\\\\n\\hat{c}_{j}=\\frac{1}{\\sqrt{L}}\\sum_{\\vec{k}}e^{-2\\pi i \\frac{\\vec{k}\\vec{R}_j}{L}}\\hat{c}_{\\vec{k}}, \\label{E2} \n\\end{equation}\nthen the Hamiltonian can be written as\n\\begin{equation}\n\\hat{H}=\\sum_{\\vec{k}} \\left[ \\omega_{\\vec{k}}\\hat{a}_{\\vec{k}}^{\\dagger}\\hat{a}_{\\vec{k}}+g (\\hat{a}_{\\vec{k}}^{\\dagger}\n\\hat{c}_{\\vec{k}}+\\hat{a}_{\\vec{k}}\\hat{c}_{\\vec{k}}^{\\dagger}) +\\epsilon \\hat{c}_{\\vec{k}}^{\\dagger}\\hat{c}_{\\vec{k}} \\right],\n\\label{E3} \n\\end{equation}\nwhere $\\omega_{\\vec{k}}= \\omega$-$\\nu_{\\vec{k}}$ and $\\nu_{\\vec{k}}$ is the dispersion relation of the Bravais lattice. 
\n\nThe Hamiltonian (\\ref{E3}) corresponds to a sum of $L$ independent Hamiltonians $\\hat{H}_{\\vec{k}}$\n($\\hat{H}=\\sum_{\\vec{k}} \\hat{H}_{\\vec{k}}$), each of which\nis associated with a particular momentum $\\vec{k}$. The ground-state energy\nof $\\hat{H}_{\\vec{k}}$ is given by\n\\begin{eqnarray}\nE^{n_{\\vec{k}}}_{\\vec{k}} & = & (1-\\delta_{n_{\\vec{k}}0}) \\left[ n_{\\vec{k}}\\omega_{\\vec{k}}+\n\\frac{\\Delta+\\nu_{\\vec{k}}}{2} \\right. \\nonumber \\\\ \n{\\ } & {\\ } & \\left. -\\frac{1}{2}\\sqrt{(\\Delta+\\nu_{\\vec{k}})^2+4n_{\\vec{k}}g^2} \\right],\n\\end{eqnarray}\nwhere the superscript denotes the excitation number and \n$\\Delta \\equiv \\epsilon - \\omega$ is the detuning between the atomic transition and light frequencies.\nNotice that $\\hat{n}_{\\vec{k}}$ commutes with $\\hat{H}_{\\vec{k}}$.\nFor a total excitation number $N$ ($N= \\sum_{\\vec{k}} n_{\\vec{k}}$) there is a particular configuration \n$\\{ n_{\\vec{k_1}},n_{\\vec{k_2}},n_{\\vec{k_3}},...\\}$ that minimizes the total ground-state energy, $\\sum_{\\vec{k}}\nE^{n_{\\vec{k}}}_{\\vec{k}}$. For $t \\ll 1$, it is easy to see that\nthis configuration is $\\{ n,n,n,...\\}$, corresponding to the Mott insulator state. By increasing $t$, a quantum phase transition takes place\nand the system is driven to a superfluid state. Since $n$ is constant, the phase boundaries of the Mott lobes \nare $n$ dependent. Thus, the $n$th Mott lobe is obtained through an analysis of the particle chemical potential,\n$\\mu^{+}=E^{n+1}_{\\vec{k}'}-E^{n}_{\\vec{k}'}$, and the hole one, $\\mu^{-}=E^{n}_{\\vec{k}}-E^{n-1}_{\\vec{k}}$ \\cite{AMPK,GAS}, where\n$\\vec{k}'$ and $\\vec{k}$ are, respectively, the values that minimize and maximize these potentials. For $\\mu^{+}=\\mu^{-}$, the Mott lobe is\nclosed at the critical hopping, $t_c$, hence marking the MI-SF transition.\n\n\\section{Results}\nIn order to analyse the influence of the Bravais lattice topology on the MI-SF transition, we study the one-dimensional (1D), square (SQ),\nsimple cubic (SC), body-centered cubic (BCC) and face-centered cubic (FCC) lattices. The dispersion relations $\\nu_{\\vec{k}}$ are,\nrespectively, given by \\cite{kitel}\n\\begin{equation}\n\\nu_{k}^{(1D)}=-2t\\cos(ka)\n\\end{equation}\n\\begin{equation}\n\\nu_{k_x,k_y}^{(SQ)} =-2t [\\cos(k_x a)+\\cos(k_y a) ]\n\\end{equation}\n\\begin{equation}\n\\nu_{k_x,k_y,k_z}^{(SC)}=-2t [\\cos(k_x a)+\\cos(k_y a)+\\cos(k_z a)]\n\\end{equation}\n\\begin{equation}\n\\nu_{k_x,k_y,k_z}^{(BCC)}=-8t [\\cos(k_x a)\\cos(k_y a)\\cos(k_z a)]\n\\end{equation}\nand\n\\begin{eqnarray}\n\\nu_{k_x,k_y,k_z}^{(FCC)} & = & -4t [\\cos(k_x a)\\cos(k_y a)+\\cos(k_x a)\\cos(k_z a)+ \\nonumber \\\\\n{\\ } & {\\ } & \\cos(k_y a)\\cos(k_z a)], \n\\end{eqnarray}\nwhere $a$ is the lattice constant. For each structure we found the momentum vectors that maximize\nthe hole chemical potentials and minimize the particle ones in order to obtain $\\mu^{-}$, $\\mu^{+}$, \nand consequently the Mott phase boundary.
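\n\nThis phase-boundary construction is straightforward to reproduce numerically. The sketch below (our own illustration, not the authors' code; the grid resolution, the bisection window, and the monotonicity of the gap in $t$ are assumptions) evaluates $\\mu^{\\pm}$ on a Brillouin-zone grid for the SC lattice and locates the critical hopping of the $n$th lobe from the condition $\\mu^{+}=\\mu^{-}$:\n\\begin{verbatim}\nimport numpy as np\n\ng, omega, Delta = 1.0, 1.0, 0.0\nks = np.linspace(-np.pi, np.pi, 61)  # grid of k*a values\nKX, KY, KZ = np.meshgrid(ks, ks, ks, indexing="ij")\n\ndef E(n, t):\n    # single-mode ground energy E^n_k, simple cubic dispersion\n    if n == 0:\n        return np.zeros_like(KX)\n    nu = -2 * t * (np.cos(KX) + np.cos(KY) + np.cos(KZ))\n    return (n * (omega - nu) + 0.5 * (Delta + nu)\n            - 0.5 * np.sqrt((Delta + nu) ** 2 + 4 * n * g * g))\n\ndef gap(n, t):\n    # mu^+ minus mu^-; the Mott lobe closes where this vanishes\n    return (E(n + 1, t) - E(n, t)).min() - (E(n, t) - E(n - 1, t)).max()\n\ndef t_c(n, lo=0.0, hi=0.5):\n    for _ in range(40):  # bisection on the lobe-closing condition\n        mid = 0.5 * (lo + hi)\n        lo, hi = (mid, hi) if gap(n, mid) > 0 else (lo, mid)\n    return 0.5 * (lo + hi)\n\nprint([round(t_c(n), 4) for n in (1, 2, 3)])\n\\end{verbatim}\nSwapping in the other dispersion relations listed above yields the lobes of the remaining lattices.\n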
\n\n\\begin{figure}\n\\includegraphics[width=.45\\textwidth]{fig1}\n\\caption{First three Mott lobes for different lattices. Inside the lobes we have a Mott insulator state, \nwhile outside the system is in a superfluid state.}\n\\label{fig1}\n\\end{figure}\n\nFigure \\ref{fig1} shows the first three Mott lobes for $\\omega \/g=1$ and $\\Delta=0$ for the considered lattices. \nWe see that as the number of lattice neighbors increases, the MI phase region decreases.\nThis result is expected, since the probability of photon tunneling increases for a higher \nnumber of nearest neighbors.\nWe observe that for $d$-dimensional hypercubic lattices the lobes can be rescaled by\n$t_d =t\/d$, causing them to collapse onto a single curve. \nThe shapes of the Mott lobes are identical for bipartite lattices, where a bipartite structure is one \nthat can be decomposed into two sublattices such that all nearest neighbours of a site in one sublattice belong to the other.\nThe FCC lattice is non-bipartite and hence displays a different behaviour. Thus\nwe make a detailed comparative analysis between the FCC and SC lattices representing, respectively, the non-bipartite\nand bipartite classes. \n\n\\begin{figure}\n \\subfigure[]{\n \\includegraphics[width=0.45\\textwidth]{fig2a}\\label{fig2a}}\n \\subfigure[]{\n \\includegraphics[width=0.45\\textwidth]{fig2b}\\label{fig2b}}\n \\caption{Mott lobe for $n=1$ and typical values of detuning for (a) SC and (b) FCC lattices.}\n\\label{fig2}\n\\end{figure}\n\n\\begin{figure}\n \\subfigure[]{\n \\includegraphics[width=0.45\\textwidth]{fig3a}\\label{fig3a}}\n \\subfigure[]{\n \\includegraphics[width=0.45\\textwidth]{fig3b}\\label{fig3b}}\n \\caption{Relationship between critical hopping and detuning on the (a) SC and (b) FCC lattices for varying excitation number.} \\label{fig3}\n\\end{figure}\n\nThe first Mott lobe ($n=1$) on the SC and FCC lattices is shown in Figure \\ref{fig2} for typical detuning values.\nWe see that both lattices have a similar MI-SF phase transition structure. However, the FCC always has a smaller MI phase region.\nFor both lattices, the MI phase region decreases for increasing detuning.\nFigure \\ref{fig3} shows the critical hopping $t_c$ as a function of the detuning. While in the first lobe $t_c$ decreases when $\\Delta$ increases, in the other lobes ($n > 1$) \nwe observe that the critical hopping reaches a maximum at $\\Delta = \\Delta_m$. \nFor this particular detuning value, the critical hopping decreases when $n$ increases, as predicted in \\cite{AMPK}. \nThis behavior is present in both structures.\nFigure \\ref{fig4} shows the critical hopping as a function of $n$ for $\\Delta=0$. \nThe properties of both lattices are again qualitatively equivalent, with only a gap between the two curves. \nFigure \\ref{fig5} presents the detuning values corresponding to the maximal critical hopping as a function of\n$n$. We observe that, except for the $n=1$ case, where the $t_c$ maximum is associated with the detuning minimum,\nthe detuning corresponding to the maximal critical hopping is approximately null. \n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{fig4}\n\\caption{Critical hopping for the SC and FCC lattices in terms of $n$ for null detuning.}\n\\label{fig4}\n\\end{figure}\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{fig5}\n\\caption{Detuning versus $n$ for maximal critical hopping on the SC and FCC lattices.}\n\\label{fig5}\n\\end{figure}\n\nBy performing an asymptotic expansion for large excitation number, $n \\gg 1$, we can find the dominant term of the critical hopping, which is given by\n\\begin{equation}\n t_c = \\frac{g}{16 \\tilde{d} n^{3\/2}} + {\\cal O} (n^{-3}), \\label{tc.largen}\n\\end{equation}\nwhere $\\tilde{d}=4$ for the FCC and BCC lattices, while for hypercubic lattices $\\tilde{d}$ corresponds to the dimension, i.e., 1 for the linear, 2 for the square, and 3 for the SC lattice. 
It is important to emphasize that in the large-$n$ regime the influence of the detuning \nand of the topology class (bipartite or non-bipartite) on the critical hopping $t_c$ is suppressed. \nFigure \\ref{fig6} confirms the prediction of Eq.~(\\ref{tc.largen}). It shows that the\nexact results are in excellent agreement with the asymptotic ones for $n>4$.\n\n\\begin{figure}\n\\includegraphics[width=0.45\\textwidth]{fig6}\n\\caption{Critical hopping versus $n$ for various lattices in the large-$n$ regime. The symbols correspond to \nnumerically obtained results for\n$\\Delta \/ g = -0.5, 0,$ and $0.5$; each detuning value produces the same result, with differences smaller than the symbol size.\nThe solid line represents the asymptotic analytical result given by Eq.~(\\ref{tc.largen}).}\n\\label{fig6}\n\\end{figure}\n\n\\section{Conclusions}\nWe have studied the properties of the MI-SF phase transition for the Jaynes-Cummings-Hubbard model on several Bravais lattices\nby means of the fermionic approximation. We find that the transition parameters of the hypercubic lattices are scalable, whereas those of the FCC lattice are not, since it is non-bipartite. \nThe Mott lobes for the SC and FCC lattices show a similar detuning dependence,\nwith only a quantitative difference, which is suppressed as the detuning increases. \nAn analogous feature is observed in $t_c$ vs. $n$ for null\ndetuning, and in $\\Delta_m$ vs. $n$, where their quantitative difference tends to become smaller as $n$\nincreases. \nFurthermore, we observed that not only the number of neighbors but also the lattice topology influences the\nMI-SF phase transition.\n \nThe FCC lattice shows a behavior that is quantitatively different from that of the bipartite lattices and cannot be rescaled onto them. \nOn the other hand, asymptotic\nresults for large excitation number indicate a universality in $t_c$: it obeys a power law in $n$ that does not depend on\n$\\Delta$, and the topology can be accounted for by rescaling $t_c$ with an effective parameter $\\tilde{d}$,\nwhich corresponds to the dimension for hypercubic lattices and to 4 for the BCC and FCC lattices.\n\n\\section*{ACKNOWLEDGMENTS}\nThis work was partially supported by CNPq, CAPES and FAPITEC\/SE (Brazilian Research Funding Agencies).\n
","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWhen deep transformer models make mistakes, ML engineers have had little\nrecourse but to collect a better training set and hope the problem is fixed.\nAdversarial datasets have exposed a variety of phenomena under which\nmodels trained on common datasets fail, particularly for question answering\nand natural language inference \\citep{jia-liang-2017-adversarial, gururangan-etal-2018-annotation, kim-etal-2018-teaching, mccoy-etal-2019-right, nie-etal-2020-adversarial, thorne-etal-2019-fever2}.\nThey have provided new test data to expose problems but not always\nnew training data to correct them.\nRecently, the natural language processing community has adopted methodologies\ninspired by software development for probing and testing the capabilities\nof a model.\n\\citet{ribeiro-etal-2020-beyond} introduce CheckList,\nwhich helps users to develop test suites of examples, organized by capability.\n\nCollecting hundreds or thousands of examples for each error phenomenon is\nslow, expensive, and not always feasible. In this paper, we investigate\nhow just a few examples of a phenomenon (``debugging examples'', which were\nnot in the original dataset)\ncan be utilized to correct a model.\nThe goal is higher accuracy on the phenomenon (``debugging accuracy'')\nwhile retaining accuracy\non the original dataset (``original accuracy''). This problem differs\nfrom domain adaptation and\nfew-shot learning because performance must be maintained on original\nexamples, and no new classes are introduced.\n\nWe repurpose published test suites for several natural language understanding\n(NLU) tasks as debugging problems,\nnot just diagnostics. We identify methods that can update a model using\na few debugging examples without the expense of iterating over the whole\noriginal training set. We introduce a new fast method that samples\nin-danger examples from the original training set to obtain even better\noriginal accuracy for comparable debugging accuracy.\n\n\n\\section{Related work}\n\nTwo recent works \\citep{zhu, de-cao-etal-2021-editing} study how to modify transformer\nlanguage models so that they store updated facts, testing their\napproaches on downstream tasks such as zero-shot relation extraction\nand closed-book fact checking. 
To apply these methods,\none is given a modified fact as an example to train on, and one must\npredict the modified fact correctly (success rate) while achieving low\nperformance deterioration on the original test set.\nBecause success rate is measured on just one example which is\navailable at training time, to determine whether the update really\ngeneralizes,\n\\citet{de-cao-etal-2021-editing} also measures {\\em equivalence accuracy}, which reflects\naccuracy on paraphrases of the updated fact.\n\nBy contrast, our setting provides ten examples (not just one) for a\nphenomenon where the predictions are to be updated.\nThe phenomenon being debugged may involve deeper semantics than\na factoid update, which usually requires only a reassociation of\nparticular words that appear in the example.\nWe assume we are given a testing set for the phenomenon, so\nwe can measure generalization\nby directly measuring accuracy on the testing examples\ninstead of paraphrasing the training examples.\n\nDespite these differences, ideas from these papers provide relevant\nideas that can be used in our debugging setting as well.\nOne baseline considered by \\citet{zhu}, which we call {\\em intensive\nfine-tuning}, simply takes the updated facts (for us, the debugging\ntraining set) and repeatedly performs gradient descent updates\non them until they are classified correctly.\n\nThe proposed approach of \\citet{zhu} is to minimize loss on the updated\nfacts (the debugging set) subject to either an $L^\\infty$ or $L^2$ constraint\non the difference of the model parameters. We consider these as baselines.\n\nAs \\citet{de-cao-etal-2021-editing} observe, constraining the norm of the parameter update\nis only loosely tied to how a parameter change can affect the output of\na model. For this reason they introduce an approach based on constraining\nthe Kullback-Leibler divergence between the updated model and the original.\nTheir proposed method trains a hypernetwork to read a single updated example\nand make a change minimizing debugging loss subject to the Kullback-Leibler\ndivergence constraint. That does not apply as well to our scenario\nof multiple debugging examples, but we borrow the idea of using\nKullback-Leibler divergence to incentivize similar predictions in a\nmore straightforward baseline.\n\n\\citet{sinitsin} introduce a meta-learning method for making a model\nthat will preserve original accuracy when performing a series of gradient\ndescent steps to change the label of any particular example.\nWe are interested in methods that can be applied to any model,\nand for real debugging it is not necessary that all examples be\neasily relabeled.\n\nContemporaneously to our work, \\citet{pasunuru-etal-2021-continual}\ninvestigate few-shot debugging on error categories that are apparently\ntoo broad to be corrected with just a few examples.\nAlthough they report some success with feature matching methods such as\nprototypical networks \\citep{snell}, they either\nsuppose that test examples are identified as needing a correction or not\n(i.e. debugging or original), more like domain adaptation, or else\ntrain the prototypical network on a combined training set, which is the\nslowness we are trying to avoid. 
Our setting requires a single model that can\nbe applied to all examples without source information.\n\n\\section{Method}\n\n\\begin{table*}[htb]\n\\begin{center}\n\\begin{tabular}{p{1.1in}p{.8in}p{.8in}p{.8in}p{.8in}p{.8in}}\n\\hline\nTest suite & Dog & Or\/And & Becoming & People & Passive \\\\\n\\hline\nBefore debugging & (.000, .913) & (.000, .913) & (.002, .913) & (.005, .913) & (.009, .913) \\\\\n\\hline\n{\\em Fast} & & & & & \\\\\nDebug only & (.731, .909) & (1.000, .909) & (1.000, .910) & (.922, .910) & (.819, .910) \\\\\n$L^2$ ($\\delta=.1$) & (.704, .909) & (1.000, .909) & (1.000, .911) & (.880, .910) & (.876, .910) \\\\\n$L^\\infty$ ($\\delta=.1$) & (.704, .909) & (1.000, .909) & (1.000, .911) & (.880, .910) & (.876, .910) \\\\\nK-L ($\\lambda=10$) & (1.000, .905) & (1.000, .908) & (1.000, .909) & ( 1.000, .908) & (1.000, .908) \\\\\nOurs & (.731, .909) & (.994, .913) & (1.000, .913) & (.993, .911) & (.975, .912) \\\\\n\\hline\n{\\em Slow} & & & & & \\\\\nMixed in & (1.000, .913) & (.999, .912) & (1.000, .913) & (.933, .912) & (.859, .912) \\\\\nOversampling & (1.000, .911) & (1.000, .913) & (1.000, .912) & (.999, .914) & (1.000, .911) \\\\\n\\hline\n\\end{tabular}\n\\caption{(Debugging accuracy, Original accuracy) on CheckList test suites for\nQQP.}\n\\label{tbl:qqp}\n\\end{center}\n\\end{table*}\n\n\\begin{table*}[htb]\n\\begin{center}\n\\begin{tabular}{p{1.1in}p{1.3in}p{1.3in}p{1.3in}}\n\\hline\nTest suite & Used to but now & Negation with neutral & Opinion matters \\\\\n\\hline\nBefore debugging & (.793, .925) & (.448, .925) & (.616, .925) \\\\\n\\hline\n{\\em Fast} & & & \\\\\nDebug only & (.860, .914) & (1.000, .917) & (.602, .915) \\\\\n$L^2$ ($\\delta=.1$) & (.860, .915) & (1.000, .919) & (.600, .915) \\\\\n$L^\\infty$ ($\\delta=.1$) & (.860, .915) & (1.000, .919) & (.600, .915) \\\\\nK-L ($\\lambda=10$) & (.838, .915) & (1.000, .916) & (.538, .920) \\\\\nOurs & (.877, .919) & (1.000, .913) & (.777, .885) \\\\\n\\hline\n{\\em Slow} & & & \\\\\nMixed in & (.909, .913) & (1.000, .925) & (.673, .923) \\\\\nOversampling & (.735, .931) & (1.000, .921) & (.512, .928) \\\\\n\\hline\n\\end{tabular}\n\\caption{(Debugging accuracy, Original accuracy) on CheckList test suites for\nSST-2.}\n\\label{tbl:sst2}\n\\end{center}\n\\end{table*}\n\n\\begin{table*}[htb]\n\\begin{center}\n\\begin{tabular}{p{1.1in}p{.8in}p{.8in}p{.8in}p{.8in}p{.8in}}\n\\hline\nTest suite & After If & P. 
Participle & Disjunction & Passive & NP\/S \\\\\n\\hline\nBefore debugging & (.000, .838) & (.001, .838) & (.005, .838) & (.004, .838) & (.006, .838) \\\\\n\\hline\n{\\em Fast} & & & & & \\\\\nDebug only & (1.000, .813) & (1.000, .804) & (1.000, .807) & (.929, .827) & (1.000, .811) \\\\\n$L^2$ ($\\delta=.1$) & (.999, .816) & (.999, .810) & (.999, .812) & (.933, .827) & (.999, .817) \\\\\n$L^\\infty$ ($\\delta=.1$) & (1.000, .812) & (1.000, .804) & (1.000, .807) & (.933, .827) & (1.000, .811) \\\\\nK-L ($\\lambda=10$) & (1.000, .825) & (1.000, .820) & (1.000, .822) & (1.000, .824) & (1.000, .826) \\\\\nOurs & (1.000, .841) & (.926, .835) & (1.000, .836) & (.994, .832) & (.939, .842) \\\\\n\\hline\n{\\em Slow} & & & & & \\\\\nMixed in & (.468, .835) & (.114, .833) & (.344, .837) & (.791, .835) & (.298, .837) \\\\\nOversampling & (.920, .836) & (.992, .837) & (1.000, .838) & (.869, .837) & (1.000, .833) \\\\\n\\hline\n\\end{tabular}\n\\caption{(Debugging accuracy, Original accuracy) on HANS test suites for\nMNLI.}\n\\label{tbl:hans}\n\\end{center}\n\\end{table*}\n\nWe suppose we are given a model $p_\\theta (x,y)$ trained on\ntraining set $X$. We are also given a\ndebugging training set $X^\\prime$, an original\ntest set $X_{test}$, and a debugging test set $X^\\prime_{test}$.\nThese four sets are pairwise disjoint.\nWe consider the cross-entropy loss\n\\begin{equation}\n\\mathcal{L} (x, y; \\theta) = - p_\\theta (x, y) \\log p_\\theta (x, y) \\ldotp\n\\end{equation}\nOur method initializes $\\theta_0 = \\theta$ and then performs\n{\\em intensive fine-tuning} on the debugging set $X^\\prime$, by\nperforming Adam \\citep{adam}\niterations $\\theta_{t+1} = Adam(\\mathcal{L}, X^\\prime, \\theta_t)$,\nwhere $Adam(\\mathcal{L}, S, \\theta)$ represents the parameter update\nachieved by training $\\theta$ with respect to the loss $\\mathcal{L}$ over\na complete epoch on $S$.\nIntensive fine-tuning stops at the minimal step $t = t_{X^\\prime}$ such that\n$\\mbox{argmax}_y p_{\\theta_t} (x_i, y) = y_i$ for all\n$(x_i, y_i) \\in X^\\prime$.\nWe write $\\theta_{X^\\prime} = \\theta_{t_{X^\\prime}}$.\n\nNext we collect random samples $W \\subset X$ that are misclassified\nby $\\theta_{X^\\prime}$ but not by $\\theta$. In our experiments we select\n$\\abs{W} = 2 \\abs{X^\\prime}$ such examples. Collecting $W$ is a fast\nprocess involving iterating through a random shuffle of $X$ and stopping\nwhen the required number of examples is retrieved. The expected iteration\ntime depends only on the error rates and correlation of the errors of the\nmodels and not on the size of the original training set $\\abs{X}$.\n\nFinally we restart from the original parameters $\\theta$ and intensively\nfine-tune using the set $X^\\prime \\cup W$. We take $\\theta^\\prime_0 = \\theta$\nand iterate Adam\n\\begin{equation}\n\\theta^\\prime_{t+1} = Adam(\\mathcal{L}, X^\\prime \\cup W, \\theta^\\prime_t)\n\\end{equation}\nuntil we reach $t^\\prime$ where\n$\\mbox{argmax}_y p_{\\theta^\\prime_{t^\\prime}} (x_i, y) = y_i$ for all\n$(x_i, y_i) \\in X^\\prime \\cup W$.\nThe resulting $\\theta^\\prime = \\theta^\\prime_{t^\\prime}$\nis the debugged model produced by our proposed method.
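\n\nIn schematic Python, the full procedure is compact. The sketch below is our own summary of the steps above (the model interface with \\verb|predict| and \\verb|loss| methods, the epoch cap, and the optimizer factory are assumptions, not the exact experimental code):\n\\begin{verbatim}\nimport copy, random\n\ndef train_until_correct(model, examples, make_opt, max_epochs=10):\n    # intensive fine-tuning: Adam epochs until every example\n    # is classified correctly (capped for safety)\n    opt = make_opt(model.parameters())\n    for _ in range(max_epochs):\n        if all(model.predict(x) == y for x, y in examples):\n            break\n        for x, y in examples:\n            opt.zero_grad()\n            model.loss(x, y).backward()\n            opt.step()\n    return model\n\ndef debug(model, X, X_debug, make_opt):\n    original = copy.deepcopy(model)  # keep theta for the restart\n    staged = train_until_correct(copy.deepcopy(model), X_debug, make_opt)\n    # in-danger set W: newly misclassified by the staged model only\n    W = []\n    for x, y in random.sample(X, len(X)):\n        if staged.predict(x) != y and original.predict(x) == y:\n            W.append((x, y))\n        if len(W) == 2 * len(X_debug):\n            break\n    # restart from the original parameters, fine-tune on the union\n    return train_until_correct(original, X_debug + W, make_opt)\n\\end{verbatim}\n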
\n\n\\section{Experiments}\n\nWe consider a BERT base model \\citep{devlin-etal-2019-bert} implemented in Pytorch\n\\citep{pytorch} by the HuggingFace Transformers library \\citep{wolf-etal-2020-transformers}\nfor all experiments, with batch size 16 per GPU on 3 or 4 GPU's,\notherwise following default training parameters.\n\nOur data sets are test suites from\nHANS \\citep{mccoy-etal-2019-right} debugging an MNLI model\n\\citep{williams-etal-2018-broad} and CheckList \\citep{ribeiro-etal-2020-beyond}\ndebugging models for SST-2 and QQP from GLUE \\citep{wang-etal-2018-glue}.\nWe take test cases with the worst accuracy before debugging, and select 10\nexamples from each suite for debugging ($X^\\prime$) and use the rest\n(e.g. 990 examples for HANS) to test debugging ($X^\\prime_{test}$).\nSee the appendix for details.\nOur data splits and our code for extracting examples\nfrom CheckList are available for download.\\footnote{https:\/\/github.com\/necla-ml\/debug-test-suites}\nFor HANS we use the BERT cased model and for CheckList we use the uncased model.\n\n\\subsection{Fast baselines}\n\nThe first of four fast baselines we consider, which is labeled ``debug only,'' performs\nintensive fine-tuning on the debugging set $X^\\prime$ only, returning the\nmodel $\\theta_{X^\\prime}$. In every case we tested, $t_{X^\\prime} \\leq 3$\nepochs over ten examples, so this completed within a minute.\n\nThe next baselines from \\citet{zhu} are\nfinding $\\theta^\\prime$ to minimize $\\mathcal{L}(X^\\prime, \\theta^\\prime)$\nsubject to an $L^\\infty$ constraint\n${\\norm{\\theta^\\prime - \\theta}}_\\infty < \\delta$ or an $L^2$ constraint\n${\\norm{\\theta^\\prime - \\theta}}_2 < \\delta$. Following \\citet{zhu} we use\n$\\delta = 0.1$ and implement the optimization as projected gradient descent,\n{\\em e.g.} for $L^\\infty$, taking a gradient descent step\nfrom $\\theta_0$ to $\\theta$\nand projecting the updated parameters back into the $L^{\\infty}$ ball as\n\\begin{equation}\n\\theta_0 + \\min ( \\max (\\theta - \\theta_0, - \\delta), \\delta)\n\\end{equation}\nlimiting the excursion in any coordinate to $\\pm \\delta$.\n\nThe fourth baseline we consider introduces a Kullback-Leibler divergence\non randomly sampled examples from $X$ into the loss:\n\\begin{equation}\n\\mathcal{L}^\\prime (\\theta^\\prime) = \\mathcal{L}(X^\\prime ; \\theta^\\prime) + \\lambda \\mathcal{L}_{KL} (X ; \\theta^\\prime)\n\\end{equation}\nwhere\n\\begin{equation}\n\\mathcal{L}_{KL} (X; \\theta^\\prime) = \\sum_{(x,y) \\in X} \\sum_{y^\\prime}\np_\\theta (x, y^\\prime) \\log \\frac{p_\\theta (x, y^\\prime)}{p_{\\theta^\\prime} (x, y^\\prime)}\n\\end{equation}\nIn practice, $\\mathcal{L}_{KL} (X; \\theta^\\prime)$ is estimated on\nminibatches from $X$ simultaneously with selecting a minibatch of the same\nsize from $X^\\prime$.\n\nTraining on each of these baselines stops when we reach $t^\\prime$ where\n$\\mbox{argmax}_y p_{\\theta^\\prime_{t^\\prime}} (x_i, y) = y_i$ for all\n$(x_i, y_i) \\in X^\\prime$. 
In our experiments, this stopping criterion is always met within\nthree epochs over $X^\\prime$.\n\n\\subsection{Slow baselines}\n\nOur first slow baseline is simply to train the model starting with the original\nBERT base parameters for three full epochs on randomly shuffled\n$X^\\prime \\cup X$, without accounting for the difference in\nsize $\\abs{X^\\prime} \\ll \\abs{X}$. We call this ``mixed in'' training.\n\nOur second baseline (``oversampling'')\nequally weights $X^\\prime$ and $X$ in the training.\nIt starts with original BERT base parameters and trains for three full epochs\nover $X$, with each batch consisting half of examples from $X$\nand half of examples from $X^\\prime$, interleaved. Although the $X$ samples\nare drawn without replacement, the $X^\\prime$ samples are drawn with\nreplacement and are each seen many times.\n\n\\subsection{Results}\n\nWe consider the CheckList and HANS test suites for QQP, SST-2, and MNLI\ntogether (Tables~\\ref{tbl:qqp},~\\ref{tbl:sst2}, and~\\ref{tbl:hans}).\nAmong fast methods, our method has the highest\noriginal accuracy in 11 out of 13 subcases and the highest debugging\naccuracy in 6 out of 13. Among several fast, good methods for improving\ndebugging accuracy, ours is thus the best choice for retaining\noriginal accuracy.\nKullback-Leibler divergence, which ranks first most often among fast methods \nin debugging accuracy, only ranks first in original accuracy once out of\n13 subcases. Notably, both methods frequently outperform the debug only\napproach in debugging accuracy, showing that\nsampling non-debugging examples helps achieve an update\nthat generalizes better even on the debugging phenomenon.\n\nConsidering slow methods, oversampling achieves maximal debugging\naccuracy on 8 of 13 subcases and best original accuracy on 8 of 13.\nOn HANS, mixing the debugging examples into the full training set is not\nsufficient for them to be learned, though this method achieves\nreasonable debugging accuracy on the other datasets.\n\n\\begin{figure}[htb]\n\\includegraphics[width=3in,height=2.2in]{original-acc} \n\\includegraphics[width=3in,height=2.2in]{debugging-acc}\n\\caption{Comparing\nour method to debug-only intensive fine-tuning for\ndifferent numbers of shots.}\n\\label{fig:bothfig}\n\\end{figure}\n\n{\\bf Number of shots and stability.}\nBesides the 10-shot setting described above, we compare our method to\n``debug only'' intensive fine-tuning for 5 shots and 20 shots.\nResults are shown for HANS's \\verb+cn_after_if_clause+ test suite\nin Figure~\\ref{fig:bothfig}. Each experiment is repeated, sampling\neight different sets of debugging and in-danger examples.\nThe standard deviation in accuracy over the samples is indicated by\nthe error bars around each mean result in the figure.\n\nFive shots is too few to be sure of good debugging accuracy.\nOur method achieves significantly higher debugging accuracy and original\naccuracy, compared to intensive fine-tuning, with ten or twenty shots.\nWith twenty shots the debug only method loses original accuracy, possibly\ndue to the tightened constraints of classifying more debugging examples\ncorrectly.\n\n{\\bf Other base models.} We repeat 10-shot experiments using Electra\n\\citep{electra}\ninstead of BERT.
Using Electra, our method has the highest original\naccuracy among fast methods in 7 out of 13 subcases and the highest\ndebugging accuracy in 8 out of 13.\n\n\\begin{table}[htb]\n\\begin{center}\n\\begin{tabular}{lr}\n\\hline\nMethod & Seconds \\\\\n\\hline\n{\\em Fast} & \\\\\nDebug only & 10.89 \\\\\n$L^2$ & 14.74 \\\\\n$L^\\infty$ & 15.85 \\\\\nK-L & 14.79 \\\\\nOurs - {\\em total} & 25.29 \\\\\n\\, {\\em debug-only fine-tuning} & 10.89 \\\\\n\\, {\\em finding new misclassifications $W$} & 2.86 \\\\\n\\, {\\em final fine-tuning} & 11.54 \\\\\n\\hline\n{\\em Slow} & \\\\\nMixed in & 12663.14 \\\\\nOversampling ({\\em estimated}) & 25326.28 \\\\\n\\hline\n\\end{tabular}\n\\caption{Model debugging time in seconds.}\n\\label{tbl:time}\n\\end{center}\n\\end{table}\n\n{\\bf Time.} \nIntensive fine-tuning usually finishes after a few small batches,\nbut collecting the 20 misclassified examples can potentially require\nmore evaluations.\nOn QQP these can be found in 1\/60 of an epoch (forward only)\nand at worst (on ``negation with neutral'' of SST-2) in 1\/5,\nyielding roughly 720x and 60x speedups over oversampling (three\nepochs, forward and back, alternating with debugging examples), respectively.\n\nIn Table~\\ref{tbl:time} we collect total timings for each debugging procedure\non HANS's \\verb+cn_after_if_clause+ test suite,\nincluding the time our method needs to collect the new misclassifications $W$\nfrom the original MNLI training set. Whereas the slow methods require hours\nto update the model, all the fast methods finish in a matter of seconds.\n\n\\section{Conclusion}\n\nWe study the new problem of few-shot debugging of natural language\nunderstanding models on narrowly defined test suites,\naddressing a real-life need that past benchmark datasets do not cover.\nIntensive fine-tuning on debugging examples together with a few newly\nmisclassified examples is substantially faster than full epoch retraining,\nand retains superior accuracy on the original dataset in more of our tests\nthan any other fast method, while achieving competitive debugging accuracy.\nKullback-Leibler regularization may achieve better debugging accuracy, but\nits original accuracy lags, probably because it samples randomly rather\nthan focusing on the newly misclassified examples that the debugging\nexamples conflict with.\nOur results suggest a way for practitioners to quickly address problems\nin deployed systems and inspire the search for more refined ways of\nusing debugging information.\n\nTo further this research, there is a need for test suites that are not\nconstructed by templates, so that the debugging phenomena are less easily\nlearned, yet not so broad that they cannot be taught in the few-shot\nsetting. This limitation forced us to focus on relatively small differences\nin accuracy. Because our method requires only a few debugging examples,\nit should be practical to construct test suites by hand or by manually\norganizing existing misclassifications.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{introduction}\nQuantum mechanics is one of the most successful physical theories humankind has ever formulated. Nonetheless, its interpretation and range of validity elude our full grasp. One of the basic features of quantum physics is the superposition principle which, when applied to the macroscopic world, leads to counter-intuitive states akin to the celebrated Schr\\\"odinger's cat.
While \nmodels beyond quantum mechanics, challenging some of its interpretational issues, have been formulated since its early days, testing the predictions of the theory when applied to the macroscopic world has proven to be a tall order. The main reason for this is the intrinsic difficulty in isolating large systems from their environment. \n\nSpace offers a potentially attractive arena for such an endeavour, promising the possibility of creating and verifying the quantum properties of macroscopic superpositions far beyond current Earth-based capabilities~\\cite{kaltenbaek2012macroscopic,kaltenbaek2016macroscopic,2018cosp...42E1659K,QPPF}. In this work, we focus on the efforts to test the boundaries of quantum physics in space employing nanoparticles, which are among the best-suited candidates for quantum superpositions of high-mass objects. It should be noticed that, while we will focus on testing quantum physics, large spatial superpositions of massive systems are bound to be sensitive probes for many other physical phenomena, from dark matter and dark energy searches~\\cite{riedel2013direct, bateman2015existence, riedel2017decoherence, carney2019ultralight, carney2020mechanical, monteiro2020search,khoury2004chameleon, rider2016search, moore2014search} to gravimetry and Earth observation applications~\\cite{qvarfort2018gravimetry,hebestreit2018sensing}.\n\nWe delve into the possibilities offered by the state-of-the-art of nanoparticle physics projected into the space environment. In doing so, we offer an ab-initio estimate of the potential of space-based interferometry with some of the largest systems ever considered and show that there is room for testing quantum mechanics at an unprecedented level of detail. \n\nThe remainder of this paper is organized as follows. In Sec.~\\ref{Ch1}, after a brief introduction to the problem at hand and its relevance in fundamental physics, we discuss the advantages potentially offered by a space environment for quantum experiments based on large quantum superpositions of nanoparticles. We also give a self-contained overview of the current state-of-the-art of space mission proposals. In Sec.~\\ref{Applications}, we discuss a first class of experiments that can be performed in space, i.e., non-interferometric ones which do not require the creation of macroscopic superpositions and exploit the free-evolution spread of the position of a quantum particle. In Sec.~\\ref{ProofOfPrincipleImlementation}, we consider interferometric experiments which, in contrast, require the creation and verification of large superpositions but also offer the benefit of a direct test of both the superposition principle of quantum mechanics and of competing theories. In particular, we present here an ab-initio estimate of the potential of space-based interferometry with large nanoparticles. We conclude in Sec.~\\ref{WishList} with a discussion of our results and an outlook on the evolution of the field.\n\n\n\n\\section{Superposition of macroscopic systems: the case for space}\\label{Ch1}\nThe predictions of quantum physics have been confirmed with a high degree of precision in a multitude of experiments, from the sub-atomic scale up to matter-wave interferometry with test masses of nearly $10^5$ atomic mass units (amu)~\\cite{fein2019quantum}. The basis for observing matter-wave interference is the quantum superposition principle, one of the pillars of quantum physics.
While quantum physics does not pose any fundamental limitation to the size of quantum superposition states, the Gedankenexperiment of Schr\\\"{o}dinger's cat~\\cite{Schroedinger1935b} illustrates the controversies entailed by the superposition principle when extended to the macroscopic world. Many proposals have been formulated in an attempt to establish a mechanism that would lead to the emergence of a classical world at macroscopic scales. Among them we find Bohmian mechanics~\\cite{bohm1952suggested,durr2009bohmian}, decoherence histories~\\cite{griffiths1984consistent}, the many-worlds interpretation~\\cite{everett1957relative} and collapse models~\\cite{bassi2003dynamical,RevModPhys.85.471}, to name a few. The latter differ from the other proposals in that they predict a phenomenology that deviates from that of standard quantum mechanics, albeit in a subtle fashion. In this sense, collapse models represent an alternative construction to standard quantum theory, rather than an alternative interpretation that recovers all the predictions of the latter.\nIn light of the central role that they play in the experimental investigation of quantum macroscopicity~\\cite{PhysRevLett.110.160403,PhysRevLett.119.100403}, in the following, we will focus on such models as benchmarks for precision tests of quantum mechanics.\n\nIn 2010, a proposal for experimentally creating and verifying a state akin to that of Schr\\\"{o}dinger's cat, based on the use of massive mechanical resonators, was put forward within the context of the MAQRO proposal~\\cite{kaltenbaek2012macroscopic}. The latter put forward the vision of harnessing the unique environment provided by space to test quantum physics in a dedicated, medium-sized space mission to be conducted within the framework of the ``Cosmic Vision programme'' run by the European Space Agency (ESA). The scope of the endeavour was to create a macroscopic superposition of motional states of a massive particle and probe its quantum coherence by allowing the wave functions of the components of such a superposition to interfere, as in a double-slit experiment. The space-based environment would guarantee unprecedented levels of protection from environmental noises, as well as favourable working conditions for the engineering of the cat-like state~\\cite{kaltenbaek2012macroscopic}.\n\nNear-field interferometry was later identified as a viable route towards the achievement of the original goals of MAQRO~\\cite{kaltenbaek2016macroscopic}, holding the promise of testing the superposition principle with particles of mass up to $10^{11}$~amu. This would be at least six orders of magnitude larger than the current record~\\cite{fein2019quantum}. It would also far exceed the projected upper bound on the masses that could be used in similar ground-based experiments. Such terrestrial upper bounds are strongly limited by the achievable free-fall times on Earth~\\cite{gasbarri2020prospects} (cf. Sec.~\\ref{advantage}). \n\n{\nThe basic payload consists of optically trapped dielectric nanoparticles with a target mass range from $10^7$ to $10^{11}$~amu. The main scientific objectives are to perform both near-field interferometric and non-interferometric experiments. In both cases, high-vacuum and cryogenic temperatures are needed. The particles, after loading, are initially trapped in an optical cavity and their center-of-mass degree of freedom is cooled by a 1064~nm laser, entering the quantum regime.
For this purpose, two TEM$_{00}$ modes with orthogonal polarization are to be used for trapping and side-band cooling along the cavity axis. The transverse motion is instead cooled employing a TEM$_{01}$ and a TEM$_{10}$ mode. After the initial state preparation, the particle can be released from the optical trap and undergo different evolutions -- free-fall expansion, coherent manipulations, and quantum detection -- depending on the experiments to be performed. \n}\n\n{\nThe feasibility of the avenue identified in MAQRO has recently been investigated in a Quantum Physics Payload platForm (QPPF) study at the ESA Concurrent Design Facility~\\cite{QPPF}. Such a study has identified {\\bf (a)} the core steps towards the realisation of a space-based platform for high-precision tests of quantum physics, and {\\bf (b)} the potential of such platforms to test quantum physics with increasing test mass, with the aim of ascertaining potential deviations from the predictions of quantum physics due, for instance, to gravity. The ultimate goal of these endeavours is to provide a reference mission design for quantum-physics experiments in space.\n}\n\n{\nThe QPPF~\\cite{QPPF} study culminated in the identification of a suitable combination of feasible free-fall times, temperature and pressures [cf. Table \\ref{tab:parameterssmall}], setting a target of 2034 for the launch of the mission. \n}\n{While several technical challenges remain to be addressed, the QPPF study has consolidated the intention to leverage the expertise in near-field interferometry and optomechanics, where state-of-the-art experiments with large molecules at nearly $10^5$~amu have been reported~\\cite{fein2019quantum}, ground-state cooling has been achieved~\\cite{delic2020cooling}, and proof-of-concept proposals for ground-based interferometry with large-mass experiments have been put forward~\\cite{bateman2014near,gasbarri2020prospects}. \nIn this respect, it should be mentioned that theoretical proposals based on other approaches, most notably magnetic levitation, have recently attracted the attention of part of the community~\\cite{pino2018chip}. These proposals envision testing the superposition principle with ground-based experiments, overcoming some of the existing limitations through \nthe low noise level, long coherent-operation times, and absence of light fields driving the dynamics promised by magnetic levitation. The use of masses of the order of $10^{13}$~amu has been forecast in this context~\\cite{pino2018chip}. While extremely interesting, magnetic levitation technologies for quantum experiments are still at an early stage~\\cite{rusconi2018levitated,\nrusconi2017linear,rusconi2017quantum,druge2014damping} with, at present, no quantum superposition having been created with such techniques.
}\n\n{\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{c|c}\n \\hline\n \\textbf{Parameters} & \\textbf{Values}\\\\\n \\hline\n \\hline\n Free-fall time $t$& up to $100\\,$s \\\\\n Environmental temperature $T_\\text{env}$& down to 20\\,K\\\\\n Pressure& down to $10^{-11}\\,$Pa\\\\\n Masses& $10^{7} - 10^{11}$\\,amu\\\\ \n Diameters & $20 - 400$\\,nm\\\\\n \\hline\n \\end{tabular}\n \\caption{Possible combinations of the parameters identified for the MAQRO mission and considered by the ESA QPPF study~\\cite{QPPF}.}\n \\label{tab:parameterssmall}\n\\end{table}\n}\n\n\n\\subsection{In search of the quantum-to-classical boundary}\\label{subsec::QuantumClassical}\n\nThe extreme fragility of spatial quantum superpositions in the presence of environmental interactions (ubiquitous in any realistic setting) makes testing the superposition principle at macroscopic scales a tall order. Indeed, such interactions result in a suppression of quantum coherence in position that can be described by the following master equation in position representation~\\cite{joos1985emergence}\n\\begin{equation}\\label{master_gen}\n \\frac{d\\langle x|\\hat\\rho_{t}|x'\\rangle}{dt}=-\\frac{i}{\\hslash}\\langle x|\\left[\\hat H,\\hat\\rho_{t}\\right]|x'\\rangle-\\Gamma(x-x')\\langle x|\\hat\\rho_{t}|x'\\rangle,\n\\end{equation}\nwhere $\\hat \\rho_t$ is the statistical operator of the system at time $t$, $\\hat H$ is the system's Hamiltonian, and\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{decoherenceGammaV2.pdf}\n \\caption{Typical dependence of the decoherence function $\\Gamma(x)$ on the delocalization distance $x$. The relevant limits are $\\Gamma(x)\\sim\\Lambda x^2$ for $x\\ll a$ and $\\Gamma(x)\\sim\\gamma$ for $x\\gg a$, where $a$ is the characteristic length of the noise~\\cite{PhysRevA.84.052121}.}\n \\label{fig:decoherencefunction}\n\\end{figure}\nthe last term of Eq.~\\eqref{master_gen} describes the deviations from unitary dynamics occurring at a rate $\\Gamma(x)$, which quantifies the decoherence effect. The typical behaviour of the latter, with a quadratic dependence for small spatial separations and a saturation for large ones, is shown in Fig.~\\ref{fig:decoherencefunction}. Such deviations from unitarity can be due to environmental noises or non-standard modifications of quantum mechanics~\\cite{RevModPhys.75.715,PhysRevA.84.052121,RevModPhys.85.471}. The environmental influence is always present, and it inevitably disturbs the experiment, compromising the possibility of detecting superposition states. \nThe typical noise effects on the experimental setups addressed in this paper are due to collisions with residual gas particles, blackbody radiation, vibrations and, in general, any noise propagating through the apparatus. \nQuantitatively, for a sphere made of fused silica with a radius of $60\\,$nm and an internal temperature of $40\\,$K,\nplaced in vacuum in an environment at $20\\,$K and a pressure of $10^{-11}\\,$Pa [cf.~Table~\\ref{tab:parameterssmall}], one has that, for spatial superpositions larger than a nanometer but smaller than a millimeter -- the range of interest for the interferometric tests we will consider here -- gas collisions give a constant contribution $\\Gamma(x-x')\\sim1.1\\,$s$^{-1}$ [cf. Ref.~\\citeonline{PhysRevA.84.052121}], while the contribution from blackbody radiation depends explicitly on the superposition size as $\\Gamma(x-x')\\sim 4.9\\times 10^{12}\\times |x-x'|^2$\\,m$^{-2}$s$^{-1}$.
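\n\nTo get a feeling for the relative weight of these two environmental channels, the rates just quoted can be compared directly as a function of the superposition size. The following minimal sketch (plain Python; the numerical prefactors are simply those of the estimates above) does so for a few representative delocalization distances within the stated range of validity.\n\\begin{verbatim}\nimport numpy as np\n\n# Decoherence rates for the fused-silica sphere quoted above\n# (R = 60 nm, T_int = 40 K, T_env = 20 K, P = 1e-11 Pa).\nGAMMA_GAS = 1.1           # 1\/s, saturated collisional rate\nBB_PREFACTOR = 4.9e12     # 1\/(m^2 s), blackbody prefactor\n\nfor x in [1e-9, 1e-7, 1e-5, 1e-4]:   # superposition sizes (m)\n    gamma_bb = BB_PREFACTOR * x**2    # quadratic regime\n    total = GAMMA_GAS + gamma_bb\n    print(f'x = {x:.0e} m: gas = {GAMMA_GAS:.2g}\/s, '\n          f'blackbody = {gamma_bb:.2g}\/s, 1\/e time = {1\/total:.2g} s')\n\\end{verbatim}\nFor these numbers, gas collisions dominate for superpositions below roughly half a micron, while blackbody decoherence takes over beyond that.\n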
\n\n\nAs mentioned, Eq.~\\eqref{master_gen} is conducive to an investigation of potential deviations from standard quantum theory due, for instance, to collapse models. \nDue to its rich phenomenology, theoretical interest and the current strong experimental effort in testing it~\\cite{RevModPhys.85.471,carlesso2019current,bassi2014collapse,vinante2019testing,carlesso2019collapse,vinante2020narrowing}, in the following we will focus on the Continuous Spontaneous Localization (CSL) model.\nThe CSL model describes, through a stochastic and non-linear modification of the Schr\\\"{o}dinger equation, the collapse of the wave function as a spontaneous process whose strength increases with the mass of the system~\\cite{Ghirardi1990a}. Its action is characterized by two phenomenological parameters: $\\lambda_\\text{CSL}$ and $r_c$. These are, respectively, the collapse rate, which quantifies the strength of the collapse noise, and the localization length of the model, which sets the spatial resolution of the collapse. Theoretical considerations lead to different proposed values for such parameters: $\\lambda_\\text{CSL}=10^{-16}\\,$s$^{-1}$ and $r_c=10^{-7}\\,$m for Ghirardi, Rimini and Weber~\\cite{ghirardi1986unified}; $\\lambda_\\text{CSL}=10^{-8\\pm2}\\,$s$^{-1}$ for $r_c=10^{-7}\\,$m, and $\\lambda_\\text{CSL}=10^{-6\\pm2}\\,$s$^{-1}$ for $r_c=10^{-6}\\,$m by Adler \\cite{adler2007collapse}.\nConsequently, one can describe the evolution of the density matrix of a system\nwith Eq.~\\eqref{master_gen} where, in addition to the decoherence effects ascribed to the environment, a term accounting for spontaneous collapse appears. The form of such a term and its effects are discussed in detail in Section \\ref{Applications}. This reveals the importance of a careful characterization of environmental sources of decoherence in view of probing new physics, which is the aim of the space experiments with large nanoparticles reviewed here. It should indeed be noted that the experimental setups considered here are relevant also for testing other models predicting non-standard decoherence mechanisms~\\cite{bahrami2014schrodinger,pikovski2015universal,gasbarri2017gravity} or models like the Di\\'osi-Penrose (DP) one~\\cite{diosi1987universal,PhysRevA.40.1165,penrose1996gravity} in which the wave function collapse is related to gravity. \n\n\n\\subsection{Possible advantages of a space environment}\\label{advantage}\nThe main advantage offered by space for quantum experiments with large particles is undoubtedly a long free-fall time. While freely-falling systems are not necessary for some non-interferometric experiments, they are the gold standard for the interferometric ones. For the latter, long free-fall times are of crucial importance to achieve better sensitivity and to increase the mass of the particles in quantum superposition, as the rate of the wavefunction spreading is set by $1\/m$. In state-of-the-art interferometric experiments, and for masses of up to $10^6$\\,amu, the necessary free-fall times are far below $1\\,$s and can be readily achieved in laboratory experiments~\\cite{bateman2014near,fein2019quantum,gasbarri2020prospects}.
However, going to significantly higher test masses requires correspondingly longer free-fall times~\\cite{kaltenbaek2012macroscopic,kaltenbaek2016macroscopic,gasbarri2020prospects}, eventually making it unavoidable to perform such experiments in space (see Fig.~\\ref{fig::talbotTime}).\nLong free-fall times also help in non-interferometric settings. The latter do not require the creation and verification of quantum superpositions but are based on the modified dynamics predicted by alternative models to quantum mechanics -- such as the heating induced by the CSL noise on massive particles. In this context, letting the particle fall freely makes it possible to reduce the effects of all the sources of noise that affect the centre-of-mass motion. Among them is certainly acceleration noise, typically originating from mechanical vibrations, but one should also include other forces acting on the particle's motion that might be present in the experiment. Reducing them maximizes the effects induced by modifications of quantum mechanics. In what follows, we provide a brief yet rigorous account of the most relevant of such forces. \n\nAn equally important challenge is the isolation from vibrations,\nwhich contribute to the overall decoherence mechanisms acting on the system. Especially in the low-frequency regime, space experiments can provide strong advantages compared to those performed on the ground. For example, ensuring that an interference pattern with a period of $d=1\\,\\mu$m formed during an evolution time of $T=100\\,$s is not washed out requires a maximum acceleration noise of $S_{aa}^\\text{max}\\sim3d^2\/(8\\pi T^3)$, corresponding to $\\sqrt{S_{aa}}\\sim 3.5 \\times 10^{-10}\\,\\mathrm{m\\, s^{-2}\/\\sqrt{Hz}}$.\nSuch low noise can be achieved in space. The most impressive achievement so far has been LISA Pathfinder, with an acceleration noise as small as $\\sqrt{S_{aa}}\\sim 10^{-15}\\,\\mathrm{m\\, s^{-2}\/\\sqrt{Hz}}$ in the mHz regime \\cite{PhysRevLett.120.061101}. \nThis value has to be compared to those of state-of-the-art ground-based experiments. For example, the Bremen drop-tower allows for up to around 9\\,s of free fall in an environment characterized by an acceleration noise of $\\sim 10^{-5}\\,\\mathrm{m\\, s^{-2}\/\\sqrt{Hz}}$ (cf. Ref.~\\citeonline{Selig2010a}). \nWe also mention that the exceptionally low level of noise achieved in LISA Pathfinder has already allowed this experiment to provide bounds on the CSL parameters that are more than three orders of magnitude stronger than those provided by the ground-based gravitational-wave detector LIGO~\\cite{carlesso2016experimental,helou2017lisa,carlesso2018non}, thus demonstrating the advantages -- in terms of isolation from vibration -- of space-based experiments.\n\n\n\nA potential additional advantage of a dedicated space mission is data statistics. In state-of-the-art matter-wave experiments~\\cite{fein2019quantum}, many test particles pass through the interferometer simultaneously. In the proposals considered in the QPPF, which suggest preparing the initial state of the test particle by optomechanical means, the test particles pass through the interferometer individually~\\cite{RomeroIsart2011b,kaltenbaek2012macroscopic,bateman2014near}. Thus, the time of the experiment is inevitably longer, and grows with the number of data points required.
For example, in an experiment with $10^4$ data points, where a single shot takes $100\\,$s ($10\\,$s), the data collection would last more than $11.5$ days ($27$ hours). This compares very favourably to the typical number of two or three runs per day that can be performed in a micro-gravity environment on the ground, as at the Bremen drop-tower, which is limited by the necessity of setting and resetting the pressure in the entire tower between two consecutive drops\n\\footnote{It should be mentioned that this limitation is not present in the Hannover {\\it Einstein Elevator} platform where, for free-fall times of $\\le 4\\,$s, 300 runs per day are possible~\\cite{lotz2020tests}. }.\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=\\columnwidth]{talbot_timev2.pdf}\n\\caption{The free-fall time for the test particles in a near-field interferometer is of the order of the Talbot time $t_T=m d^2\/h$, where $m$ is the particle mass, $h$ is Planck's constant, and $d$ is the grating period. In experiments with significantly higher test masses than the current record~\\cite{fein2019quantum}, the required free-fall time may eventually necessitate a space environment~\\cite{kaltenbaek2016macroscopic}.}\n\\label{fig::talbotTime}\n\\end{figure}\n\n\\subsection{State-of-the-art technological platforms}\nThe space environment promises, in principle, to provide a unique combination of low temperature, extremely high vacuum and very long free-fall times. In particular, the temperature in space is naturally limited by the temperature of the microwave background radiation of about $3\\,$K, while the vacuum is instead limited by the presence of cosmic and solar radiation~\\cite{Biermann1957a}. However, in actual space-based experiments, additional shielding may be required. For example, if the spacecraft is in an orbit about one astronomical unit from the Sun, the payload will have to be shielded from direct solar radiation. The spacecraft will require stabilization using microthrusters, which will introduce force noise, and there will need to be station-keeping maneuvers. These measures, as well as the gravitational field of the spacecraft, will reduce the achievable free-fall time. In addition, the equipment necessary to operate the payload and the spacecraft will typically be in an enclosure kept at a stable temperature of about $300\\,$K. Inside the spacecraft, it is therefore not trivial to achieve cryogenic temperatures and extremely high vacuum levels. Achieving the vacuum levels and temperatures necessary for macroscopic tests of quantum physics therefore requires careful consideration. In the context of the MAQRO mission concept, it has been suggested to use purely passive radiative cooling and direct outgassing to space in order to achieve these requirements~\\cite{kaltenbaek2012macroscopic,kaltenbaek2016macroscopic,hechenblaikner2014cold,zanoni2016thermal}. During the QPPF study, this concept was adapted to protect the scientific instrument with a cover and to enhance the cooling performance by additional active cooling using a hydrogen sorption cooler~\\cite{QPPF}. \n\nThe protective cover limits outgassing to space, and the QPPF study concluded that the achievable pressure would at best be\n$10^{-11}$\\,Pa instead of the targeted pressure of $10^{-13}$\\,Pa. As a result,\nthe experiments were constrained to a test particle mass of up to $2\\times10^9$\\,amu and a free-fall time of up to $40\\,$s.
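\n\nTo put these numbers in perspective, the Talbot-time scaling $t_T=m d^2\/h$ of Fig.~\\ref{fig::talbotTime} can be evaluated directly across the mass range of Table~\\ref{tab:parameterssmall}. The following minimal sketch does so in Python; the grating period $d=100$~nm is chosen purely for illustration and is not a mission parameter.\n\\begin{verbatim}\nimport scipy.constants as const\n\nD_GRATING = 100e-9        # grating period (m), illustrative only\n\nfor mass_amu in [1e7, 1e8, 1e9, 1e10, 1e11]:\n    m = mass_amu * const.atomic_mass          # mass in kg\n    t_talbot = m * D_GRATING**2 \/ const.h     # t_T = m d^2 \/ h\n    print(f'm = {mass_amu:.0e} amu  ->  t_T = {t_talbot:10.2f} s')\n\\end{verbatim}\nWith these illustrative numbers, the Talbot time grows from a fraction of a second at $10^7$~amu to hundreds of seconds at $10^{10}$~amu and beyond, consistent with the free-fall-time constraints just discussed.\n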
Because this vacuum limitation has a significant impact on the science objectives, improving the achievable vacuum in space-based experiments will be a critical issue to be solved before a space mission of this type can be launched. Two other critical issues were identified in the QPPF study~\\cite{QPPF}. \nFirstly, the mechanism needed for loading the test particles into an optical trap in extremely high vacuum conditions needs further scrutiny, and several viable alternatives are under consideration. The QPPF study suggested desorbing particles from microelectromechanical systems (MEMS) and guiding them to the experiment using linear Paul traps~\\cite{QPPF}, a mechanism that is currently under further investigation by ESA. An alternative suggestion makes use of a combination of linear Paul traps and hollow-core photonic-crystal fibers~\\cite{kaltenbaek2016macroscopic}, or of the desorption of particles from a piezoelectric substrate using surface acoustic waves. \nThe challenge of such desorption-based approaches is to make sure that the desorbed sub-micron particles do not carry a net charge~\\cite{PhysRevA.95.061801}, and that their center-of-mass motion is sufficiently cold to allow for optical trapping. At the same time, a sufficiently low internal temperature of the particles is required to avoid decoherence due to the emission of blackbody radiation~\\cite{RomeroIsart2011b,kaltenbaek2012macroscopic}. Secondly, the optical gratings used for preparing non-classical states have apertures comparable to the size of the nanoparticles to be employed. This can decohere the quantum states via photon scattering. A recent study~\\cite{PhysRevA.100.033813} investigated the latter issue, extending the formalism of near-field interferometry beyond the point-particle approximation and offering the basis for the analysis reported in Sec.~\\ref{ProofOfPrincipleImlementation}.\n\n\n\\section{Non-interferometric tests}\\label{Applications}\n\nIn this section, we focus on non-interferometric tests of quantum mechanics. Unlike the interferometric ones, this class of tests does not rely on the availability of quantum superpositions but is based on side effects of modifications of quantum mechanics. Consequently, they can be performed also in the presence of strong decoherence, although the latter will influence the effectiveness of the test. For this reason, they currently provide the most stringent tests of collapse models on the ground. \n\n\nA plethora of different experiments belong to this class and exploit different physical systems. Among them, precision measurements of the internal energy of a solid, expected to vary due to the collapse noise, have been exploited in Refs.~[\\citeonline{adler2018bulk,bahrami2018testing,adler2019testing}]. The modifications to the free-evolution dynamics of Bose-Einstein condensates due to the presence of the collapse mechanism have been investigated in Refs.~[\\citeonline{laloe2014heating,bilardello2016bounds}]. X-ray measurements -- which exploit the fact that the collapse mechanism makes charged particles emit radiation~\\cite{fu1997spontaneous,curceanu2016spontaneously,piscicchia2017csl} -- have already provided strong limits on the Di\\'osi-Penrose model~\\cite{donadi2020underground}. In this context, optomechanical experiments are also of particular relevance~\\cite{vinante2016upper,carlesso2016experimental,PhysRevA.98.022122,vinante2017improved,helou2017lisa,zheng2020room,vinante2020narrowing,pontin2020ultranarrow}.
They are typically used to characterize noise~\\cite{aspelmeyer2014cavity,millen2020optomechanics,goldwater2019quantum}, and thus possibly discriminate between standard and non-standard noise sources~\\cite{vinante2020narrowing}.\n\n\nOne of the most promising non-interferometric tests in space is based on monitoring the expansion of the centre-of-mass position spread of a freely-falling nanoparticle~\\cite{PhysRevA.93.062102}. The main reason, as shown in Eq.~(\\ref{variance}) below, is that the position variance grows as the cube of time, making evident the advantage of the long free-fall times that can be achieved in space.\nIt could be argued that long times can also be achieved in ground experiments by suspending the particles using a harmonic trap. However, the use of such a trap would certainly introduce additional noise and, more importantly, it would imply a position-variance growth that scales only linearly with time~\\cite{bilardello2016bounds,PhysRevA.94.010104}. \n\nGiven the evolution in Eq.~(\\ref{master_gen}), it is easy to show that its non-unitary part does not affect the average position $\\langle x_{t}\\rangle$ of the particle, but changes its variance $\\sigma^{2}= \\braket{x_{t}^{2}}-\\braket{x_{t}}^{2} $ by an amount $\\braket{\\Delta \\sigma^{2}}$ that, for a free system and in the $x\\ll{a}$ regime [cf. Fig.~\\ref{fig:decoherencefunction}], reads\n\\begin{equation}\\label{variance}\n\\braket{\\Delta \\sigma^{2}}=\\frac{2\\Lambda\\hslash^{2}t^{3}}{3m^{2}}.\n\\end{equation}\nThe diffusion rate $\\Lambda$ is the sum of different contributions stemming from residual gas collisions, blackbody radiation and non-standard sources, such as the CSL or the Di\\'osi-Penrose model.\nFor the CSL model and a homogeneous sphere of radius $R$ and mass $M$, one has~\\cite{zheng2020room}\n\\begin{equation}\n\\Lambda_{\\text{CSL}}=\\frac{6 \\lambda_\\text{CSL} M^2 }{m_0^2 R^2\\eta^4_{CSL}}\\left[\\left(1+\\frac{\\eta^2_{CSL}}{2}\\right) e^{-\\eta^2_{CSL}}+\\frac{\\eta^2_{CSL}}{2}-1\\right],\n\\end{equation}\nwhile for the DP model one obtains\n\\begin{equation}\n\\begin{aligned}\n\\Lambda_\\text{DP}&=\\frac{M^{2}G}{2\\hslash \\sqrt{\\pi}R^{3}}\\left[\\sqrt{\\pi}\\operatorname{erf}\\left(\\eta_{DP}\\right)-\\frac{3}{\\eta_{DP}}+\\frac{2}{\\eta^{3}_{DP}}\\right.\\\\\n&\\left.+\\frac{e^{-\\eta^2_{DP}}}{\\eta_{DP}}\\left(1-\\frac{2}{\\eta^{2}_{DP}}\\right)\\right].\n\\end{aligned}\n\\end{equation}\nHere, $m_0$ is the nucleon reference mass, and we have used the dimensionless parameters $\\eta_{CSL}=R\/r_c$ and $\\eta_{DP}=R\/R_0$, with $R_0$ a free parameter that is characteristic of the DP model~\\cite{bahrami2014role}. These expressions can then be used to set bounds on, respectively, the CSL and DP parameters with space-based experiments, as we discuss next.\n\n\n\n\\subsection{Long free-fall times: opportunities and challenges for space-based experiments}\nA possible space-based experiment, as envisioned in the MAQRO proposal and the QPPF, is as follows. A nanosphere is initially trapped by a harmonic optical potential and its center-of-mass motion is optically cooled. The trapping is then removed and the nanosphere remains in free fall for a time $t$, after which its position is measured.
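\n\nBefore turning to the measurement stage, it is instructive to evaluate Eq.~(\\ref{variance}) numerically for the CSL contribution. The following minimal sketch does so for an illustrative $60\\,$nm-radius silica sphere after $t=100\\,$s of free fall, using the expression for $\\Lambda_\\text{CSL}$ given above; all numerical choices are illustrative rather than optimized mission values.\n\\begin{verbatim}\nimport numpy as np\n\nHBAR = 1.054571817e-34       # J s\nAMU = 1.66053906660e-27      # kg, also used as the nucleon mass m0\n\ndef lambda_csl(lam, r_c, radius, mass):\n    # Diffusion rate Lambda_CSL for a homogeneous sphere (see text).\n    eta = radius \/ r_c\n    bracket = (1 + eta**2 \/ 2) * np.exp(-eta**2) + eta**2 \/ 2 - 1\n    return 6 * lam * mass**2 * bracket \/ (AMU**2 * radius**2 * eta**4)\n\nradius, t = 60e-9, 100.0\nmass = 2200 * 4 \/ 3 * np.pi * radius**3     # fused silica, ~2200 kg\/m^3\nfor name, lam, r_c in [('GRW', 1e-16, 1e-7), ('Adler', 1e-8, 1e-7)]:\n    Lam = lambda_csl(lam, r_c, radius, mass)\n    spread = np.sqrt(2 * Lam * HBAR**2 * t**3 \/ (3 * mass**2))\n    print(f'{name}: Lambda = {Lam:.2g} \/(m^2 s), '\n          f'CSL-induced spread = {spread:.2g} m')\n\\end{verbatim}\nFor these inputs, the GRW parameters yield a CSL-induced spread of a few microns, while Adler's values yield centimeters; whether such spreads are actually resolvable is dictated by the competing environmental contributions to $\\Lambda$ discussed below.\n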
Achieving a high position resolution is possible by, for example, combining a coarse-grained standard optical detection on a CMOS chip with a high-resolution backscattering detection scheme~\\cite{Tebbenjohanns2019a}, which could eventually provide a position accuracy on the order of $\\varepsilon=10^{-12}\\,$m at a typical bandwidth of 100\\,kHz, by controlling the measurement back-action~\\cite{vovrosh2017parametric}.\nBy repeating such a procedure $N$ times, one can reconstruct the position spread $\\sigma^{2}$ and thus quantify the effects of the non-unitary dynamics through Eq.~(\\ref{variance}). To detect effects such as those predicted by the CSL or the DP model, one needs to minimize the competing standard decoherence effects (from collisions and blackbody radiation), which contribute to the total $\\Lambda$ in Eq.~(\\ref{variance}). \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{CSLnonint.pdf}\n \\caption{Exclusion plots for the CSL parameters $\\{r_c,\\lambda\\}$ from non-interferometric experiments. The solid red line represents the bound on the CSL parameters that can potentially be achieved through non-interferometric experiments in space with the parameters in Table~\\ref{tab:parameterssmall}. Here, the main limitation is due to the environmental conditions of pressure and temperature. The dashed red line indicates the upper bound that could be obtained by decreasing the pressure to $P=3 \\times 10^{-14}$~Pa, so that the main limitation would be represented by the statistical error. These bounds are compared to the strongest bounds and corresponding excluded parameter regions present in the literature: X-ray emission (blue region)~\\cite{piscicchia2017csl}, LISA Pathfinder (green region) \\cite{carlesso2016experimental,carlesso2018non}, multilayer cantilever (brown region) \\cite{vinante2020narrowing}. The gray region is the theoretical lower bound, which is estimated by requiring the collapse to become effective at the mesoscopic scale where the quantum-to-classical transition is expected~\\cite{torovs2017colored}. }\n \\label{fig:noninterf}\n\\end{figure}\nWe are now in a position to estimate the bounds on the CSL parameters. To do this,\nwe employ the values in Table~\\ref{tab:parameterssmall}. We consider silica nanospheres with a 120\\,nm diameter as test particles and an\ninternal temperature fixed at 40\\,K. \nMoreover, we also assume levels of vibrational noise similar to those obtained in LISA Pathfinder~\\cite{PhysRevLett.120.061101}. With these assumptions, the strongest effect competing with the CSL noise is collisional decoherence, which limits the bounds on the CSL parameters. Such a bound is indeed obtained by setting $\\Lambda_\\text{CSL}$ equal to the collisional contribution to the diffusive constant $\\Lambda$, whose form is reported in Ref.~[\\citeonline{PhysRevA.84.052121}]. We show the corresponding bound as the solid red line in Fig.~\\ref{fig:noninterf}, where it is compared to ground-based bounds achieved by state-of-the-art experiments on the CSL model.\n\nFor what concerns the DP model, state-of-the-art experimental bounds indicate that the free parameter $R_0$ is limited to~\\cite{donadi2020underground} $R_0\\geq R_0^*= 0.5 \\times 10^{-10}$\\,m.
Because the DP-induced collapse becomes stronger for smaller $R_0$, the maximum effect is obtained for~$R_0^*$. Such a value of $R_0$ leads to a position spread in the aforementioned set-up of $\\sqrt{\\braket{\\Delta\\sigma^2}}\\sim3\\times 10^{-26}\\,$m for $t=100\\,$s, far below the state-of-the-art position measurement sensitivity $\\varepsilon$ and thus out of experimental reach.\n\n\n\nAn important aspect to consider for experiments performed in space is their limited lifetime. Especially when one considers long free-fall times, this will have an impact on the statistical accuracy with which one can determine the variance of the measured data points~\\cite{kaltenbaek2016macroscopic,QPPF,Kaltenbaek2021a}. The long free-fall time $t$ required to see potential deviations from the quantum predictions has to compete with a finite time $T$ available to take the complete data set. At best, the number of data points can be $N=T\/t$.\nThis limit on the number of data points implies a statistical uncertainty in determining the position spread. To quantify it, we assume that the initial quantum state of the test particle is the ground state of a harmonic oscillator with a mechanical frequency $\\omega$. Consequently, the measured position will be normally distributed and the corresponding fractional uncertainty of the variance of the measured position will be~\\cite{Taylor1997a} $\\sqrt{2\/(N-1)}\\approx \\sqrt{2 t\/T}$.\nAssuming the deviations from the quantum predictions to be small, the statistical uncertainty\nof the variance is $\\Delta x^2_f\\approx \\sqrt{2 t\/T} x^2_s$, where $x^2_s\\approx t^2\\omega\\hslash\/(2m)$ is the variance of the wavepacket predicted by quantum physics for times much longer than $1\/\\omega$. \nBy taking $\\omega=10^{5}\\,$Hz for the trap frequency, a free evolution time $t=100\\,$s and a total time $T$ of 30 days, we have that $\\Delta x_f\\sim 3\\times 10^{-5}\\,$m, which has to be compared to the sensitivity $\\varepsilon\\sim10^{-12}\\,$m. For these parameters, the statistical uncertainty will dominate over the position sensitivity $\\varepsilon$ already after about~\\cite{Kaltenbaek2021a} $0.1\\,$ms. \nSuch a statistical uncertainty becomes a fundamental limitation for the experiment. The corresponding upper bound on the CSL parameters is represented by the dashed red line in Fig.~\\ref{fig:noninterf}. To reach such a limit, the pressure would need to be reduced by more than two orders of magnitude, down to $P=3 \\times 10^{-14}$\\,Pa, with respect to the conditions setting the continuous red bound. \n\nFig.~\\ref{fig:noninterf} and the analysis above suggest that non-interferometric experiments performed with the parameters in Tab.~\\ref{tab:parameterssmall} can only partially enhance the exploration of the CSL parameter space. A more substantial improvement would require solving technical challenges, such as a significant pressure reduction. Alternatively, one can pursue the path of interferometric experiments, which is discussed in the next section.\n\n\n\n\n\\section{Interferometric tests}\\label{ProofOfPrincipleImlementation}\n\nHere, we will provide an overview of the current state-of-the-art proposals for interferometric experiments testing the superposition principle of quantum mechanics, at masses higher than the current ground-based experimental record, by using a space environment.
We will discuss the challenges faced by such experiments, and we will provide novel simulation results estimating the interference visibility expected in space-based experimental tests of the superposition principle of quantum mechanics. \n\n\\subsection{Near-field interferometry}\n\nAfter Clauser envisioned its use for \\textit{small rocks and live viruses} experiments~\\cite{clauser1997broglie}, and after its initial demonstration in C$_{70}$ molecule interferometry~\\cite{PhysRevLett.88.100404}, for almost two decades the most successful technique harnessed for interferometric tests of quantum physics has been near-field interferometry~\\cite{RevModPhys.84.157,arndt2014testing}.\nWith this technique, and employing three optical gratings~\\cite{gerlich2007kapitza}, in 2019 the Arndt group in Vienna was able to successfully create and demonstrate the spatial quantum superposition of large molecules with masses beyond~\\cite{fein2019quantum} $10^4$\\,amu.\n\n\nRecently, the possibility to consistently describe the effects of an optical grating on large dielectric particles with radii comparable to the optical wavelengths~\\cite{Nimmrichter2014a,PhysRevA.100.033813} has opened the possibility of using optical gratings to study quantum interference with even larger particles. At the same time,\nconcrete proposals to go beyond the current mass record, employing individually addressed dielectric particles and a single optical grating~\\cite{bateman2014near,kaltenbaek2016macroscopic,QPPF}, have shown the experimental viability of near-field interferometry for actually performing such larger-mass superposition experiments.\n\nWe thus focus specifically on these implementations to give an overview of how a near-field interferometric scheme works.\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=\\columnwidth]{figCommPhys_NFv2.pdf}\n\\caption{{\nSchematic representation of the space-based near-field interferometry experiments referred to in the text. While for ground-based experiments the time axis also corresponds to the vertical position of the particle, in space-based experiments the particle remains in the same position (relative to the experimental apparatus) and the trapping laser, the grating laser and a final measurement laser must be activated in turns at different times.}}\n\\label{figInonI}\n\\end{figure}\nWe refer to Fig.~\\ref{figInonI} for a schematic representation of a single-grating near-field set-up. Contrary to the case of lighter systems, where molecular beams are engineered, each nanoparticle in the experiment is individually addressed. We thus have, at each run of the experiment, five main stages: \n\n\\begin{itemize}\n \\item[\\textbf{(a)}] The nanoparticle is trapped and cooled down in an optical cavity for a time $t_c$, after which the center-of-mass degree of freedom is in a very low-temperature thermal state characterised by the momentum and position variances $\\sigma_p,\\,\\sigma_z$. No cooling down to the ground state is required. \n \\item[\\textbf{(b)}] The particle is released and freely falls for a time $t_1$. \n During this time, residual gas collisions and thermal radiation are the main sources of decoherence. The free evolution of the post-cooling state needs to guarantee that the coherence length is sufficient to cover at least two adjacent ``slits'' of the optical grating.\n \\item[\\textbf{(c)}] A retro-reflected pulsed laser provides a pure-phase grating~\\cite{Nimmrichter2014a} for the dielectric nanoparticle.
Scattering and absorption of grating photons constitute the main decoherence channels in the short interaction time with the grating.\n \\item[\\textbf{(d)}] A second period of free evolution, for a time $t_2$, during which the same sources of decoherence as in point (b) act. This stage has to last long enough for the interference pattern to form.\n \\item[\\textbf{(e)}] The position of the particle is measured via optical detection~\\cite{kaltenbaek2016macroscopic,QPPF}. \n\\end{itemize}\nBy repeating this protocol \\textbf{(a-e)} many times, an interference pattern can form in the measured position distribution. This pattern can be mathematically described by a probability distribution function $P(z)$ which can be analytically derived from a phase-space treatment of the interferometric experiment~\\cite{bateman2014near,Nimmrichter2014a}:\n\\begin{equation}\n\\label{pattern}\n\\frac{P\\left(z\\right)}{\\delta}= \n1+2\\sum_{n=1}^{\\infty} R_{n} \\,B_{n} \\left[\\frac{n dt_{2}}{t_{T}D}\\right] \\cos\\left(\\frac{2 \\pi n z}{D}\\right)e^{-2\\left(\\frac{n\\pi\\sigma_{z}t_{2}}{Dt_1}\\right)^2},\n \\end{equation}\nwhere $\\delta=m\/\\left(\\sqrt{2 \\pi} \\sigma_{p}(t_{1}+t_{2})\\right)$, $t_T= md^2\/h$ is the Talbot time and $D=d(t_1+t_2)\/t_1$ is a geometric magnification factor. In this last expression, the $B_{n}$'s are known as the generalized Talbot coefficients~\\cite{Nimmrichter2014a,PhysRevA.70.053608} and account for the coherent and incoherent effects of the optical grating, while the kernels $R_{n}$ account for environmental decoherence, due to absorption, emission, and scattering of thermal radiation and collisions with residual gas, during the free-fall times $t_1, t_2$. \nThis expression remains formally unchanged when classical particles following ballistic trajectories are considered, but the explicit expressions for the decoherence kernels and Talbot coefficients will change. \nExpression \\eqref{pattern}, with the proper coefficients, can thus be used to describe the classical shadow pattern arising from a completely classical description of the system (see Fig.~\\ref{fig:carpet}). \nFinally, non-linear modifications of quantum mechanics -- but also other sources of positional decoherence, such as a stochastic gravitational-wave background~\\cite{bassi2017gravitational,asprea2019gravitational} -- can easily be included in Eq.~\\eqref{pattern} by introducing their respective noise kernels $R_n$. We refer the interested reader to Refs.~[\\citeonline{PhysRevA.100.033813,gasbarri2020prospects}] and the Supplemental Material (SM)~\\cite{SI} for a detailed derivation and explicit expressions of the functions entering Eq.~\\eqref{pattern}.\n\nIn order to go beyond the current near-field interferometry mass record, large particles need to be used. Here, ``large'' refers to spherical particles with a radius $R$ comparable to or greater than the grating period $d$, such that $kR\\gtrsim 1$, where $k=2\\pi\/\\lambda$ is the wave-vector of the optical grating. In the following, we will use the formalism developed in Ref.~[\\citeonline{PhysRevA.100.033813}], which is based on Mie scattering theory, to account for a large particle traversing an optical grating.
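\n\nTo make Eq.~\\eqref{pattern} concrete, the following minimal sketch evaluates the idealized, decoherence-free pattern for a point-like particle, setting all decoherence kernels $R_n=1$ and using the pure-phase Talbot coefficients $B_n(\\xi)=J_n\\left(\\phi_0\\sin(\\pi\\xi)\\right)$ of the eikonal treatment (cf. Refs.~[\\citeonline{Nimmrichter2014a,bateman2014near}]). All numerical values are illustrative only.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.special import jv\n\nphi0 = 3.0          # eikonal grating phase (illustrative)\nd = 100e-9          # grating period (m)\nt1 = t2 = 10.0      # free-fall times before\/after the grating (s)\nt_T = 25.0          # Talbot time m d^2\/h (s); ~1e9 amu for this d\nsigma_z = 1e-9      # position spread of the initial state (m)\n\ndef pattern(z, n_max=30):\n    # Decoherence-free P(z)\/delta of Eq. (pattern), with R_n = 1.\n    D = d * (t1 + t2) \/ t1                   # geometric magnification\n    out = np.ones_like(z)\n    for n in range(1, n_max + 1):\n        xi = n * d * t2 \/ (t_T * D)\n        b_n = jv(n, phi0 * np.sin(np.pi * xi))\n        damp = np.exp(-2 * (n * np.pi * sigma_z * t2 \/ (D * t1))**2)\n        out += 2 * b_n * np.cos(2 * np.pi * n * z \/ D) * damp\n    return out\n\nz = np.linspace(-5e-7, 5e-7, 2001)\nP = pattern(z)\nprint(f'fringe period D = {d*(t1+t2)\/t1:.1e} m, '\n      f'contrast = {(P.max()-P.min())\/(P.max()+P.min()):.2f}')\n\\end{verbatim}\nReplacing these idealized coefficients with the finite-size Talbot coefficients and decoherence kernels of the SM~\\cite{SI} turns this sketch into the full quantum-versus-classical comparison of Fig.~\\ref{fig:carpet}.\n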
For what concerns the pure-phase character of the grating -- i.e., its coherent effect on the particle's state -- it can be shown that the unitary evolution of the particle's state $\\hat{\\rho}$ (reduced along the longitudinal direction $z$) when traversing the grating assumes, in the eikonal approximation, the form $\\bra{z}\\rho\\ket{z'}\\rightarrow \\exp\\left[{-i\\phi_0 (\\cos^2{kz}-\\cos^2{kz'})}\\right]\\bra{z}\\rho\\ket{z'},$\nwhere $\\phi_0$ is the eikonal phase factor characterising the coherent evolution. This is the same as in the case of a point-like particle, and the only difference introduced by the use of Mie scattering theory~\\cite{mie1908beitrage,bohren2008absorption} is found in the structure of the eikonal phase $\\phi_0$, which can be expressed as \n\\begin{equation}\\label{phi0mie} \n \\phi_0=\\frac{8F_0 E_L}{\\hslash c \\epsilon_0 a_L k|E_0|^2},\n\\end{equation}\nin terms of the laser and particle parameters. Here, $c\\epsilon_0|E_0|^2\/2$ is the intensity parameter of the incident light, $E_L$ and $a_L$ are the grating laser energy and spot area respectively, and $F_0$ is obtained from Mie theory upon the evaluation at $z=-\\lambda\/8$ of the longitudinal conservative force acting on the particle (see Refs.~[\\citeonline{Nimmrichter2014a,PhysRevA.100.033813}] and references therein). For a point-like particle, Eq.~\\eqref{phi0mie} reduces to the well-known result $\\phi_0=2 \\mathcal{R}(\\chi)E_L\/(\\hslash c\\epsilon_0 a_L)$, with $\\chi$ the polarizability. For what concerns the incoherent effects of the grating, the finite size of the particles leads to modified Talbot coefficients with respect to the point-like case. We refer the reader to Ref.~[\\citeonline{PhysRevA.100.033813}] and the SM~\\cite{SI} for further details.\n\n\n\n\n\\begin{figure}\n \\centering\n\\includegraphics[width=1.\\columnwidth]{Capetv2.pdf}\n\\caption{Talbot carpet arising from a pure phase-grating without any source of decoherence for a point-like particle. This picture shows the comparison between the interference pattern predicted by quantum mechanics, $P(z)$ in Eq.~\\eqref{pattern} (on the left), and the shadow pattern that is formed by classical particles following ballistic trajectories (on the right) for different values of $\\tau=t_2\/t_T$. It is then apparent that, in order to claim the observation of a quantum superposition, we need to be able to distinguish between these two patterns.}\\label{fig:carpet}\n\\end{figure}\n\nFinally, as discussed in Ref.~[\\citeonline{gasbarri2020prospects}], near-field interferometric experiments with large particles present several technical challenges. Common to both ground- and space-based experiments is the challenge of diminishing as far as possible any environmental noise that would suppress the interference pattern. This can be achieved by a combination of ultra-high vacuum and cryogenic conditions. Moreover, for experiments aiming at using single particles in several ($\\sim 10^4$) runs, fast reloading\/recycling techniques must be developed~\\cite{grass2016optical,mestres2015cooling,PhysRevLett.121.063602,QPPF}.\nOn top of these challenges, the key limitation for ground-based experiments is the short free-fall time. This is due to Earth's gravitational field and limits such experiments to a few seconds of free evolution. While this challenge can be overcome in principle, it will require a substantial modification of the scheme to go beyond masses of the order of $10^7$~amu~\\cite{pino2018chip,gasbarri2020prospects}.
This is not the case for space-based experiments, where current estimates show the promise of reaching masses of the order of $10^9-10^{11}$~amu and free-fall times of the order of hundreds of seconds~\\cite{kaltenbaek2016macroscopic,QPPF}. In the following section, we substantiate these claims by presenting an optimized analysis of space-based near-field interferometry showing the actual possibilities offered by a space environment. \n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[width=1.\\textwidth]{Full_tablev2.pdf}\n\\caption{Values of the cost functions $\\aleph_{QC}$ and $\\aleph_{QCSL}$ as a function of $t_1, t_2$. The triangular area of the density plot is determined by the constraint $t_1+t_2\\leq 100$~s. The different columns correspond to five different values of the mass of the nanoparticles considered, respectively $10^{7},10^{8},10^{9},10^{10},10^{11}$~amu. The first row shows $\\aleph_{QC}$, i.e., how distinguishable the quantum and classical interference figures are. These figures are obtained by choosing as $E_L\/a_L$ the value that maximizes $\\aleph_{QC}$ in the physically feasible range between $10^{-6}$ and $5$~J\/m$^{2}$. For the numerical values of $E_L\/a_L$ employed we refer to Fig.~2 in the SM~\\cite{SI}. The second row shows the values of $\\aleph_{QCSL}$, i.e., how distinguishable the quantum interference figure is from the one accounting for the CSL. These figures are obtained by assuming the same values of $E_L\/a_L$ used in the first row and for a value of the CSL parameters proposed by Adler and given by $\\lambda_\\text{CSL} =10^{-8}$~s$^{-1}$ and $r_c=10^{-7}$~m. Finally, the third row shows the values of $\\aleph_{QCSL}$ as in the second row where, however, the values of $E_L\/a_L$ used are the ones that maximize $\\aleph_{QCSL}$ independently from the results in the first row of figures. For the numerical values of $E_L\/a_L$ employed we refer the interested reader to Fig.~1 in the SM~\\cite{SI}. }\\label{fig:QMCSLYet}\n\\end{figure*}\n\\subsection{Optimization for large particles: the current frontiers}\nWe present in this section the results of a numerical investigation of the possibilities offered by space-based experiments in conjunction with near-field interferometry as discussed in the previous section. We employ the formalism developed in Ref.~[\\citeonline{gasbarri2020prospects}] to account for the finite size of the particles with respect to the grating period, and we use the experimental parameters, as summarized in the SM~\\cite{SI}, which have been extracted from the QPPF study about the MAQRO mission~\\cite{QPPF}. We are able to include in our analysis all the major known sources of environmental decoherence which can affect the interference pattern. In particular, we account for scattering and absorption of grating photons at stage \\textbf{(c)} of the protocol, and for residual gas collisions and black-body thermal radiation decoherence during the free-evolution stages \\textbf{(b)} and \\textbf{(d)}. We then include the effect of modifications of quantum mechanics in the form of the Continuous Spontaneous Localization (CSL) model with white noise~\\cite{RevModPhys.85.471}. What we present here is the first fully consistent analysis of such a set-up and of its potential for fundamental physics studies that does not rely on the Rayleigh approximation, which cannot be consistently used except for order-of-magnitude estimates.
\n\nOne complication of near-field interferometry -- in contrast with the textbook case of far-field interferometry -- that needs to be taken into account when performing an experiment is that perfectly classical particles following ballistic trajectories through the optical grating would also form an interference-like figure known as the Moir\\'{e} shadow pattern~\\cite{RevModPhys.84.157} (see Fig.~\\ref{fig:carpet}). It is thus of crucial importance to discriminate between this pattern and a quantum mechanical one~\\cite{PhysRevLett.88.100404}. This is a prerequisite both for claiming to be able to test the superposition principle and for any analysis of modifications of standard quantum mechanics. Thus, we introduce a first figure of merit ($\\aleph_{QC}$) to estimate the ``distance'' between the probability density function (pdf) of the quantum interference pattern and the pdf of the shadow pattern which would result from classical mechanics,\n\\begin{align}\\label{alQC}\n\\aleph_{QC}= \\frac{1}{L}\\int_{-L\/2}^{L\/2} \\frac{|P_{\\rm Q}(z)-P_{\\rm Clas}(z)|}{|P_{\\rm Q}(z)+P_{\\rm Clas}(z)|} dz\n\\end{align}\nwhere $L = 10^{-7}$~m is the spatial window in which the position measurement is performed and $P_{\\rm Clas}$ ($P_{\\rm Q}$) is the pdf predicted by classical (quantum) mechanics. A similar quantity can then be obtained to discriminate between a quantum interference pattern and the pattern deriving from modifications of quantum mechanics. We will focus here on the CSL model with white noise. Thus, the second figure of merit that we will employ is $\\aleph_{QCSL}$, which is given by Eq.~\\eqref{alQC} with $P_{{\\rm Clas}}\\rightarrow P_{{\\rm CSL}}$. \n\nIn the following, we assume that values of $\\aleph\\ge 0.05$ (i.e., differences larger than $5\\%$) can be discriminated, which appears to be an experimentally justifiable choice~\\cite{juffmann2013experimental}. Moreover, we optimize over the parameters $t_1,t_2, E_L\/a_L$ of the set-up, which can be easily controlled, to maximize the figures of merit. As we will see, the optimization leads to values of the figures of merit well above the $5\\%$ threshold. Before illustrating the results of the analysis, let us comment on the choice of parameters. On the one hand, the free-fall times $t_1,t_2$ are extremely important in the formation of the interference figure: $t_1$ guarantees a sufficient spreading of the initial state to a coherence length covering more than two ``slits'', while $t_2$ allows the interference to happen. These two times can also be easily adjusted in a space-based experiment by simply changing the activation times of the grating and measurement lasers. On the other hand, the parameter $E_L\/a_L$ enters directly in Eq.~\\eqref{phi0mie} and thus determines the pure-phase coherent effect of the grating. This parameter can also be easily tuned, being a property of the way the grating laser is operated. We instead keep fixed all the other parameters entering our analysis (see the table reported in the SM~\\cite{SI}). These are: the wavelength of the grating laser, which is dictated by current technological possibilities; the material parameters of the nanosphere, for which we consider silica (SiO2) particles, widely employed in optomechanical experiments for their optical properties; and the environmental parameters, which have been extracted from the QPPF study~\\cite{QPPF} and represent the current state-of-the-art for space-based set-ups. 
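\n\nAs an aside, the figure of merit in Eq.~\\eqref{alQC} is straightforward to evaluate numerically once the two patterns are available on a grid. The following minimal sketch illustrates this; the two Gaussian toy patterns are placeholders standing in for the simulated quantum and classical pdfs.\n\\begin{verbatim}\n# Minimal sketch of Eq. (alQC): mean relative deviation between two pdfs\n# sampled on the detection window of width L. The Gaussian toy patterns\n# below are placeholders for the simulated quantum\/classical pdfs.\nimport numpy as np\n\ndef aleph(P_a, P_b, z):\n    """(1\/L) * integral of |P_a - P_b| \/ |P_a + P_b| over the window z."""\n    L = z[-1] - z[0]\n    return np.trapz(np.abs(P_a - P_b) \/ np.abs(P_a + P_b), z) \/ L\n\nz = np.linspace(-50e-9, 50e-9, 2001)            # L = 1e-7 m window\nP_q = np.exp(-0.5 * (z \/ 20e-9) ** 2)           # toy "quantum" pattern\nP_c = np.exp(-0.5 * ((z - 5e-9) \/ 22e-9) ** 2)  # toy "classical" pattern\nP_q, P_c = P_q \/ np.trapz(P_q, z), P_c \/ np.trapz(P_c, z)\nprint(aleph(P_q, P_c, z) >= 0.05)  # True if above the 5% threshold\n\\end{verbatim}\n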
Furthermore, again referring to the QPPF study on the stability of a possible mission's spacecraft, we constrain the total free-fall time to $t_1+t_2\\leq 100$~s. Note that the QPPF study concludes that, due to vacuum restrictions, the interference pattern for the proposed MAQRO mission would be visible for free-fall times of up to 40~s. However, the 100~s benchmark is among the scientific objectives of the community, as reported in the QPPF. We thus choose to present our results with this constraint on the times. Nonetheless, our analysis shows that a free-fall time of 100~s would be achievable within the parameters of the QPPF without spoiling the interference pattern. \n\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[width=1.5\\columnwidth]{CSL_MAQROfinal.pdf}\n\\caption{Exclusion plots for the CSL parameters $\\{r_c,\\,\\lambda_\\text{CSL} \\}$ from interferometric experiments. Black points, with their error bars, represent GRW's and Adler's theoretical values for these parameters~\\cite{ghirardi1986unified,adler2007collapse,adler2007corrigendum}.\nThe solid lines, and their respective excluded areas, show the upper bound that can be obtained by near-field interferometric experiments in space with the parameters used in our simulations. In particular, the red line is obtained with $m=10^7$~amu, $t_1=t_2=12$~s and $E_{L}\/a_{L} = 1.1\\times 10^{-2}$~J\/m$^2$; the dark-green one with $m=10^8$~amu, $t_1=t_2=10$~s and $E_{L}\/a_{L} = 3.5\\times 10^{-4}$~J\/m$^2$; the green solid line with $m=10^9$~amu, $t_{1}=t_{2}=10$~s and $E_{L}\/a_{L} = 8.7\\times 10^{-6}$~J\/m$^2$; the blue one with $m=10^{10}$~amu, $t_{1}=t_{2}=50$~s and $E_{L}\/a_{L} = 8.7\\times 10^{-6}$~J\/m$^2$; and, finally, the orange solid line with $m=10^{11}$~amu, $t_{1}=t_{2}=50$~s and $E_{L}\/a_{L} = 2.2\\times 10^{-5}$~J\/m$^2$. \nFor comparison, the blue dot-dashed line shows the upper bound that could be achieved with a ground-based interferometric experiment with $m=10^7$~amu and times $t_1=t_2=9.95$~s -- free-fall times which are clearly beyond current reach for ground-based experiments -- as discussed in a recent study by four of the authors~\\cite{gasbarri2020prospects}.\nFinally, the dashed grey line represents the upper bounds given by current non-interferometric tests~\\cite{vinante2020narrowing,piscicchia2017csl,carlesso2016experimental,vinante2016upper}, the black dotted lines on the top of the figure are the current upper bounds coming from interferometric tests~\\cite{fein2019quantum,carlesso2019collapse,torovs2017colored,kovachy2015quantum}, and the grey area at the bottom of the plot represents the theoretical lower bound provided in Ref.~\\citeonline{torovs2018bounds}.}\\label{fig:csl_excl}\n\\end{figure*}\nAs outlined above, the first step in the analysis is to consider when $\\aleph_{QC}$ is large enough to guarantee the possibility of certifying a quantum mechanical interference pattern, and then to consider the corresponding $\\aleph_{QCSL}$. Figure~\\ref{fig:QMCSLYet} shows the results of our numerical investigation in this respect. The panels in the first row show the values of $\\aleph_{QC}$, i.e., the distance between the classical shadow pattern and the quantum interference one, for particle masses $\\{10^7,10^8,10^9,10^{10},10^{11}\\}$~amu as a function of $t_1,t_2$ and for the values of $E_L\/a_L$ which maximize the distinguishability. The latter are reported, as a function of $t_1,t_2$, in the SM~\\cite{SI} (see Fig.~2 therein). 
From the first row of Fig.~\\ref{fig:QMCSLYet} we see that $\\aleph_{QC}$ takes values well above the experimentally justifiable threshold of $5\\%$ for free-fall times $t_1+t_2\\leq 100$~s, opening the way to direct tests of the quantum superposition principle with mesoscopic quantum systems in large spatial superpositions. The panels in the second row of Figure~\\ref{fig:QMCSLYet} show instead the comparison between the quantum interference pattern and the one which would arise if the CSL noise -- with parameters chosen at $\\lambda_\\text{CSL} =10^{-8}$~s$^{-1}$ and $r_c=10^{-7}$~m as proposed by Adler~\\cite{adler2007lower} -- were present. The panels in the second row are obtained by evaluating the cost function $\\aleph_{QCSL}$ at the same values of $E_L\/a_L$ used for the upper row, i.e., the values that, at fixed $\\{t_1,t_2\\}$, maximize the quantum-to-classical distinguishability. \nIt should be noted that, for the comparison between CSL and quantum mechanics, we do not need to restrict our attention to the values of the parameter $E_L\/a_L$ that maximize the classical-quantum distinguishability. Indeed, by direct inspection of the interference figures it can be deduced that, in general, the classical and CSL patterns are quite different, as long as they are not both flattened out by the effects of the noise (environmental or fundamental). This means that we can look for other parameter values which increase the distance between the quantum and CSL patterns. We show this in the third row of Fig.~\\ref{fig:QMCSLYet}, where we report the values of $\\aleph_{QCSL}$ at the values of $E_L\/a_L$ which maximize it. As can be seen, the difference with respect to the panels of the second row is not large, except for very light masses, meaning that the combined maximum distinguishability is nearly achievable. \n\\\\\nFinally, Figure~\\ref{fig:csl_excl} extends the previous analysis to the whole parameter space of the CSL model. This exclusion plot is obtained for values of the parameters $t_1,t_2,E_L\/a_L$ which maximize the distinguishability between the quantum and CSL predictions, i.e., $\\aleph_{QCSL}$, as shown in the third row of Figure~\\ref{fig:QMCSLYet}. The solid lines in Figure~\\ref{fig:csl_excl} show the upper bounds that could be achieved with space-based near-field interferometry experiments with particle masses up to $10^{11}$~amu. As can be seen, already the use of $10^9$~amu particles (green solid line) has the potential to rule out collapse models even beyond the values GRW originally proposed for the parameters, a feat that is outside the reach of current experiments. \nThis is one of the main results of this work. It shows that space-based near-field experiments hold the promise to push tests of quantum mechanics -- and of collapse models -- well beyond what is possible with ground-based experiments, and have the ability to directly access a large and unexplored area of the parameter space $\\{\\lambda_\\text{CSL},r_{c}\\}$ of the considered modifications of quantum mechanics. \n\n\nIn closing, we should note that, while the analysis presented in this section makes use of the formalism developed to account for the finite size of the particles~\\cite{PhysRevA.100.033813}, and we have included all major sources of decoherence following the technical details laid down in ESA's QPPF report~\\cite{QPPF}, the description of the system still suffers from an unavoidable level of idealization. 
Without entering into the discussion of technical challenges, such as the loading and re-use of the nanoparticles over several runs of the experiment, we can still point out some of the idealizations that enter directly into the simulations of the interferometric set-up. In particular, throughout this work we have assumed: the particles to be perfectly spherical, thus neglecting rotational degrees of freedom; the particles to be homogeneous, which has allowed us to use the formalism~\\cite{PhysRevA.100.033813} derived from Mie-scattering theory; and, finally, the sphere's refractive index to coincide with the bulk-material one tabulated in the literature. This last point is discussed in some detail in Ref.~[\\citeonline{PhysRevA.100.033813}], where it is shown how the coherence properties of the grating interaction strongly depend on the refractive index. It is thus a crucial step for any realization of interferometric space-based experiments with large nanoparticles to conduct preliminary experiments to determine the physical properties of the nanoparticles, with particular reference to their refractive index, which could deviate from that of the bulk material.\n\n\n\n \\section{Conclusion and Outlook}\\label{WishList}\nIn this perspective article, we have discussed the unique possibilities offered by the space environment for investigating the quantum superposition principle with dedicated interferometric and non-interferometric experiments, and for testing quantum mechanics in the parameter regime of large-mass particles, which is impossible to reach on the ground with today's technology. In particular, we have focused our attention on the generation and certification of spatial quantum superpositions of particles with sizes of the order of hundreds of nanometers and the possibilities that this offers for fundamental tests of quantum theory and alternatives thereof~\\cite{millen2020quantum}.\n\nAfter arguing for the advantages offered by space, namely the long free-fall times and the availability of low-noise conditions, we considered two main experimental strategies for fundamental studies in space. The first one is the indirect approach of non-interferometric experiments, which does not require the creation of a spatial superposition. This strategy has proven key in recent work on the ground to test collapse models in otherwise unreachable parameter regimes. The second strategy is the more direct one based on interferometric experiments. Here, near-field interferometry with large dielectric nanospheres is the current powerhouse -- a proven experimental technology -- and shows its full potential when combined with the advantages of the space environment. We have reported a detailed forecast of the potential offered by these techniques based on state-of-the-art parameter values and showed how space-based experiments offer the possibility to both certify the creation of macroscopic superpositions and essentially rule out an entire family of alternative models to standard quantum mechanics. Most importantly, we have not found a fundamental showstopper for performing both interferometric and non-interferometric experiments in space.\n\nNeedless to say, large spatial superpositions of high-mass systems will provide a fine probe for further tests of fundamental physics. 
This includes: the domain of high-energy particle physics beyond the standard model, when it comes to testing candidates for Dark Matter~\\cite{riedel2013direct, bateman2015existence, riedel2017decoherence, carney2019ultralight, carney2020mechanical, monteiro2020search} and possible effects in particle interactions related to Dark Energy~\\cite{khoury2004chameleon, rider2016search, moore2014search}; the low-energy regime of the interplay between quantum mechanics and gravity~\\cite{pikovski2015universal, torovs2017quantum, belenchia2016testing, belenchia2019tests, carlesso2019testing}; precision tests of gravity~\\cite{qvarfort2018gravimetry, hebestreit2018sensing, belenchia2018quantum, bose2017spin, hu2008stochastic, bahrami2014role, bassi2017gravitational, grossardt2016optomechanical}; and tests of the equivalence principle and of general relativity's predictions, such as gravitational waves, in a parameter range complementary to existing experiments such as LIGO, VIRGO, GEO600 and the planned LISA space antenna~\\cite{arvanitaki2013detecting, pontin2018levitated}, as well as frame-dragging effects~\\cite{fadeev2020gravity}. Furthermore, large-mass experiments in space will unavoidably provide a formidable platform for applications in Earth and planet observation~\\cite{middlemiss2016measurement, banerdt2020initial}, where large-mass mechanical systems have already shown a superb capability as force and acceleration sensors~\\cite{millen2020optomechanics,li2018characterization, fadeev2020ferromagnetic, vinante2020ultralow, mitchell2020colloquium, timberlake2019static, hempston2017force, ranjit2016zeptonewton, ahn2020ultrasensitive, geraci2010short}, including in rotational mechanical modes~\\cite{carlesso2018non, stickler2018probing}, as well as for frequency conversion~\\cite{forsch2020microwave}\\footnote{Albeit in different set-ups with respect to the ones considered here.}. \n\nIt is clear that the realisation of large-mass, fundamental physics experiments in space is an immensely challenging project. Therefore, the most important next step is to form a community of scientists, industry and space agencies to define a concrete roadmap for the accomplishment of a successful space mission, by working on fine-tuned theoretical analyses of the experimental conditions, coming up with new proposals to test further new physics in the large-mass regime and, last but not least, pushing the development of technology readiness for space. Such a roadmap should include performing proof-of-principle large-mass experiments in micro-gravity environments -- such as sounding rockets, drop towers and Einstein elevators, space stations, cubesats, and potentially the Moon and Mars -- in alignment with international and national space agencies' plans for future fundamental physics experiments in space. We hope that the results of this work will stimulate the physics community to further investigate the possibilities offered by space-based experiments of this kind. \n\n\n\\section*{Acknowledgements}\nA. Belenchia acknowledges support from the MSCA IF project pERFEcTO (Grant No. 795782) and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) project number BR 5221\/4-1. A. Bassi acknowledges financial support from the INFN, the University of Trieste and the support by grant number (FQXi-RFP-CPW-2002) from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation.\nS. 
Donadi acknowledges financial support from INFN and the Fetzer Franklin Fund. G. Gasbarri acknowledges support from the Spanish Agencia Estatal de Investigaci\\'{o}n, project PID2019-107609GB-I00, from the QuantERA grant C'MON-QSENS!, by Spanish MICINN PCI2019-111869-2, and from COST Action CA15220. R. Kaltenbaek acknowledges support by the Austrian Research Promotion Agency (projects 854036, 865996) and by the Slovenian Research Agency (research projects N1-0180, J2-2514, J1-9145 and P1-0125). M. Paternostro is supported by the DfE-SFI Investigator Programme (grant 15\/IA\/2864), the Royal Society Wolfson Research Fellowship (RSWF\\textbackslash R3\\textbackslash183013), the Leverhulme Trust Research Project Grant (grant nr.~RGP-2018-266), and the UK EPSRC (grant nr.~EP\/T028106\/1). H. Ulbricht acknowledges support from The Leverhulme Trust (RPG-2016-046) and the UK EPSRC (EP\/V000624\/1). A. Bassi, M. Carlesso, M. Paternostro, and H. Ulbricht are supported by the H2020 FET Project TEQ (Grant No. 766900). All the authors acknowledge partial support from COST Action QTSpace (CA15220) and thank Alexander Franzen for the creation of one of the figures in the manuscript.\n\n\\section*{Author contributions}\nG. Gasbarri and A. Belenchia led the development of the project with strong input from M. Carlesso and S. Donadi and with significant contributions from A. Bassi, R. Kaltenbaek, M. Paternostro, and H. Ulbricht. All authors contributed to the preparation of the manuscript and its finalisation.\n\n\\section*{Competing Interests}\nThe authors declare no competing interests.\n\n\n\n\\section{Interferometric Experiments: Theory}\n\n\\subsection{Generalized Talbot Coefficients}\nWe report here the explicit expressions of the generalized Talbot coefficients. We refer the interested reader to Ref.~[\\citeonline{PhysRevA.100.033813}] and references therein for the detailed derivation of these expressions and the analysis of the Talbot effect. \n\nThe generalized Talbot coefficients encode the coherent and incoherent effects of the optical grating on the state of the nanoparticle. Assuming the waist of the standing wave in the grating to be much larger than the matter-wave profile, and for interaction times ($\\tau_{int}$) short compared to the characteristic spreading time of the matter waves, the evolution of the nanoparticle density matrix in the longitudinal position representation is given by\n\\begin{align}\\label{eq:trgrat}\n\\bra{z}\\rho\\ket{z'}\\rightarrow R(z,z')T(z,z')\\bra{z}\\rho\\ket{z'},\n\\end{align}\nwhere \n\\begin{align}\\label{Tco}\n&T(z,z')=t(z)t^{*}(z')=e^{-\\frac{i}{\\hslash}\\int_{0}^{\\tau_{int}}d\\tau [V(z,\\tau)-V(z',\\tau)]},\\\\\n&R\\left(z,z'\\right)= e^{\\int_{0}^{\\tau_{int}} d\\tau \\mathcal{L}_{t}(z,z')}=R_{\\rm{sca}}(z,z')R_{\\rm{abs}}(z,z'),\n\\end{align}\nwith $V(z,t)$ the classical interaction potential. 
The term $T(z,z')$ describes the coherent effect of the grating, while $R(z,z')$ describes the combined incoherent effect of scattering and absorption of grating photons.\nThe generalized Talbot coefficients entering the interference pattern are given by the convolution between the coefficients $B^{\\rm{coh}}_{n}\\left({s}\/{d}\\right)= \\sum_{k}b_{k}b^{*}_{k-n}\\exp\\{{i \\pi (n-2k)s}\/d\\}$ describing the purely coherent grating -- expressed in terms of the Fourier coefficients $b_{n}$ of the transmission function $t(z)$ entering the coherent transmission mask $T(z,z')$ -- and the Fourier coefficients $R_{n}$ of the decoherence mask function $R(z,z')$:\n\\begin{align}\\label{eq:TalbotCoeff}\n{B}_{n}\\left(\\frac{s}{d}\\right)&=\\sum_{j}B^{\\rm{coh}}_{n-j}\\left(\\frac{s}{d}\\right)R_{j}\\left(\\frac{s}{d}\\right)\\\\\n&= e^{F-c_{\\rm{abs}}\/2} \\sum_{k=-\\infty}^{\\infty}\\left(\\frac{\\zeta_{\\rm{coh}}+a+c_{\\rm{abs}}\/2}{\\zeta_{\\rm{coh}}-a-c_{\\rm{abs}}\/2}\\right)^{\\frac{n+k}{2}}J_{k}(b)J_{n+k}\\left(\\sign(\\zeta_{\\rm{coh}}-a-c_{\\rm{abs}}\/2)\\sqrt{\\zeta_{\\rm{coh}}^{2}-(a+c_{\\rm{abs}}\/2)^{2}}\\right).\n\\end{align}\nHere \n\\begin{align}\\label{abF}\nc_{\\rm{abs}}&=\\frac{4\\sigma_{abs}}{h c}\\frac{E_{L}}{a_{L}}\\left[1-\\cos\\left(\\frac{\\pi s}{d}\\right)\\right]\\nonumber\\\\\n\\zeta_{\\rm{coh}} &= \\frac{16F_{z}(-\\lambda\/8) E_L}{\\hslash a_L k I_{0}}\\sin\\left(\\frac{\\pi s}{d}\\right)\\nonumber\\\\\na(s) &= \\int d\\tau \\frac{8\\pi E_{L}}{\\hslash \\omega a_{L}}\\int d \\Omega\\,{\\rm Re}\\big(f^{*}(k , k \\mathbf{n})f(- k , k\\mathbf{n})\\big)[\\cos(k n_{z} s)-\\cos(ks)],\\nonumber\\\\\nb(s)&= \\int d\\tau\\frac{8\\pi E_{L}}{\\hslash \\omega a_{L}}\\int d \\Omega \\,{\\rm Im} \\big(f^{*}(k , k \\mathbf{n})f(- k , k\\mathbf{n})\\big)\\sin(kn_{z}s),\\nonumber\\\\\nF(s)&= \\int d\\tau \\frac{8 \\pi E_{L}}{\\hslash \\omega a_{L}}\\int d \\Omega\\, |f(k,k\\mathbf{n})|^{2} [\\cos((1-n_{z})ks)-1]\n\\end{align}\nwhere $\\lambda= 2\\pi \/ k= 2\\pi c \/\\omega $ is the light wavelength, $E_{L}$ is the grating laser energy, $I_{0}$ and $a_{L}$ are respectively the intensity parameter and the spot area of the laser, $F_{z}(z)$ is the longitudinal light-induced force on the dielectric sphere, $\\sigma_{abs}$ is the photon absorption cross section, and $f(k,k\\mathbf{n})$ is the photon scattering amplitude.\n\n\\subsubsection{Classical Limit of the optical grating}\\label{ClaLim}\nAs discussed in the main text, classical particles following ballistic trajectories through the grating laser would also form a near-field shadow pattern. The effect of the optical grating on classical particles can be described by modifying in a suitable way the generalized Talbot coefficients. In particular, the classical behaviour can be obtained as the limiting case $\\hslash \\to 0$, as outlined in Ref.~[\\citeonline{PhysRevA.100.033813}]. To summarize, the classical analogue of the coherent effect of the grating corresponds to a momentum kick due to the classical optical force acting on a dielectric and, by expanding the trigonometric functions in $a,b, F$ to first order in $\\hslash$, it is easy to see that the only non-vanishing contribution to the classical coefficients arises from the function $b$. \n\n\n\n\\subsection{Environmental Decoherence}\nAs discussed in the main text, our simulations account for several sources of environmental decoherence acting during the free-fall times in the experiment. 
In particular, we account for decoherence due to collisions with the residual gas in the vacuum chamber and to black-body radiation (emission, scattering, and absorption) -- taking into account the heating of the particle (photonic environment) during the trapping period and the subsequent cooling during free fall. Here we report the expression for the kernel $R_{n}$ in Eq.~(3) of the main text that accounts for all these decoherence sources:\n\\begin{align}\n\\ln(R_{n})=& -\\Gamma_{\\text{coll}}(t_{1}+t_{2})+\\int d\\omega \\gamma_{\\text{abs}}(\\omega)\\left[\\frac{\\text{Si}(a_{n})}{a_{n}}-1\\right](t_{1}+t_{2})+\\int d\\omega \\gamma_{\\text{sca}}(\\omega)\\left[\\frac{\\text{Si}(2a_{n})}{a_{n}}-\\text{sinc}^{2}(a_{n})-1\\right](t_{1}+t_{2})\\nonumber\\\\\n&+\\int d\\omega \\int_{0}^{1} d\\theta \\left\\{t_{1}\\gamma_{\\text{emi}}[\\omega,T_{\\text{int}}(t_{1}-t_{1}\\theta)]+t_{2}\\gamma_{\\text{emi}}[\\omega,T_{\\text{int}}(t_{1}+t_{2}\\theta)]\\right\\}[\\text{sinc}(a_{n}\\theta)-1]\n\\end{align}\nwith $a_{n}= n\\,h\\,\\omega\\, t_{2} t_{1} \/ ((t_2+t_1)m\\,c\\,d) $, and $\\text{Si}$ the sine integral. The collision rate $\\Gamma_\\text{coll}$ is given by\n\\begin{align}\n\\Gamma_{\\text{coll}} = \\frac{ 4\\pi \\Gamma(9\/10)}{5\\sin(\\pi\/5)}\\left(\\frac{3\\pi C_{6}}{2\\hslash}\\right)^{2\/5} \\frac{p_{g}v_{g}}{k_{\\text{B}}T_{\\text{env}}}\n\\end{align}\nwhile the scattering, absorption and emission rates are given by\n\\begin{align}\n\\gamma_{\\text{sca\/abs}}(\\omega) =\\frac{ (\\omega \/ \\pi c)^{2}\\sigma_{\\text{sca\/abs}}(\\omega)}{\\exp(\\hslash \\omega\/k_{\\text{B}}T_{\\text{env}})-1},\\hspace{1cm}\n&\\gamma_{\\text{emi}}[\\omega, T_{\\text{int}}]= \\left(\\frac{\\omega}{\\pi c}\\right)^{2}\\sigma_{\\text{abs}}\\exp\\left(-\\frac{\\hslash \\omega}{k_{\\text{B}}T_{\\text{int}}}\\right)\\text{Im}\\left\\{\\frac{\\varepsilon(\\omega)-1}{\\varepsilon(\\omega)+2}\\right\\}\n\\end{align}\nwhere $v_{g}$ and $p_{g}$ are the mean velocity and pressure of the gas, $T_{\\text{env}}$ is the environmental temperature, $\\sigma_{\\text{sca\/abs}}$ is the photon scattering\/absorption cross section, $\\varepsilon(\\omega)$ is the electric permittivity, and \n\\begin{align}\nC_{6} \\simeq \\frac{ 3 \\alpha(\\omega=0) \\alpha_{g} I_{g}I}{32 \\pi^{2}\\varepsilon_{0}^{2}(I_{g}+I)}\n\\end{align}\nis the van der Waals coupling constant, where $\\alpha$, $\\alpha_{g}$ are the static polarizabilities and $I$, $I_g$ the ionization energies of the nanosphere and the gas particle, respectively.\n\nWe refer the reader to the supplemental information of Ref.~[\\citeonline{bateman2014near}] and the references therein for a detailed derivation of these expressions.\n\n\\subsection{CSL Decoherence}\nThe dissipative term describing the effective decoherence of the center-of-mass wavefunction -- for the reduced one-dimensional state of motion of a nanoparticle -- due to the CSL reads, in the position representation,\n\\begin{align}\\label{cslme}\n\\mathcal{L}_{CSL}(x,x')=-\\frac{\\lambda_{CSL}(4\\pi r_C^2)^{3\/2}}{(2\\pi\\hslash)^{3}}\\int d\\mathbf{q}\\,\\frac{\\tilde{\\mu}(\\mathbf{q})^2}{m_0^2}\\,e^{-r_C^2\\mathbf{q}^{2}\/\\hslash^{2}}\\,\\left(e^{-\\frac{i}{\\hslash}{q}_{z}({x}-{x'})}-1\\right).\n\\end{align} 
Here, $m_{0}$ is the nucleon mass, $\\lambda_{CSL}$ and $r_{c}$ are the rate and the localization length of the CSL model, and $\\tilde{\\mu}(\\mathbf{q})$ is the Fourier transform of the nanoparticle mass density $\\mu(\\mathbf{x})$, i.e.,\n\\begin{align}\n\\tilde{\\mu}(\\mathbf{q}):= \\int d\\mathbf{x}\\, e^{-\\frac{i}{\\hslash} \\mathbf{q}\\cdot\\mathbf{x}}\\mu(\\mathbf{x}),\n\\end{align}\nwhich, in the case of a homogeneous and spherical mass distribution of radius $R$ and density $\\rho$, is given by\n\\begin{align}\n \\tilde{\\mu}(\\mathbf{q})= \\frac{4 \\pi \\hslash \\rho R^2}{q} J_{1}(q R\/\\hslash),\n\\end{align}\nwhere $J_{1}$ denotes the spherical Bessel function of the first kind. \nExploiting this equation, rewriting the integral in Eq.~\\eqref{cslme} in spherical coordinates, and performing the integration over the solid angle, the CSL dissipative term simplifies to \n\\begin{align}\n\\mathcal{L}_{CSL}({x},{x}')= \\frac{64 \\pi^{3\/2}\\,\\lambda_{CSL}\\, r_{c}^{3}\\, \\rho^{2} R^{4}}{\\hslash\\, m_{0}^{2}}\\int_{0}^{\\infty} d q\\, e^{-r_{c}^2 q^2\/\\hslash^2} J_{1}(q R\/\\hslash)^{2}\\left(\\text{sinc}(q|{x}'-{x}|\/\\hslash)-1 \\right).\n\\end{align}\nThis results in the following additional kernel entering the expression for the interference pattern:\n\\begin{align}\nR_{n}^{\\text{CSL}}= \\exp\\left\\{ \\Gamma_{\\text{CSL}}\\left(f_{\\text{CSL}}\\left(\\frac{ h\\, n\\, t_1 t_2}{m\\, d (t_1+t_2)}\\right)-1\\right)(t_1+t_2)\\right\\}\n\\end{align}\nwhere $d$ is again the grating period and \n\\begin{align}\n\\Gamma_{\\text{CSL}}&= \\sqrt{\\frac{32}{\\pi}}\\frac{\\lambda_{\\text{CSL}}\\, r_{c}^{3}}{\\hslash^{3} m_{0}^{2}} \\int dq\\, q^2 e^{-r_{c}^{2}q^2\/\\hslash^2}\\tilde{\\mu}(q)^{2} \\nonumber\\\\\nf_{\\text{CSL}}(x)&=\\sqrt{\\frac{32 }{\\pi}} \\frac{\\lambda_{CSL}\\,r_{c}^{3}}{\\hslash^3 m_{0}^{2} \\Gamma_{\\text{CSL}}}\\int dq\\, q^{2} e^{-r_{c}^{2}q^{2}\/\\hslash^{2}} \\tilde{\\mu}(q)^{2} \\,\\text{Si}\\left(\\frac{q x}{\\hslash}\\right) =\\frac{64 \\pi^{3\/2}\\lambda_{CSL}\\, r_{c}^{3}\\, \\rho^{2} R^{4} }{\\hslash m_{0}^{2} \\Gamma_{\\text{CSL}}} \\int_{0}^{\\infty}dq\\, e^{-q^2 r_{c}^{2}\/\\hslash^{2}}J_{1}\\left(\\frac{q R}{\\hslash}\\right)^{2}\\text{Si}\\left(\\frac{qx}{\\hslash}\\right).\n\\end{align}\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1.\\textwidth]{QM-Collapse.pdf}\n\\caption{Comparison between the quantum mechanical prediction and the CSL one. In particular, the first row of panels shows the cost function $\\aleph_{QCSL}$, for increasing mass from left to right, as a function of $t_1, t_2$. The second row shows the values of the $E_{L}\/a_{L}$ ratio that maximize the cost function $\\aleph_{QCSL}$.\n}\\label{fig:csl_cost_del}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=1.\\textwidth]{QMCSL.pdf}\n\\caption{Comparison between quantum mechanics, the classical shadow pattern, and the prediction of the CSL model. The first row shows the value of $\\aleph_{QC}$ for increasing values of the mass, while the second row shows the value of $\\aleph_{QCSL}$. The third row shows the values of $E_{L}\/a_{L}$ that have been used in both the first- and second-row figures. These values of the $E_{L}\/a_{L}$ ratio are the ones which maximize $\\aleph_{QC}$. \n}\\label{fig:QMCSL}\n\\end{figure*}\n\n\n\\section{Interferometric Experiments: Simulations}\nHere we report additional details on the simulations performed for near-field interferometric experiments in space. 
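\n\nBefore turning to the parameters, we note that the kernel $R_{n}^{\\text{CSL}}$ of the previous section can be evaluated with standard quadrature routines. The following minimal sketch (in Python) works in units $\\hslash=h=1$ and leaves the overall rate as a free input carrying the $\\lambda_{CSL}$, $r_{c}$, $R$ and density prefactors of the expressions above; the normalization chosen here makes $f_{\\text{CSL}}\\to 1$ at large separation, so only the $n$-dependence of the kernel is illustrated, and all numerical values in the example call are placeholders.\n\\begin{verbatim}\n# Minimal sketch of the kernel R_n^CSL above, in units hbar = h = 1.\n# Gamma_csl is left as a free rate parameter (it carries the lambda_CSL,\n# r_c, R and mass-density prefactors); only the n-dependence is shown.\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import spherical_jn, sici\n\ndef _weight(q, R, rc):\n    return np.exp(-(rc * q) ** 2) * spherical_jn(1, q * R) ** 2\n\ndef f_csl(x, R, rc):\n    """Normalized kernel function; tends to 1 for large separation x,\n    since Si(q x) -> pi\/2 as x -> infinity."""\n    num = quad(lambda q: _weight(q, R, rc) * sici(q * x)[0], 0, 50 \/ rc)[0]\n    den = (np.pi \/ 2) * quad(lambda q: _weight(q, R, rc), 0, 50 \/ rc)[0]\n    return num \/ den\n\ndef kernel_Rn(n, t1, t2, m, d, R, rc, Gamma_csl):\n    """R_n^CSL = exp{Gamma_csl (f(x_n) - 1)(t1 + t2)}, with x_n the\n    grating-induced path separation n t1 t2 \/ (m d (t1 + t2))."""\n    x_n = n * t1 * t2 \/ (m * d * (t1 + t2))\n    return np.exp(Gamma_csl * (f_csl(x_n, R, rc) - 1.0) * (t1 + t2))\n\n# Illustrative call with placeholder values in natural units\nprint([round(kernel_Rn(n, 10.0, 10.0, 1.0, 1.0, 0.5, 1.0, 0.1), 4)\n       for n in range(4)])\n\\end{verbatim}\n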
In Table~\\ref{table} we report all the relevant parameters used in the simulations leading to the results in Section~IV.\n\nIt should be noted that, in the results shown, we did account for the heating of the nanoparticle while initially optically trapped for a time $t_c$ by assuming an internal temperature of $40$~K after time $t_c$ in an environment at $20$~K. This is a conservative estimate, and we also ran the same simulations for higher and lower values of the initial internal temperature without noticing any significant deviation. \n\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{llll}\n \\hline\n \\textbf{Symbol} & & \\textbf{Name} & \\textbf{Value} \\\\ \n &&&{\\bf or Expression}\\\\\n \\hline\\hline\n Nanosphere\\\\ properties:\\\\ \\hline\n \n $\\rho_{SiO2}$ & & Glass SiO2 density & $1850$~kg\/m$^3$ \\\\\n $c_{m}$ & & Specific heat & 700~J\/(kg K) \\\\\n \n $I$ & & Ionization energy & $5\\times 10^{-19}$~J \\\\\n \n $m$ & & Mass & $10^7$--$10^{11}$~amu \\\\ \n $\\sigma_x$ & & Position variance & $\\sqrt{\\frac{\\hslash}{4\\gamma}\\coth{({\\beta_0\\nu_m})}}$ \\\\\n &&post-cooling &\\\\\n $\\sigma_p$ & & Momentum variance& $\\sqrt{\\hslash\\gamma\\coth{({\\beta_0\\nu_m})}}$ \\\\\n &&post-cooling &\\\\\n \\hline \n Trapping\\\\ parameters: \\\\\n \\hline\n $\\lambda_c$ & & Trap's laser wavelength & 1550~nm \\\\\n $t_c$ & & Trapping and cooling time & 1~s \\\\\n $I_{{\\rm trap}}$ & & Trap's laser intensity & $90\\times 10^9$~W\/m$^2$ \\\\\n $\\nu_{m}$ & & Trap mechanical frequency & $10^5\/2\\pi$~Hz \\\\\n &&--longitudinal direction &\\\\\n $T_{{\\rm int}0}$ & & C.o.m. temperature & $\\sim 5\\times 10^{-6}$~K\\\\\n &&post-cooling &\\\\\n \\hline\n Optical grating\\\\ parameters: \\\\ \\hline\n $\\lambda$ & & Grating laser wavelength & $100$~nm \\\\\n $E_L\/a_L$ & & Pulse-energy per spot-area & $10^{-6}$--$5$~J\/m$^{2}$ \\\\ \n \n \\hline\n Environment\\\\ parameters: \\\\ \\hline\n $T_{\\rm env}$ & & Environmental temperature & 20~K \\\\\n $\\alpha_g\/(4\\pi\\epsilon_0)$ & & Residual gas polarizability & $0.6668\\times 10^{-30}$~m$^3$ \\\\\n $I_g$ & & Residual gas ioniz. energy & $2.17\\times 10^{-18}$~J\\\\\n $m_g$ & & Residual gas mass & 1.00784~amu\\\\\n $v_g$ & & Residual gas mean velocity & $\\sqrt{2k_B T_{\\rm env}\/m_g}$\\\\\n $P_g$ & & Residual gas pressure & $10^{-13}$~mbar \\\\ \\hline\n\\end{tabular}\n\\caption{Summary of all the parameters entering the simulation of the near-field interferometric experiments with dielectric nanospheres. The residual gas in the vacuum chamber is assumed to be composed mostly of hydrogen, of which we use the physical properties. The refractive index, as a function of the frequency, for Si and SiO2 can be found tabulated in the supplementary material of Bateman et al.~\\cite{bateman2014near} (see also references therein). The pulse energy to spot area ratio of the grating laser is related to the eikonal phase by Eqs.~(20-22) in Ref.~[\\citeonline{PhysRevA.100.033813}], to which we refer the reader for additional details.\nWe have set $\\beta_0=h\/(2k_B T_{{\\rm int}0})$ and $\\gamma=\\pi m\\nu_m$.}\\label{table}\n\\end{table}\n\nIn Figure~\\ref{fig:csl_cost_del} we report the values of $\\aleph_{QCSL}$ (first row) for different masses of the nanoparticles as a function of $t_1, t_2$. These correspond to the panels in the third row of Figure~6 in the main text. The panels in the second row of the figure show the values of $E_L\/a_L$ used. 
These are the values that maximize $\\aleph_{QCSL}$.\n\nIn Figure~\\ref{fig:QMCSL} we report instead both the values of $\\aleph_{QC}$ (first row) and those of $\\aleph_{QCSL}$ (second row), obtained using the values of $E_L\/a_L$ shown in the third row, which maximize $\\aleph_{QC}$.\n\n\n\n\\newpage\n