diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlnzt" "b/data_all_eng_slimpj/shuffled/split2/finalzzlnzt" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlnzt" @@ -0,0 +1,5 @@ +{"text":"\n\n\\section{Introduction}\n\\label{sec:introduction}\n\nWidespread adoption of Machine Learning (ML) for critical tasks brought along the question of {\\em trust}: Are ML models robust in making correct decisions when safety is at risk? Despite the significant advances in deep learning, near-human performance on several tasks did not translate to robustness in adversarial settings~\\cite{szegedy2013intriguing,biggio2013evasion,athalye2017synthesizing,papernot2016transferability}. Several ML models, especially Deep Neural Networks (DNNs), are found susceptible to perturbed inputs resulting in potential safety implications~\\cite{dreossi2018semantic, huang2011adversarial, janasosp}. While many defenses have been proposed to make ML models more robust, only a few have survived adaptive attack strategies~\\cite{athalye2018obfuscated}. Among these defenses are certification~\\cite{raghunathan2018certified} and its variants~\\cite{cohen2019certified,lecuyer2018certified}, and adversarial training~\\cite{madry-iclr2018}. The common theme across these defenses is to ensure that the model's output is stable within an $p$-norm ball around the input, where the $p$-norm is a crude proxy for human imperceptibility. \n\nHowever, these defenses suffer significant degradation in accuracy~\\cite{tsipras2018there,jacobsen2019exploiting,cohen2019certified,lecuyer2018certified}. Recent results in literature have demonstrated that there are settings where robustness and accuracy cannot be simultaneously achieved~\\cite{tsipras2018there,Bubeck:19,win-win}. Additionally, these defenses have largely been detached from the real world; they do not consider the semantic and contextual properties of the classification problem. Thus, the design of efficient and robust DNN classifiers remains an open research problem~\\cite{carlini2019evaluating}. How can we defend against powerful adversaries and avoid the shortcomings of previous defenses? \n\n\n\\begin{figure}[ht]%\n \\centering\n \\subfloat[Decision regions without invariant]{{\\includegraphics[width=3.25cm]{figures\/DB1.png} }}%\n \\qquad\n \\subfloat[Decision regions with invariant enforced]{{\\includegraphics[width=3.25cm]{figures\/DB2.png} }}%\n \\caption{\\small Intuitive understanding of how invariances reduce adversarial mobility. In Figure 1(b), the bottom left red region is no longer accessible to the adversary.}%\n \\label{fig:invariances}%\n\\vspace{-5mm}\n\\end{figure}\n\n\nFortunately, there is one constraint on the power of the adversary: \\textit{the adversarial perturbation should not modify specific salient features which will change the perception\/interpretation of the object.} Take the example of classifying a US road sign (Figure~\\ref{fig:stop}). The shape of the sign is {\\em intentionally} designed to be octagonal; few (if any) other signs have this particular shape. To attack such a sign, an attacker can no longer generate arbitrary perturbations; if the new shape deviates significantly from an octagon, then one can argue that the new object has deviated significantly from (and is no longer) the \\texttt{Stop} sign. 
Thus, by forcing the adversary to satisfy these constraints (such as preserving the shape), we increase the cost of the attack relative to the adversary's resources, thereby satisfying Saltzer and Schroeder's ``work factor'' principle for the design of secure systems~\cite{saltzer1975protection}.

These constraints posed by the defender can be modeled as {\em invariances}, which can be explicitly enforced in the classification procedure. When enforced, such invariances limit the adversary's attack space, consequently making the classifier more robust (refer Figure~\ref{fig:invariances}). Given an input domain $\mathcal{X}$, invariances (denoted $\mathcal{I}$) can be captured mathematically as a relation $\mathcal{I} \subseteq \mathcal{X} \times \mathcal{X}$. Observe that defense strategies which ensure that the label of a datapoint remains fixed in an $\varepsilon$-ball around it can also be captured in our framework, \textit{i.e.}\@\xspace points $(x,x') \in \mathcal{X} \times \mathcal{X}$ belong to the invariant $\mathcal{I} = \{(x, x') : ||x - x'||_{p} \leq \varepsilon\}$ if and only if they have the same label.

To the best of our knowledge, no previous work has investigated how to embed a set of invariances into a learning process. In this paper, we wish to understand how one can re-design the classification process to be {\em more robust} given a set of pre-determined invariances. By doing so, we wish to understand if we can improve the trade-off between accuracy and robustness. Specifically, we discuss:

\vspace{1mm}
\noindent{\em Modelling domain knowledge as invariances:} We argue that classification problems in the wild can benefit from utilizing domain knowledge, both in terms of improving performance and improving robustness. Enforcing these invariances introduces additional constraints on the attacker's search space. In particular, we prove that applying invariances alters results from the literature that present statistical settings~\cite{tsipras2018there} (refer Appendix~\ref{tsipras}) and computational settings~\cite{win-win} (refer \S~\ref{sec:robustness}) where accuracy and robustness fail to co-exist.


\vspace{1mm}
\noindent{\em Data-driven approaches to obtain invariances:} We show that in scenarios where it is intractable to use domain expertise, clustering embeddings obtained from the training data can give sufficient insight to obtain invariances. Through simple experiments on the MNIST and CIFAR-10 datasets, we observe that the clusters generated by intermediary and ultimate layers (respectively) result in distinct label equivalences. We proceed to analyze if these equivalences can be used to design classification schemes with enhanced robustness (refer \S~\ref{sec:insights}).

\vspace{1mm}
\noindent{\em Hierarchical Classification:} When enforced, the invariances split the output/label space of the classifier into equivalence classes, thereby partitioning the classification problem into smaller pieces (\textit{e.g.,}\@\xspace refer Figure~\ref{fig:road-sign-setup}). Thus, we can design a hierarchical classification scheme (conceptually similar to a decision tree). At the decision nodes of the hierarchy, we have classifiers that enforce the invariances (by forcing inputs to label equivalence classes). At the leaves of the hierarchy, we have classifiers that predict within an equivalence class of labels (refer \S~\ref{sec:hierarchies} and \S~\ref{sec:general}).
By ensuring that all of these constituents (leaves and intermediary classifiers) are robust, we are able to obtain a hierarchical classifier that is more robust than the sum of its parts.

\vspace{1mm}
\noindent{\em Robustness-Accuracy Improvement:} Employing the invariances within the hierarchical classifier provides robustness gains on two levels. At a high level, a sequence of invariances limits the prediction to an equivalence class of labels. Equivalence classes limit the adversary's targeted attack capability; they can only target labels within the equivalence class. On a more fine-grained level, we show that reducing the number of labels improves the robustness certificate (defined in \S~\ref{certs}) of the classifier compared to the original classifier predicting within the full set of labels. More importantly, we show that these gains in robustness do not harm accuracy (refer \S~\ref{sec:robustness_analysis} and \S~\ref{sec:robustness_trade-off}).

As case studies of the real-world application of the invariances and the proposed hierarchical classifier, we study classification problems in two domains -- vision and audio. In the vision domain, we study the problem of road sign classification. Here, we demonstrate that {\em shape} invariances can be enforced/realized by using robust inputs. One realization of this approach predicts shape from robust LiDAR\xspace point clouds as opposed to image pixels. From Figure~\ref{fig:lidar-teaser}, one can observe that the pixel-space input perturbed using adversarial patches from previous work~\cite{roadsigns17} is misclassified as a \texttt{SpeedLimit} sign (which is supposed to be circular), but the LiDAR\xspace point cloud shows that the shape of the sign is still an octagon. In the audio domain, we study the problem of speaker identification. Here, we showcase another approach. Specifically, we show that {\em gender} invariances (through explicit gender prediction) can be enforced by using a classifier that is trained to be robust. Such a robust classifier is able to perform predictions accurately despite using features that are not robust.

Our results show that:
\begin{enumerate}
\item Our approach is agnostic to the domain of classification. For both audio and vision tasks, we observe gains in robustness (sometimes a 100\% increase) and accuracy, suggesting that they may no longer need to be at odds.
\item The exact choice of implementation of the different components of the hierarchy bears no impact on the correctness of our proposal. In \S~\ref{casestudy1}, we implement the root classifier of our hierarchy as a regular DNN trained to operate on robust features (obtained from a different input modality), and in \S~\ref{casestudy2}, we implement the root classifier as a smoothed classifier~\cite{cohen2019certified}. Both approaches result in a hierarchical classifier with increased robustness and accuracy.
\item By adding one additional invariant (location), we observe significant gains in robustness for the vision task (refer \S~\ref{sec:morerobustfeatures}).
\end{enumerate}

\section{Background}
\label{sec:background}

\subsection{Machine Learning Primer}
\label{sec:notation}

Consider a space $\mathcal{Z}$ of the form $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ is the input/sample space and $\mathcal{Y}$ is the output space.
For example, in our case studies detailed in \S~\ref{expts}, the inputs are images of road signs or audio samples from speakers, and the outputs are the exact road sign or the speaker identities (respectively). Often, we assume $\mathcal{X}=\mathbb{R}^n$ and $\mathcal{Y} = \mathbb{R}^m$. Let $\mathcal{H}$ be a hypothesis space (\textit{e.g.,}\@\xspace weights of a DNN). We assume a loss function $L:\mathcal{H} \times \mathcal{Z} \rightarrow \mathbb{R}$ that measures the {\em disagreement} of the hypothesis with the ground truth. The output of the learning algorithm is a classifier, which is a function $F$ that accepts an input $x \in \mathcal{X}$ and outputs $y \in \mathcal{Y}$. To emphasize that a classifier depends on a hypothesis $\theta \in \mathcal{H}$, which is output by the learning algorithm, we will denote it as $F_{\theta}$; if $\theta$ is clear from the context, we will sometimes simply write $F$. We denote by $F_i(x)$ the probability that the input $x$ has the label $i \in [m]$, such that $0 \leq F_i(x) \leq 1$ and $\sum_{i}{F_i(x)} = 1$.

\subsection{Threat Model}
\label{sec:threat}

Our threat model comprises white-box adversaries that behave in a passive manner and are capable of generating {\em human imperceptible} perturbations. Since human imperceptibility is hard to measure, it is often approximated using the $p$-norm. Thus, the adversary's objective is to generate $p$-norm bounded perturbations (typical values for $p$ include 0, 1, 2, $\infty$); inputs modified using these perturbations are misclassified (\textit{i.e.}\@\xspace the label predicted by the classifier does not match the true label). The misclassifications can be targeted (\textit{i.e.}\@\xspace the target label output by the classifier is chosen by the adversary) or untargeted. Formally, an adversary wishes to solve the following optimization problem:
\begin{gather*}
\label{eq:original_attack}
\min \quad \|\delta\|_{p}, \quad \text{ s.t. } \\
\argmax_i F_i(x + \delta) = y^* \text{ (\textit{targeted}) } \\
\text{ or } \argmax_i F_i(x+\delta) \neq \argmax_i F_i(x) \text{ (\textit{untargeted}).}
\end{gather*}

There exists an abundant literature on generating such adversarial examples~\cite{madry-iclr2018,goodfellow2014explaining,moosavi2015deepfool,moosavi2017universal,papernot2017practical}. While adversaries are capable of generating perturbations that are not $p$-norm bounded~\cite{roadsigns17,kurakin2016adversarial}, such adversaries are out of the scope of this work; even in the simpler $p$-norm bounded setting, progress has been limited. Additionally, the robustness measurements we make (as defined in \S~\ref{certs}) rely on the assumption that the generated perturbations are $p$-norm bounded.
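To make the untargeted formulation concrete, it is commonly approximated with iterative gradient methods such as projected gradient descent (PGD)~\cite{madry-iclr2018}. The following is a minimal PyTorch sketch of an $\ell_\infty$-bounded PGD attack; the step sizes and iteration count are illustrative defaults, not values tied to our evaluation:

\begin{verbatim}
import torch

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Untargeted PGD: maximize the loss within an l_inf ball of radius eps."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            # Ascend along the sign of the gradient, then project into the ball.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            # Keep the perturbed input a valid image (pixel values in [0, 1]).
            delta.data = (x + delta).clamp(0, 1) - x
        delta.grad.zero_()
    return (x + delta).detach()
\end{verbatim}

Standard strengthenings (random restarts, random initialization within the ball) are omitted for brevity.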
\subsection{Measuring Robustness}
\label{certs}

Several defense strategies have been devised to protect models/classifiers from adversarial examples~\cite{papernot2016distillation,samangouei2018defense,dhillon2018stochastic,meng2017magnet}; most of them have failed~\cite{athalye2018obfuscated}. Orthogonal research has focused on obtaining regions around an input where provably no adversarial examples exist. This notion of {\em certified robustness} can be formalized by the $\varepsilon$-ball around $x$, defined as $B_{p,\varepsilon}(x) = \{ z \in \mathcal{X} \mid \|z-x\|_p \leq \varepsilon \}$; the $\varepsilon$-ball of $x$ is a region in which no $p$-norm bounded adversarial example (for $x$) exists. The work of Lee \textit{et al.}\@\xspace~\cite[\S2]{DBLP:journals/corr/abs-1906-04948} contains a detailed discussion of various certification methods and their shortcomings (primarily due to scalability).

We center our discussion on the robustness certificate of Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified} for $2$-norm bounded perturbations. Their certificate computation scales to large DNNs, and several adversarial attacks generate adversarial examples by bounding the $2$-norm, making their certificate relevant\footnote{Through the remainder of the paper, any discussion of robustness certificates refers to the one in~\cite{cohen2019certified}.}. The work of Lee \textit{et al.}\@\xspace~\cite{DBLP:journals/corr/abs-1906-04948} generalizes the result in~\cite{cohen2019certified} to different noise distributions and other norm bounds.

\noindent{\bf Randomized Smoothing:} In a nutshell, Cohen \textit{et al.}\@\xspace propose to transform any classifier into a smoothed classifier through the addition of isotropic Gaussian noise (denoted $\eta$). We refer readers to~\cite[\S2,3,4]{cohen2019certified} for more details. They prove that the robustness certificate of each input $x$ for the smoothed classifier is lower-bounded as:
\begin{equation}
\label{eq:robustnesscertificate}
 R \geq \frac{\sigma}{2}\left(\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})\right),
\end{equation}
where (a) $\Phi^{-1}$ is the inverse of the standard Gaussian CDF, (b) $\underline{p_A}$ is a lower bound on the probability of the top label, (c) $\overline{p_B}$ is an upper bound on the probability of the runner-up label, and (d) $\eta$ is drawn from $\ensuremath{\mathcal{N}}(0,\sigma^2 I)$. They also devise an efficient algorithm to compute this certificate for large DNNs by empirically estimating the probability of each class's decision region via Monte Carlo sampling under the distribution $\ensuremath{\mathcal{N}}(x,\sigma^2 I)$.

Other certified smoothing approaches have similar robustness bounds that are functions of the margin between the top and runner-up labels in the base classifier~\cite{lecuyer2018certified, HeinA17}, so the trends in the results discussed in \S~\ref{expts} are independent of the actual certification methodology.
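To make Equation~\ref{eq:robustnesscertificate} concrete, the following is a minimal sketch of the certification step. It assumes a hypothetical Monte Carlo routine \texttt{sample\_counts} (which runs the base classifier on $n$ noisy copies of $x$ and returns per-class counts) and follows the common simplification $\overline{p_B} = 1 - \underline{p_A}$ used in~\cite{cohen2019certified}, under which the certificate reduces to $\sigma\,\Phi^{-1}(\underline{p_A})$:

\begin{verbatim}
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certify(sample_counts, x, sigma, n=100_000, alpha=0.001):
    """Return (top_class, certified l2 radius), or None to abstain."""
    counts = sample_counts(x, n, sigma)   # counts under N(x, sigma^2 I)
    top = int(np.argmax(counts))
    # One-sided (1 - alpha) lower confidence bound on p_A (Clopper-Pearson).
    p_a = proportion_confint(counts[top], n, alpha=2 * alpha, method="beta")[0]
    if p_a <= 0.5:
        return None                       # cannot certify; abstain
    # With p_B <= 1 - p_A, Equation (1) simplifies to sigma * Phi^{-1}(p_A).
    return top, sigma * norm.ppf(p_a)
\end{verbatim}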
\section{The Building Blocks}

Through the remainder of this section, we discuss
\begin{itemize}
\item how invariances can be used to restrict the adversary's freedom by introducing equivalence classes in the label space (\S~\ref{sec:invariances}).
\item how label equivalences create a hierarchy, which provides an opportunity to introduce a hierarchical paradigm for classification (\S~\ref{sec:hierarchies}).
\item how using a combination of hierarchies and invariances, with specific modifications to (a) the input or (b) the model architecture, results in robustness and accuracy no longer being at odds (\S~\ref{sec:robustness}).
\end{itemize}

\subsection{Hierarchies}
\label{sec:hierarchies}


In general, one can observe that if classification were performed in a hierarchical manner, various concepts could be verified/enforced at different levels of the hierarchy. Consider a simple example of a decision tree, where at each level a local classification decision (based on a particular feature) occurs. In terms of understanding the relationship between hierarchies and test accuracy, one school of thought believes that knowledge distillation~\cite{hinton2015distilling,caruana2006model} using ensembles can drastically improve generalization performance. Another believes that concepts form hierarchies which can be exploited to improve accuracy~\cite{olah2017feature}. We investigate the latter. Several researchers have studied the notion of hierarchical classification~\cite{yan2015hd,deng2014large,salakhutdinov2012learning}; at a high level, the early stages of the hierarchical classifier perform classification based on high-level concepts shared by various datapoints (such as shape or color), and the later stages perform more fine-grained classification. The intuition for using a hierarchy stems from earlier observations that DNNs extract high-level features (of the input images) in their earlier layers~\cite{zeiler2014visualizing}; researchers believed that explicitly enforcing this would result in better performance~\cite{HD-CNN,category_structure,treepriors_transferlearning}. Prior work also contains abundant literature on designing architectures to improve the test performance of machine learning models; in the context of DNNs, neural architecture search is one such emerging area~\cite{zoph2016neural}.

Intuitively, this process is inspired by how humans {\em may} perform classification~\cite{national2008human,hunt1962nature,dunsmoor2015categories,lopez1992development}. Consider the case where a human is tasked with identifying a road sign. One of the first observations the human makes about the road sign is its shape, its color, or its geographic location. These observations provide the human with priors, and consequently enable the human to perform more accurate classification (based on these priors).
In essence, these priors encode abundant context for humans, enabling better classification performance. For example, if a human were asked to identify the sign in Figure~\ref{fig:stop}, the human may first reduce the potential candidates for the sign based on the {\em shape} of the sign, and then make a final classification among different signs of the {\em same shape}. Geirhos \textit{et al.}\@\xspace~\cite{geirhos2017comparing,geirhos2018generalisation} make a similar observation; they observe that humans are more robust to changes in the input (such as textural or structural changes), potentially due to the (hierarchical) manner in which they perform classification. DNNs operate in a similar manner, extracting filters that behave similarly to human perception~\cite{zeiler2014visualizing}.


\subsection{Invariances}
\label{sec:invariances}

From the discussion in \S~\ref{sec:hierarchies} (and the related work in \S~\ref{sec:related_work}), we observe that encoding priors about the classification task can help a model learn better. Our contribution is to demonstrate how encoding specific information (regarding the classification task, input features, network architectures, etc.) as invariances can help increase robustness. The inevitable existence of adversarial examples suggests that DNNs are susceptible to very subtle changes in the input~\cite{DBLP:journals/corr/abs-1809-02104}. However, humans are (significantly) more robust to such changes\footnote{Earlier work suggests that humans are fooled by certain types of inputs~\cite{elsayed2018adversarial}.}, and can even identify these perturbations~\cite{zhou2019humans}. In the case of road sign classification, the work by Eykholt \textit{et al.}\@\xspace~\cite{roadsigns17} introduces realizable perturbations (those which are larger than the $p$-norm bounded perturbations considered earlier) that fool DNNs, but humans are still able to classify these signs correctly. We believe that this is due to the priors that humans have regarding classification tasks (\textit{e.g.,}\@\xspace in Germany, octagonal road signs can only be \texttt{Stop} signs, yellow signs can only be \texttt{SpeedLimit} signs, etc.). To further motivate our discussion, Geirhos \textit{et al.}\@\xspace observe that DNNs are more susceptible to textural changes~\cite{geirhos2018imagenet}, and that {\em explicitly enforcing} a bias towards shapes can improve robustness.

\begin{figure}
  \centering
  \includegraphics[width=0.5\linewidth]{figures/stop.jpg}
  \caption{\small A US \texttt{Stop} sign with stickers. Figure obtained from~\cite{roadsigns17}.}
  \label{fig:stop}
\end{figure}

We believe that this observation is more general; by enforcing such biases, which we collectively call invariants/invariances, one can improve DNN robustness. Mathematically, we denote an {\em invariant} as a relation $\mathcal{I} \subseteq \mathcal{X} \times \mathcal{X}$. For example, let $S : \mathcal{X} \rightarrow \mathcal{S}$ denote the function that predicts the shape of an input $x \in \mathcal{X}$ (where $\mathcal{S}$ is the set of shapes). The {\em shape} invariant can then be formalized as $\mathcal{I} = \{(x, \tilde{x}) : S(x) = S(\tilde{x}) \}$.
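As a minimal illustration, such an invariant is simply a predicate over pairs of inputs, and enforcing it restricts predictions to one label equivalence class. The sketch below assumes a hypothetical shape predictor \texttt{predict\_shape} (\textit{e.g.,}\@\xspace one operating on a robust modality such as LiDAR\xspace, as in \S~\ref{casestudy1}); the shape-to-label map is illustrative:

\begin{verbatim}
# Hypothetical equivalence classes induced by the shape invariant.
SHAPE_TO_LABELS = {
    "octagon":  {"Stop"},
    "triangle": {"Yield"},
    "circle":   {"SpeedLimit25", "SpeedLimit45"},
}

def invariant_holds(x, x_tilde, predict_shape):
    """(x, x_tilde) is in the relation I iff both map to the same shape."""
    return predict_shape(x) == predict_shape(x_tilde)

def admissible_labels(x, predict_shape):
    """Enforcing the invariant restricts predictions to one equivalence class."""
    return SHAPE_TO_LABELS[predict_shape(x)]
\end{verbatim}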
Several questions arise, the first of which is: {\em how does one identify the invariances related to a particular task?} This is a challenging problem. In our evaluation in \S~\ref{sec:insights} and \S~\ref{expts}, we show two different approaches: (a) measuring similarity between datapoints' embeddings in different feature representations, and (b) using domain expertise related to the classification task. These approaches broadly encompass two different types of invariances: human-uninterpretable and human-interpretable. While the exact nature of the invariant is not important for our methodology, interpretability/explainability may help in debugging and understanding model failures. We discuss this in detail in \S~\ref{sec:discussion}.

We believe that both approaches have merit; while DNNs have displayed the capability to generalize, they are often used for very specific tasks in the real world, which motivates the use of domain expertise. Recent work in adversarial training~\cite{new_madry_2019} suggests that human-imperceptible features are potentially useful in improving robustness, and these features can be extracted by using axiomatic attribution methods~\cite{sundararajan2017axiomatic} or by observing intermediary representations~\cite{papernot2018deep}.

Note that the existence of invariances induces a hierarchy in the labels. In the road sign classification example, assuming that shape is an invariant, road signs can be grouped based on their shape, and then, within each group, partitioned in a more fine-grained manner based on their individual labels (refer to Figure~\ref{fig:road-sign-setup}). Thus, we observe that invariances partition the label space into equivalence classes. The existence of equivalence classes is analogous to the existence of hierarchies, and this ties well with the hierarchical classification paradigm discussed earlier.


\noindent{\bf Takeaway 1:} Invariances are task specific, and can be obtained using domain knowledge in some cases, and by analyzing properties of the dataset in others.


The next question is: {\em how does one encode such invariances?} Before we discuss this further, it is important to note that invariances are a property of both the data distribution that we analyze and the model that is used to learn/predict with this data. Changes to either will result in datapoints (or the embeddings they generate) violating the invariances.

The easiest way to encode invariances is to add additional features that are directly correlated with the invariances. For example, one could add an additional feature to an image of a road sign that is suggestive of its shape. However, assuming that the adversary has no access to this additional feature is too strong an assumption. An alternative approach is to encode invariance information in the standard features used for classification; these features now serve two purposes: they carry useful information regarding the invariance, and they carry useful information about how the input can be classified. However, the existence of adversarial examples suggests that not {\em all} input features are useful for encoding information related to the invariances (as some input features are very susceptible to adversarial perturbations). To this end, Madry \textit{et al.}\@\xspace~\cite{madry-iclr2018} define {\em robust features} as those which are hard to adversarially perturb, yet are highly correlated with the correct classification output. Thus, one can utilize these robust features for invariance enforcement\footnote{It is important to note that robust features, by themselves, are not sufficient for performant classification~\cite{madry-iclr2018}.}.
For example, one could identify robust features in a \texttt{Stop} sign image and use them to verify that the sign is indeed octagonal. Alternatively, one could use domain knowledge to obtain robust features (by strategically modifying the inputs to obtain a different set of input features). In our evaluation in \S~\ref{sec:insights}, we highlight how robust features are useful for invariance enforcement. In our case study in \S~\ref{casestudy1}, we show how inputs from a different modality can be robust and can be used for invariance enforcement. However, in the absence of robust features (or a different set of robust input features), invariance encoding must happen in the vulnerable features. While this is not as accurate as the other two approaches, we discuss below how it can still be useful.


\noindent{\bf Takeaway 2:} Ideally, invariances are encoded using robust (input) features. In the absence of such robust features, invariances can be encoded in the vulnerable features as well.


The final question is: {\em how does one verify that the invariance has been preserved?} If robust features are used to encode invariance information, we can use an off-the-shelf classification model to check whether the invariance has been preserved by observing the outcome of the classification, \textit{i.e.}\@\xspace whether the input falls into the correct equivalence class. For example, if the robust features obtained from an adversarially perturbed \texttt{Stop} sign enable it to be classified into the {\em octagonal} equivalence class, the {\em shape} invariance has been preserved. In the absence of robust features, invariance encoding occurs in the vulnerable features; one can then train a model to be robust in order to check whether the invariance is preserved in these features. We describe a realization of this approach in \S~\ref{sec:insights}.


\noindent{\bf Takeaway 3:} Since the invariances partition the label space into equivalence classes, verifying that an invariance is preserved is as simple as checking whether the input (benign or adversarially perturbed) falls into the correct equivalence class.


We would like to stress that the connection between invariances and stability (of learning) is not new; Arjovsky \textit{et al.}\@\xspace~\cite{arjovsky2019invariant} suggest that invariant properties (or stable properties, as they are referred to in their work) are essential for improving generalization. As they note, to improve robustness, one must detect (and enforce) these stable properties to avoid the spurious correlations that exist in data. Similarly, Tong \textit{et al.}\@\xspace~\cite{tong2017improving} state that it is hard to simultaneously modify what they term {\em conserved features} and preserve functionality (in the case of malware).

\subsection{Robustness}
\label{sec:robustness}

We have seen how invariances can restrict the adversary's attack space. We now formalize how these restrictions result in increased robustness by highlighting the construction of Degwekar \textit{et al.}\@\xspace~\cite{win-win} based on Pseudo-Random Functions (PRFs). This construction exhibits a scenario where a robust classifier exists~\cite{Bubeck:19} but is hard to find in polynomial time; once a specific invariance is enforced, however, a robust classifier becomes easy to learn.


\noindent{\em Construction:} Let $\mathcal{B} = \{0,1\}$, and let $F_k : \mathcal{B}^n \rightarrow \mathcal{B}$ be a PRF with a secret key $k$.
Let $(Encode,Decode)$ be an error-correcting code (ECC), such as those in~\cite{GI01}, where $Encode$ encodes a message, $Decode$ decodes a message, and $Decode$ can tolerate a certain number of errors in its input. Consider the following two distributions:
\begin{eqnarray*}
D_0 & = & \{0,Encode\left(x,F_k(x)\right)\} \\
D_1 & = & \{1,Encode\left(x,1-F_k (x)\right)\},
\end{eqnarray*}
where $x$ is drawn uniformly from $\mathcal{B}^n$, and `,' denotes concatenation. Given these distributions, there exists a classifier that distinguishes between the two, because the first bit always indicates which distribution ($D_0$ or $D_1$) the sample belongs to. This classifier has perfect natural accuracy and is easy to learn~\cite{Bubeck:19}. A robust classifier $R$ also exists (under assumptions we explain next). Due to the properties of the ECC, $R$ can tolerate a constant fraction of errors among {\em all but the first bit}. If the attacker can flip the first bit, a robust (or, indeed, any accurate) classifier is hard to learn when $R$ is unaware of $k$. However, if $R$ has access to the secret key $k$, then given $\tilde{x} \sim D_b$ where $b \in \mathcal{B}$, $R$ first executes $Decode(\tilde{x}_{1:})$ (where $\tilde{x}_{i:}$ denotes all but the first $i$ bits of $\tilde{x}$, \textit{i.e.}\@\xspace $\tilde{x} = \tilde{x}_{0:}$) and obtains $x$. The classifier then checks the last bit of the decoded message to see whether it is $F_k(x)$ or $1-F_k(x)$. Without knowing the key $k$, this verification would not always be accurate; this follows from the fact that $F_k(\cdot)$ is a PRF (\textit{i.e.}\@\xspace a probabilistic polynomial-time adversary essentially cannot distinguish $F_k(x)$ from a random bit)~\cite{win-win}. This setting demonstrates a situation where {\it a robust classifier exists but cannot be found in polynomial time (without knowledge of $k$).}

However, if the invariant $\mathcal{I} = \{(x,\tilde{x}) \,|\, x_0 = \tilde{x}_0\}$ (where $x_i$ denotes the $(i+1)^{th}$ bit of $x$) is enforced -- \textit{i.e.}\@\xspace the adversary cannot flip the first bit -- then the hardness result above is trivially negated. Thus, it is clear how invariances can enable robustness.
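A toy sketch of this construction follows; the instantiations are our own (HMAC-SHA256 as the PRF, a 3-fold repetition code as the ECC), not those of~\cite{win-win}. With the first-bit invariant enforced, the keyed classifier recovers $b$ even when one bit per codeword block is flipped:

\begin{verbatim}
import hmac, hashlib, random

def prf(key: bytes, x: str) -> int:
    """Toy PRF F_k: {0,1}^n -> {0,1}, instantiated with HMAC-SHA256."""
    return hmac.new(key, x.encode(), hashlib.sha256).digest()[0] & 1

def encode(bits: str) -> str:  # 3-fold repetition code
    return "".join(b * 3 for b in bits)

def decode(bits: str) -> str:  # majority vote per 3-bit block
    return "".join("1" if bits[i:i+3].count("1") >= 2 else "0"
                   for i in range(0, len(bits), 3))

def sample(key: bytes, b: int, n: int = 16) -> str:
    """Draw a sample from D_b: first bit b, then Encode(x, F_k(x) XOR b)."""
    x = "".join(random.choice("01") for _ in range(n))
    return str(b) + encode(x + str(prf(key, x) ^ b))

def robust_classify(key: bytes, s: str) -> int:
    """With k: decode all but the first bit, compare last message bit to F_k(x)."""
    msg = decode(s[1:])
    x, last = msg[:-1], int(msg[-1])
    return last ^ prf(key, x)  # equals b even if the codeword was corrupted

key = b"secret"
s = sample(key, b=1)
# Flip one bit per 3-bit block beyond the first bit (the enforced invariant);
# the repetition code absorbs these errors and the prediction is unchanged.
corrupted = s[0] + "".join(("1" if c == "0" else "0") if i % 3 == 0 else c
                           for i, c in enumerate(s[1:]))
assert robust_classify(key, corrupted) == 1
\end{verbatim}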
Intuitively, invariances help by limiting the attacker's actions (refer Figure~\ref{fig:invariances}). Recall from our earlier discussion that these invariances also induce label equivalences. For the remainder of this work, we focus on invariances that produce {\em disjoint label equivalence classes}; this property makes our analysis easier (but has no bearing on our approach).

As stated earlier, we measure robustness through the certificate provided by the work of Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified}\footnote{However, we do show the generality of our approach using adversarial training~\cite{madry-iclr2018} and standard datasets (such as MNIST and CIFAR-10) in \S~\ref{sec:insights}.}. Recall that the certificate is proportional to the margin between the probability estimates of the {\em top} label and the {\em runner-up} label, \textit{i.e.}\@\xspace $R \propto \Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})$. Thus, one can increase the certificate by increasing this margin. In \S~\ref{casestudy1}, we provide detailed arguments and constructions showing how we can obtain an increased margin.

In principle, one could envision a scenario where multiple invariances (at different levels of the hierarchy) result in (disjoint) label equivalence classes of size 1 (at the leaves of the hierarchy). In such a scenario, the robustness certificate for each equivalence class is $\infty$ (even perturbations with large $p$-norms will not cause misclassifications).

\section{Early Insights}
\label{sec:insights}

Using two datasets (CIFAR-10~\cite{krizhevsky2014cifar} and MNIST~\cite{lecun2010mnist}), we show how hierarchies used to enforce invariances can provide robustness gains in combination with other robustness techniques, such as adversarial training~\cite{madry-iclr2018}. We also present a proof-of-concept approach suggesting that invariances are easy to obtain in a data-guided manner.
The setup for our experiments is described in Appendix~\ref{app:setup}.

\subsection{Obtaining Invariances}
\label{sec:obtaining}


\begin{algorithm}
\caption{Data-driven equivalence class generation}
\begin{algorithmic}[1]
    \State Obtain embeddings \texttt{C} of all points in the dataset by observing the output of a DNN \texttt{F} at layer \texttt{n}.
    \State \texttt{clusters} $\leftarrow$ \texttt{ClusterAlg}(\texttt{C})
    \State Verify that the number of centroids $\lvert$\texttt{clusters}$\rvert$ = $k$.
    \State Verify that the clusters in \texttt{clusters} are well separated.
    \For{i = 1 $\cdots$ k}
        \State \texttt{cluster} $\leftarrow$ \texttt{clusters[i]}
        \State \texttt{L[i]} $\leftarrow$ {\em unique} labels of embeddings in \texttt{cluster}
    \EndFor
    \If{\texttt{L[i]} $\cap$ \texttt{L[j]} $=\emptyset$ for all $i \neq j$}
        \State Each \texttt{L[i]} denotes a label equivalence class.
    \Else
        \State Repartition labels in $\{$\texttt{L[i]}$\}_{i=1}^k$ s.t. \texttt{L[i]} $\cap$ \texttt{L[j]} $=\emptyset$ for all $i \neq j$
    \EndIf
    \State Construct classifier \texttt{F}$_{intermediate}$ to classify inputs into an equivalence class in $\{1,\cdots,k \}$ such that the corresponding label equivalence classes are \texttt{L[1]}$\cdots$\texttt{L[k]}
\end{algorithmic}
\end{algorithm}

\vspace{1mm}
\noindent{\bf MNIST:} To obtain the invariances, we first trained a small CNN model\footnote{3 convolutional layers followed by a FC layer and a softmax layer.} on the MNIST dataset. We gather embeddings\footnote{In the context of neural networks, embeddings are the low-dimensional, continuous vector representations of discrete variables that are learned by the network.} from the intermediary layers and cluster them using k-means clustering (similar to the approach proposed by Papernot \textit{et al.}\@\xspace~\cite{papernot2018deep}). We observe that for $k=2$, the digits 0, 2, 3, 5, 6, and 8 form one cluster, while the remaining digits form the other.

\noindent{\bf CIFAR-10:} To obtain the invariances, we first adversarially train a CNN\footnote{A Wide ResNet with 34 layers.} on the CIFAR-10 dataset. We then use its predictions to produce a confusion matrix (refer Figure~\ref{fig:confusionmatrix1} in the Appendix). From this matrix, one can clearly observe two equivalence classes partitioning the label space -- non-living objects and living objects. Note that this invariance is also human-understandable.


\vspace{1mm}
Note that the clustering could potentially differ if the outputs of a different layer were observed; however, this was not the case in our experiments. Additionally, this experiment serves only as a proof-of-concept (see the sketch below). We believe that with such forms of intermediary clustering, one can automate the creation of such equivalence classes without requiring domain-specific knowledge.
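A minimal sketch of the procedure in Algorithm 1, assuming a hypothetical \texttt{get\_embeddings} function that returns the layer-\texttt{n} activations of a trained network; the repartitioning step (assigning each label to the cluster containing it most often) is one possible instantiation, not the only one:

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def label_equivalence_classes(get_embeddings, X, y, k=2):
    """Cluster embeddings; collect the unique ground-truth labels per cluster."""
    C = get_embeddings(X)                  # embeddings at layer n, shape (N, d)
    y = np.asarray(y)
    assign = KMeans(n_clusters=k).fit_predict(C)
    L = [set(y[assign == i]) for i in range(k)]
    # Algorithm 1 requires pairwise-disjoint label sets; repartition by
    # assigning each label to the cluster that contains it most often.
    if any(L[i] & L[j] for i in range(k) for j in range(i + 1, k)):
        L = [set() for _ in range(k)]
        for label in np.unique(y):
            counts = [np.sum(assign[y == label] == i) for i in range(k)]
            L[int(np.argmax(counts))].add(label)
    return L                               # L[i] is a label equivalence class
\end{verbatim}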
\subsection{Methodology}
\label{sec:method}

We observe that for both datasets, we were able to obtain an invariant that partitions the label space into two equivalence classes. To enforce such an invariant, we train a {\em root} classifier to classify inputs into one of these two equivalence classes. To make sure the root is robust to adversarial examples, we train it using adversarial training~\cite{madry-iclr2018} (minimizing the $\ell_\infty$-norm of the generated perturbation). For the CIFAR-10 dataset, the root classifier has a natural accuracy of 96.57\% and an adversarial accuracy of 84.25\%. For the MNIST dataset, the root classifier has a natural accuracy of 98.51\% and an adversarial accuracy of 93.68\%. We detail all parameters used for our experiments in Appendix~\ref{app:insights}. Thus, each hierarchy comprises one root and two leaf classifiers.



\subsection{Results}
\label{prelim_results}


\begin{table}[h]
\small
\begin{center}
  \begin{tabular}{p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm}}
    \toprule
    \text{\footnotesize Model} & \text{\footnotesize Natural (\%)} & \text{\footnotesize Adv (\%)} & \text{\footnotesize Budget (\%)} \\
    \midrule
    {\footnotesize Baseline} & {\footnotesize 98.79\%} & {\footnotesize 87.13\%} & {\footnotesize 87.13\%}\\
    {\footnotesize Hierarchy} & {\footnotesize 97.51\%} & {\footnotesize 84.72\%} & {\footnotesize 90.42\%}\\
    \bottomrule
  \end{tabular}
\caption{\small For the MNIST dataset and the invariances described in \S~\ref{sec:obtaining}, observe that adversarial accuracy, denoted Adv (\%), decreases in the hierarchical classifier compared to the baseline. The legends are (a) {\bf Natural (\%)}: natural accuracy, (b) {\bf Adv (\%):} adversarial accuracy under a PGD attack where the adversary attacks the root and both leaf classifiers, and (c) {\bf Budget (\%):} restricted adversarial accuracy under PGD where the adversary is only able to attack one classifier chosen from the hierarchy (in our example, the leaf classifier for the 6 classes).}
\label{table:mnist_adv}
\end{center}
\vspace{-5mm}
\end{table}

\noindent{\bf MNIST:} For the given invariances, we observe a {\em decrease} in adversarial accuracy of 2.41 percentage points with our hierarchical classification (refer Table~\ref{table:mnist_adv}). This suggests (a) that the invariant obtained using the data-driven methodology is not optimal for increasing robustness, or (b) that some classes of invariances cause more harm than good (we discuss this in \S~\ref{sec:discussion}). This further motivates our other approach of obtaining invariances using domain knowledge.


\begin{table}[h]
\small
\begin{center}
  \begin{tabular}{p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm}}
    \toprule
    \text{\footnotesize Model} & \text{\footnotesize Natural (\%)} & \text{\footnotesize Adv (\%)} & \text{\footnotesize Budget (\%)} \\
    \midrule
    {\footnotesize Benchmark} & {\footnotesize 83.99\%} & {\footnotesize 44.72\%} & {\footnotesize 44.72\%}\\
    {\footnotesize Baseline} & {\footnotesize 83.68\%} & {\footnotesize 38.77\%} & {\footnotesize 38.77\%}\\
    {\footnotesize Hierarchy} & {\footnotesize 85.37\%} & {\footnotesize 45.46\%} & {\footnotesize 59.11\%}\\
    \bottomrule
  \end{tabular}
\caption{\small For the CIFAR-10 dataset and the invariances described in \S~\ref{sec:obtaining}, observe that adversarial accuracy, denoted Adv (\%), increases in the hierarchical classifier compared to the baseline and the benchmark published in YOPO~\cite{yopo}.
The legends are (a) {\bf Natural (\%)}: natural accuracy, (b) {\bf Adv (\%):} adversarial accuracy under a PGD attack where the adversary attacks the root and both leaf classifiers, and (c) {\bf Budget (\%):} restricted adversarial accuracy under PGD where the adversary is only able to attack one classifier chosen from the hierarchy (in our example, the leaf classifier for the living objects equivalence class).}
\label{table:cifar_adv}
\end{center}
\vspace{-5mm}
\end{table}


\noindent{\bf CIFAR-10:} With our hierarchical classification scheme, we achieve an increase in adversarial accuracy of 6.69 percentage points compared to the baseline classifier. To measure adversarial accuracy\footnote{Adversarial accuracy for our hierarchy is defined as $\frac{\sum_i x_i}{X}$, where $x_i$ is the number of correct predictions of leaf classifier $i$ and $X$ is the total number of samples in the test set.}, we consider the following two white-box attack scenarios:
\begin{enumerate}
    \item {\em Worst-Case}: An unrestricted adversary with access to each of the models in the hierarchy.
    \item {\em Best-Case}: A restricted adversary which is limited to attacking only one of the classifiers in the hierarchy.
\end{enumerate}
To simulate the worst-case adversary, we begin by attacking the root classifier (with $\ell_\infty$-norm perturbed adversarial examples). For those inputs whose perturbations did not cause misclassification at the root, we generate adversarial examples for the leaves, as sketched below. While this attack is stronger (the adversary has multiple chances to create an adversarial example), it is also computationally more expensive. In the case of a budgeted adversary, the attacker attacks the classifier which results in the largest drop in accuracy; in our evaluation, this happened to be the leaf classifier for living objects. Observe that in all scenarios described, the hierarchy produces gains (refer Table~\ref{table:cifar_adv}). We compute the robustness certificates for these networks (by tweaking some components of the training process) and report the results in Appendix~\ref{app:cifar}. We also observe an increase in robustness as measured by the certificate.
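A sketch of the worst-case evaluation loop, reusing the \texttt{pgd\_linf} sketch from \S~\ref{sec:threat} and assuming hypothetical \texttt{root} and \texttt{leaves} models (the latter a map from equivalence class to leaf) and an \texttt{eq\_class\_of} label lookup:

\begin{verbatim}
import torch

def hierarchy_robust(root, leaves, x, y, eq_class_of):
    """Worst-case adversary: attack the root first; if the invariance survives,
    attack the corresponding leaf. Returns True iff both attacks fail."""
    k = eq_class_of(y)                                 # ground-truth equiv. class
    x_adv = pgd_linf(root, x, torch.tensor([k]))       # attack the (robust) root
    if root(x_adv).argmax(dim=1).item() != k:
        return False                                   # invariance broken at root
    x_adv = pgd_linf(leaves[k], x, torch.tensor([y]))  # attack within the class
    return leaves[k](x_adv).argmax(dim=1).item() == y
\end{verbatim}

Adversarial accuracy is then the fraction of test points for which \texttt{hierarchy\_robust} returns \texttt{True}.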
\section{A General Framework}
\label{sec:general}

In this section, we generalize the construction of hierarchical DNN classifiers with increased robustness (\S~\ref{sec:hier_construction}). We show how robustness can be enhanced by increasing the depth of the hierarchy, with many invariances being enforced (\S~\ref{sec:robustness_analysis}), and we describe the guarantees this provides (\S~\ref{sec:robustness_trade-off}).

\subsection{Components of a Hierarchical Classifier}
\label{sec:hier_construction}


\begin{figure}
  \centering
  \includegraphics[width=\linewidth]{figures/high-level.pdf}
  \caption{\small High-level description of the hierarchical classifier. The thin arrows highlight data flows while thick arrows indicate decision paths. The original set of labels is $\{1, \ldots, m\}$. Each intermediate classifier splits the label set further. Each leaf classifier predicts within a reduced set of labels.
For example, the left-most classifier assigns each label within $\{1, \ldots, i\}$ a probability value while assigning the other labels $\{i+1, \ldots, m\}$ a probability of 0.}
  \label{fig:high-level}
  \vspace{-5mm}
\end{figure}

The hierarchical classifier is a sequence of classifiers arranged in a hierarchy, where each classifier at a level of the hierarchy operates over a subset of the feature space. Observe that we make no assumptions about the nature of these feature subsets, or about the nature of the different classifiers in the hierarchy. Just like a conventional flat classifier, the hierarchical classifier is a function $F: \mathcal{X} \rightarrow \mathcal{Y}$. However, all levels other than the leaf classifiers serve to preserve the invariances; only the leaf classifiers predict labels. Figure~\ref{fig:high-level} shows the high-level structure of the hierarchical classifier, including the input features, classifiers, and output vectors. We now describe these components.


\vspace{1mm}
\noindent{\em 1. Intermediate Classifiers:} As explained earlier, the objective of the intermediary classifiers is to ensure that the invariances are preserved. To do so, intermediary classifiers either (a) are robustly trained~\cite{madry-iclr2018,cohen2019certified} such that they can forward the input to the right equivalence class induced by the invariances, or (b) accept robust features (as defined in \S~\ref{sec:invariances}) as inputs to ensure such forwarding. Since the classifiers are either robust themselves or operate on robust inputs, they are hard to attack. Thus, an intermediate classifier is a function $F_{intermediate}: \mathcal{X} \rightarrow \mathcal{K}$, where $\mathcal{K}$ indexes the equivalence classes in the label space $\mathcal{Y}$, \textit{i.e.}\@\xspace $\mathcal{K}=\{i: \mathcal{Y}_i \subset \mathcal{Y}\}$. We do not assume that our hierarchy is balanced, and we make no assumptions about the number or type (linear vs. non-linear models) of such intermediary classifiers\footnote{There is a correspondence between invariances and intermediate classifiers; each intermediate classifier is required to enforce a particular invariance. However, this mapping is not necessarily bijective.} (refer Figure~\ref{fig:high-level}). It is important to note that the ordering of these intermediary classifiers has a direct impact on accuracy, but not on robustness (which depends only on the label equivalences, which in turn depend on the invariances inducing them); independent of the ordering of these intermediary classifiers, the subset of the label space in each leaf node is the same, \textit{i.e.}\@\xspace the ordering of the intermediary classifiers commutes. Since the robustness radius is a function of the labels predicted by the leaf, the ordering does not impact the robustness calculation. Determining the exact ordering that maximizes accuracy is left as future work.

\vspace{1mm}
\noindent{\em 2. Leaf Classifiers:} A leaf classifier makes the final classification. More formally, for each equivalence class $i \in \mathcal{K}$, there exists a leaf classifier $F_{i}: \mathcal{X} \rightarrow \mathcal{Y}_i$. Observe that intermediate classifiers predict the equivalence classes, while leaf classifiers predict {\em labels} within an equivalence class. To train such a classifier, only the samples with labels in $\mathcal{Y}_i$ are needed. Thus, the overall inference procedure (from root to leaf) is very similar to that of a decision tree; each intermediate classifier chooses the next classifier to be invoked until the inference reaches a leaf classifier, and only one leaf classifier is invoked for any given input (see the sketch below).
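A minimal sketch of this inference procedure, assuming node classifiers that expose a hypothetical \texttt{predict} interface (an intermediate node's classifier returns a child index; a leaf's returns the final label):

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Node:
    classifier: object          # intermediate: predicts a child index; leaf: a label
    children: list = field(default_factory=list)  # empty list => leaf node

def hierarchical_predict(x, node: Node):
    """Walk from the root to a leaf: each intermediate classifier routes the input
    to one equivalence class; the single reached leaf predicts the final label."""
    while node.children:                  # intermediate node: enforce the invariance
        k = node.classifier.predict(x)    # index of the chosen equivalence class
        node = node.children[k]
    return node.classifier.predict(x)     # leaf: predict within the reduced label set
\end{verbatim}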
As the sketch above suggests, the overall inference procedure (from root to leaf) is very similar to that of a decision tree; each intermediate classifier chooses the next classifier to invoke until the inference reaches a leaf classifier. For any given input, exactly one leaf classifier is invoked. Intermediate classifiers are trained to be robust so that they can correctly predict between different equivalence classes. Leaf classifiers are also trained in a robust manner so that they can correctly predict {\em within} a particular equivalence class. However, training a different leaf classifier for each equivalence class is a computationally expensive procedure. While the hierarchical classification paradigm makes no assumptions about the exact nature of the classifier used at the leaf, we make certain assumptions for our evaluation. We assume that all leaf classifiers have the same model family (\textit{e.g.,}\@\xspace DNN). Thus, we can utilize two strategies -- {\em retraining} and {\em renormalization} -- to obtain leaf classifiers. In \S~\ref{retrainvsrenormalize}, we discuss both these approaches in detail, as well as their pros and cons.\n\n\n\subsection{Robustness Analysis}\n\label{sec:robustness_analysis}\n\nThe hierarchical classifier forces the attacker into an equivalence class of labels and limits its {\em targeted attack capability}; an attacker cannot move the input outside an equivalence class. The leaf classifier, predicting within a reduced label set, improves the robustness certificate by making the classifier stable within a larger ball around the input, limiting the attacker's capability within the equivalence class. This robustness property is subtle; it arises from the observation that reducing the labels can potentially widen the margin between the best prediction and the runner-up prediction. Below, we show that this property holds for any general classifier. \n\nLet $\alpha: \ensuremath{\mathbb{R}}^n \rightarrow \Delta (\mathcal{Y})$, where $\Delta(\mathcal{Y})$ is the set of all distributions over the labels $\mathcal{Y}$. For ease of notation, $\alpha (x)_l$ (for $l \in {\mathcal Y}$) denotes the\nprobability corresponding to $l$ in $\alpha (x)$. Fix $c \in \mathcal{Y}$, and for $L \subseteq \mathcal{Y}$, we define $H_{c,\alpha} (x,L)$ as follows:\n\n\[\n H_{c,\alpha} (x,L) = \alpha (x)_c - \max_{l \in L \; \wedge \; l \not= c} \alpha(x)_l\n\]\n\n\begin{figure}\n \centering\n \includegraphics[width=0.8\linewidth]{figures\/box_plot.pdf}\n \caption{\small Box plot containing the robustness radius as a function of the size of label sets in CIFAR-10, using the randomized smoothing approach of Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified}. We generate all label subsets of a particular size, and plot the robustness certificate values. Observe that as the subset size decreases, the mean robustness radius increases.}\n \label{fig:kolter-toy-example}\n \vspace{-5mm}\n\end{figure}\n\nIt is easy to see that if $L_1 \subseteq L_2$, then\n$H_{c,\alpha} (x,L_2) \leq H_{c,\alpha} (x,L_1)$, \textit{i.e.}\@\xspace $H_{c,\alpha}$ is anti-monotonic in the second parameter. Recall that $H_{c,\alpha}$ is similar to ``hinge loss''. If we instantiate $\alpha$ by the output of the softmax layer and use the argument of Hein \textit{et al.}\@\xspace~\cite{HeinA17} for any classifier, we can immediately see that the robustness radius increases as the set of possible labels is decreased.\n\n
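As a toy numeric illustration of this anti-monotonicity (the probability values below are fabricated purely for illustration), restricting the label set can only remove rivals of the top class and hence widen the margin:\n\n\begin{verbatim}\n# Toy check: H_{c,alpha}(x, L) grows as the label set L shrinks.\nimport numpy as np\n\nalpha_x = np.array([0.40, 0.35, 0.15, 0.10])  # softmax over labels 0..3\nc = 0                                         # top class\n\ndef H(alpha, c, L):\n    return alpha[c] - max(alpha[l] for l in L if l != c)\n\nprint(H(alpha_x, c, [0, 1, 2, 3]))  # 0.05; runner-up is label 1\nprint(H(alpha_x, c, [0, 2, 3]))     # 0.25; label 1 ruled out by invariance\n\end{verbatim}\n\n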
A similar argument can be used for the smoothing approaches~\cite{cohen2019certified,lecuyer2018certified}. For example, Figure~\ref{fig:kolter-toy-example} shows the robustness radius for label subsets of different sizes from CIFAR-10.\nThe robustness radius is computed using the randomized smoothing approach of Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified}. It is evident from Figure~\ref{fig:kolter-toy-example} that as the subset size decreases, the average robustness certificate per sample increases.\n\n\subsection{Robustness Guarantees}\n\label{sec:robustness_trade-off}\n\n\nThe robustness guarantees of the entire hierarchical classifier depend on each individual classifier (the intermediate classifiers and the leaf classifiers, collectively called internal classifiers). Given a set of internal classifiers, the attacker needs to attack only one of them to change the classification output in the untargeted attack scenario (or attack a specific set of internal classifiers in the targeted attack scenario). Then, the robustness guarantee of the hierarchical classifier is the minimum of the guarantees of its constituent classifiers. \n\nTo see why the robustness guarantee of the hierarchical classifier is the minimum of the guarantees of the composing classifiers, consider the simple case of three classifiers $f^1$, $f^2$, and $f^3$, which form a larger classifier $F$. The hierarchy is such that $f^1$ is a binary classifier deciding between passing the input to $f^2$ or $f^3$, which are the leaf classifiers. A white-box adversary aims to attack the larger classifier $F$ as usual: \n \begin{equation}\n \min_{\delta} \|\delta\|_p \, \text{ s.t. } \, \argmax F(x+\delta) \neq \argmax F(x)\n\end{equation}\n\nUsing the internal knowledge of the classifier, the adversary's objective can be restated as: \n \begin{equation}\n \begin{gathered}\n\min_{\delta} \|\delta\|_p \, \text{ s.t. }\\\n\argmax f^1(x+\delta) \neq \argmax f^1(x) \\\n\text{\bf or } (\argmax f^1(x+\delta) = \argmax f^1(x) \\\n\wedge \argmax f^2(x+\delta) \neq \argmax F(x)) \\\n\text{\bf or } (\argmax f^1(x+\delta) = \argmax f^1(x) \\\n\wedge \argmax f^3(x+\delta) \neq \argmax F(x))\n\end{gathered}\n\end{equation}\n\nSince only one of the constraints has to be satisfied, the problem can be broken down into smaller subproblems: \n\[\min \|\delta\|_p = \min\left( \|\delta_1\|_p, \|\delta_2\|_p, \|\delta_3\|_p \right),\] where:\n \begin{equation}\n \begin{gathered}\n \|\delta_1\|_p = \min_{\delta} \|\delta\|_p \, \text{ s.t. } \argmax f^1(x+\delta) \neq \argmax f^1(x) \\\n\|\delta_2\|_p = \min_{\delta} \|\delta\|_p \, \text{ s.t. } (\argmax f^1(x+\delta) = \argmax f^1(x) \\\n\wedge \argmax f^2(x+\delta) \neq \argmax F(x)) \\\n\|\delta_3\|_p = \min_{\delta} \|\delta\|_p \, \text{ s.t. } (\argmax f^1(x+\delta) = \argmax f^1(x) \\\n\wedge \argmax f^3(x+\delta) \neq \argmax F(x))\n\end{gathered}\n\end{equation}\n\nWe can lower bound $\|\delta_2\|_p$ and $\|\delta_3\|_p$ by solving the less constrained problems:\n \begin{equation}\n \begin{gathered}\n\|\delta_2\|_p \geq \|\delta'_2\|_p = \min_{\delta} \|\delta\|_p \, \text{ s.t. } \\\n\argmax f^2(x+\delta) \neq \argmax F(x) \\\n\|\delta_3\|_p \geq \|\delta'_3\|_p = \min_{\delta} \|\delta\|_p \, \text{ s.t. 
} \\\n\argmax f^3(x+\delta) \neq \argmax F(x).\n\end{gathered}\n\end{equation}\n\nFinally, the lower bound of the perturbation needed to attack $F$ is the minimum of the perturbations needed to attack each network individually. In particular, $\min \|\delta\|_p \geq \min\left( \|\delta_1\|_p, \|\delta'_2\|_p, \|\delta'_3\|_p \right)$. This example generalizes to multiple and non-binary intermediate classifiers. \n\n\section{Case Studies}\n\label{expts}\n\n\nIn \S~\ref{sec:insights}, we witnessed improvements in robustness in a toy setting. In this section, we investigate our approach in more realistic settings. Specifically, we wish to obtain answers to the following questions:\n\n\begin{enumerate}\n \item Is the proposed approach valid (and useful) only for tasks pertaining to vision?\n \item Does the choice of implementation of the root and intermediary classifiers (either by using robust features, or training the classifier to be robust) impact our approach in practice?\n \item Does the number of invariances used impact the gains in robustness?\n\end{enumerate}\n\nTo answer these questions, we implement a hierarchical classifier for classification tasks in two domains: audio and vision. Traditionally, both tasks use CNNs; we do not deviate from this approach, and use CNNs for our leaf classifiers as well. Our experimental ecosystem is detailed in Appendix~\ref{app:setup}. We measure robustness and certified accuracy as defined by Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified}. Through our evaluation, we show that:\n\begin{enumerate}\n\item Our approach is agnostic to the domain of classification. For both audio and vision tasks, the hierarchical approach shows gains in robustness (through an improvement in certified radius) and accuracy, suggesting that certified accuracy and robustness may no longer need to be at odds.\n\item The exact choice of implementation of the root (and, in general, the intermediate classifiers) bears no impact on the correctness of the paradigm. In \S~\ref{casestudy1}, we implement the root classifier as a regular DNN trained to operate on robust features (obtained from a different input modality), and in \S~\ref{casestudy2}, we implement the root classifier as a smoothed classifier~\cite{cohen2019certified}. Both approaches result in a hierarchical classifier with increased robustness and certified accuracy.\n\item By adding one additional invariant (location), we observe that there are significant gains in robustness for the vision task (refer \S~\ref{sec:morerobustfeatures}).\n\end{enumerate}\n\n\n\n\n\subsection{Road Sign Classification}\n\label{casestudy1}\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=\linewidth]{figures\/road-signs.pdf}\n \caption{\small The hierarchy over the road signs from the GTSRB dataset.}\n \label{fig:road-sign-setup}\n\end{figure}\n\nFor road sign classification, we wish to preserve the shape invariance. By doing so, we are able to partition the label space into equivalence classes based on shape (circular, triangular, octagonal, inverse triangular, or rectangular). We first classify road signs based on their shape at the root level (using a classifier trained using the ResNet-20 architecture~\cite{DBLP:journals\/corr\/HeZRS15}). 
Within each equivalence class \textit{i.e.}\@\xspace per-shape, we perform classification using a smoothed classifier\footnote{Classifier obtained after using the smoothing approach proposed by Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified}.} (which also has the ResNet-20 architecture) to obtain the exact road sign. Note that we set the smoothing noise parameter $\sigma$ to different values, \textit{i.e.}\@\xspace $\sigma=0.25, 0.5, 1$, to better understand the trade-offs between robustness and certified accuracy. Figure~\ref{fig:road-sign-setup} contains a visual summary of our approach. \n\nIn this particular case study, the hierarchical classification encodes various priors about the classification task, obtained from domain knowledge. To ensure that the invariances (\textit{i.e.}\@\xspace shape) are preserved, we use {\em robust input features} that are obtained from a different sensor. {\em Why shape?} We wished to showcase how domain expertise and a human-understandable invariant improve the robustness certificate. Shape was a natural candidate for such an invariant. \n\n\n\subsubsection{Datasets}\n\nFor our experiments, we use two datasets: the first is the German Traffic Sign Recognition Benchmark (GTSRB)~\cite{Stallkamp2012} which contains 51,840 cropped images of German road signs belonging to 43 classes; Figure~\ref{fig:road-sign-setup} shows these classes. The second is the KITTI dataset~\cite{Geiger2013IJRR} which contains time-stamped location measurements, high-resolution images, and LiDAR\xspace scans over a five-day recording period from an instrumented vehicle on the roads of Karlsruhe, Germany. This totalled 12,919 images. We post-processed this dataset to (a) extract only cropped images of the road signs included in the GTSRB dataset, and (b) extract their corresponding LiDAR\xspace depth maps. To do so, we spent approximately 40 hours manually cropping and annotating every GTSRB road sign found in the KITTI dataset and cropped the corresponding point clouds using the image coordinates. Thus, we obtained 3,138 cropped images, their corresponding LiDAR\xspace point clouds, and their labels (Figure~\ref{fig:pc_scene}). \n\nWe train the root classifier using these LiDAR\xspace point clouds of road signs to predict the shape of the corresponding road sign. The root classifier achieves 98.01\% test accuracy. We train each leaf classifier (within an equivalence class) with the road signs belonging to that particular equivalence class. Note that the entire classifier cannot be trained using the point clouds as they lack information required to predict the exact road sign label. For example, the shape of two \texttt{SpeedLimit} signs can be easily extracted from the corresponding point clouds, but other features required for obtaining the correct label (such as raw pixel intensities) are missing. \n\n\subsubsection{Attacking the Root Classifier}\n\label{improvement}\n\n\nHuman-imperceptible perturbations, such as those considered in our threat model, generated for one input modality (such as the pixels obtained from the camera) do not translate over to the other (such as point clouds obtained from the LiDAR\xspace). Thus, adversarial perturbations that are $p$-norm bounded in the pixel space, or stickers that deface the road sign, do not impact the point clouds corresponding to the road sign. Such attacks (in the pixel space) are the state-of-the-art in the literature. 
We verify our claim by conducting experiments with road signs made of different materials, and a \texttt{Velodyne Puck} LiDAR\xspace~\cite{velodynepuck}. These experiments suggest that even {\em large} perturbations (such as stickers) made on the road sign do not impact the road sign's point cloud. We display the results from our experiments in Figure~\ref{fig:lidar-teaser}. Observe that our experiments contained far more drastic perturbations than considered in the threat model. Thus, we conclude that the point cloud features are robust to perturbations made to the pixel features. Note that while the LiDAR\xspace is susceptible to active attacks~\cite{Cao:2019:ASA:3319535.3339815}, such an attack is difficult to mount and is beyond the scope of our threat model (refer \S~\ref{sec:threat}). \n\n\begin{figure}[ht]\n \centering\n \includegraphics[width=\linewidth]{figures\/teaser-pic.pdf}\n \caption{\small The LiDAR\xspace depth measurements are immune to physical perturbations, including changes in lighting, and any stickers placed on the US road sign.}\n \label{fig:lidar-teaser}\n\end{figure}\n\n\vspace{1mm}\n\noindent{\em Passive attacks on point clouds:} Recent works show that $p$-norm bounded adversarial attacks exist for point clouds as well~\cite{point_perturb_generate}. We carry out experiments to understand if these perturbations are human (im)perceptible. We first map the coordinates of each cropped image from the KITTI dataset to its corresponding 3D point cloud, and crop out only the 3D points corresponding to the road sign. By doing so, and with data augmentation methods, we obtain 3,642 cropped point clouds of road signs. We then train a PointNet~\cite{pointnet} shape classifier ($\sim$98\% test accuracy) using a subset of the point clouds to create a target model. \n\n\begin{figure}\n \centering\n \includegraphics[width=\linewidth]{figures\/scene_sign.png}\n \caption{\small Left: A point cloud of a scene in a driving environment. Right: The cropped point cloud of the sign outlined in red in the scene.}\n \label{fig:pc_scene}\n\end{figure}\n\nWe proceed to generate adversarial examples for PointNet using the approach described in~\cite{point_perturb_generate}. From the results shown in Table~\ref{table:pc_adv}, we see that generating perturbations minimizing the $\ell_2$ distance in the point cloud space is indeed an effective attack. Explicitly attacking the point clouds generated by the LiDAR\xspace sensor results in (potentially) perceptible perturbations, despite having a small $p$-norm. To the best of our knowledge, there exists no prior work that jointly attacks both sensors\footnote{This approach is commonly referred to as sensor fusion~\cite{gustafsson2010statistical}.} whilst producing small $p$-norm perturbations. We also deploy the clustering attack derived by Xiang \textit{et al.}\@\xspace~\cite{point_perturb_generate}; the attack generates a cluster of points to resemble a small ball attached to the original object. 
However, the clustering attack fails to successfully generate adversarial examples for the inputs in a majority of the target classes.\n\nThus, we conclude with the observation that while it is possible, in theory, to generate attacks against the robust feature, these attacks are no longer imperceptible\footnote{Obtained by analyzing the datasheet~\cite{velodynepuck}.}; as we state earlier, such attacks are beyond the scope of our threat model.\n\n\n\n\n\begin{table}\n\small\n\begin{center}\n \begin{tabular}{p{2.3cm} p{1.2cm} p{1.2cm} p{1.2cm} p{1.2cm}}\n \toprule\n \textbf{Output} & \textbf{Circle} & \textbf{Diamond} & \textbf{Triangle} & \textbf{Inverted Triangle}\\\n \midrule\n $||\bar{\delta}||_2$ & 1.032 & 0.636 & 0.386 & 1.059 \\\n $\Delta$ Distance & 3.096 cm & 1.908 cm & 1.158 cm & 3.177 cm \\\n Attack accuracy & 91.33\% & 32.00\% & 98.00\% & 89.33\% \\\n \n \n \bottomrule\n \end{tabular}\n\caption{\small Results of the attacks described in \S~\ref{improvement}. Legend: (a) $||\bar{\delta}||_2$: Average $\ell_2$ perturbation distance between original and adversarial points, (b) {\bf $\Delta$ Distance:} Coarse upper-bound of changes in depth caused if the perturbation is realized, and (c) \textbf{Attack accuracy:} Percentage of successfully generated perturbation attacks. Observe that some realizations are very perceptible.}\n\label{table:pc_adv}\n\end{center}\n\vspace*{-5mm}\n\end{table}\n\n\n\n\subsubsection{Retraining vs. Renormalization} \n\label{retrainvsrenormalize}\n\nGiven an accurate and robust root classifier, each leaf classifier only accepts as input road signs belonging to a particular shape. As a running example, let us assume that the leaf classifier under discussion classifies {\em circular} road signs. To obtain such a leaf classifier, one could (a) utilize a classifier trained on all labels (henceforth referred to as the baseline classifier), discard the probability estimates of the labels not of interest (\textit{e.g.,}\@\xspace labels of road signs which are not circular), and renormalize the remaining probability estimates; we refer to such an approach as the {\em renormalization} approach, or (b) {\em retrain} a leaf classifier from scratch based on the labels belonging to that particular equivalence class; \textit{e.g.,}\@\xspace retrain a classifier for circular road signs in particular. Appendices~\ref{app:circles},~\ref{app:triangles}, and~\ref{app:individual_circles} contain results from both these approaches. We observe that both these approaches increase the robustness certificate; while the renormalization approach can only increase the robustness certificate (by design, we discard the probability estimates of labels we are not interested in and {\em renormalize} the remaining, thereby widening the gap), the retraining approach can potentially decrease the robustness certificate for some inputs.\n\nRecall that the value of the robustness certificate is directly proportional to the margin $\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})$, which is dependent on the probability $p_A$ of the top class $c_A$, and probability $p_B$ of the runner-up class $c_B$.\n\n
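For concreteness, the following is a minimal sketch of the renormalization computation; the probability values are illustrative, \texttt{norm.ppf} is the standard Gaussian inverse CDF $\Phi^{-1}$, and, for simplicity, the sketch uses point estimates in place of the bounds $\underline{p_A}$ and $\overline{p_B}$ of Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified}.\n\n\begin{verbatim}\n# Sketch: renormalize baseline probabilities over an equivalence\n# class; compare (sigma/2) * (ppf(p_A) - ppf(p_B)) before and after.\nimport numpy as np\nfrom scipy.stats import norm\n\ndef renormalize(probs, keep):\n    kept = np.where(np.isin(np.arange(len(probs)), keep), probs, 0.0)\n    return kept / kept.sum()\n\ndef radius(probs, sigma):\n    p_b, p_a = np.sort(probs)[-2:]  # runner-up and top estimates\n    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))\n\nprobs = np.array([0.45, 0.30, 0.15, 0.10])          # baseline estimates\nprint(radius(probs, 0.25))                          # ~0.05, full label set\nprint(radius(renormalize(probs, [0, 2, 3]), 0.25))  # ~0.14, margin widens\n\end{verbatim}\n\n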
Two important observations guide our analysis: (a) note that $c_B$ can either belong to the same equivalence class as $c_A$ (denoted $c_A \approx c_B$), or belong to a different equivalence class (denoted $c_A \napprox c_B$), and (b) the probability estimates for each of the leaf classifiers (which predict only a subset of the labels), before renormalization, are the same as those in the baseline classifier (by construction). On reducing the label space, if $c_A \approx c_B$, renormalization will further widen the margin, and increase the robustness certificate. If $c_A \napprox c_B$, then there must exist a runner-up candidate $c_{B'} \approx c_A$. Its corresponding runner-up estimate $\overline{p_{B'}} \leq \overline{p_B}$ (or it would have been the runner-up in the first place). Consequently, the margin $\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_{B'}}) \geq \Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})$. \n\n\nIn the retraining scenario, however, we do not have knowledge about the {\em ordering} of the probability estimates in the retrained classifier in comparison to the baseline classifier. It is possible that for a retrained classifier and for a given input, while the correct class' estimate remains $\underline{p_A}$, the new runner-up's estimate $\overline{p_{B'}}$ can be greater than, less than, or equal to the original (baseline) estimate $\overline{p_B}$. Thus, the new robustness certificate can be lower than, greater than, or the same as in the baseline scenario. This problem is more fundamental; since robustness is a local property relying on the local Lipschitz constant and the structure of the decision spaces, partial or incomplete knowledge of any of these can result in spurious selection of the runner-up label. An added benefit of the renormalization approach is that it is computationally efficient; one needs to train one baseline classifier (independent of how the label space is partitioned) as opposed to several leaf classifiers (depending on the nature of the partition).\n\n\n\vspace{1mm}\n\noindent{\bf Results:} In Tables~\ref{table:case_study_11} and~\ref{table:case_study_12}, we present the results of our evaluation. We report the mean and standard deviation of the robustness certificate obtained for different values of $\sigma$. For each equivalence class, we use 3,000 inputs belonging to that particular equivalence class (\textit{e.g.,}\@\xspace we use 3,000 images of circular road signs to validate the improvements in Table~\ref{table:case_study_11}). We also report the certified accuracy for varying values of $\sigma$ (refer Appendix D in~\cite{cohen2019certified} for the detailed definition and formulation). In each table, the columns pertaining to the baseline contain the certificates (and corresponding certified accuracy) for only the labels belonging to a particular equivalence class (circles in Table~\ref{table:case_study_11} and triangles in Table~\ref{table:case_study_12}). We can see that by simply rearchitecting the classifier to enforce invariances (in this case, obtained using auxiliary inputs), we are able to improve the certified radius and certified accuracy\footnote{For brevity, we only report results for individual equivalence classes; the overall certified accuracy of both the baseline and our hierarchical approach is comparable.}. 
As stated earlier, more fine-grained results are presented in Appendices~\ref{app:triangles},~\ref{app:circles}, and~\ref{app:individual_circles}.\n\n\begin{table}\n\small\n\begin{center}\n \begin{tabular}{p{1.25cm}| p{1.4cm}| p{1.5cm}| p{1.25cm}| p{1.5cm}}\n \toprule\n \textbf{} & {\footnotesize Baseline CR} & {\footnotesize Hierarchy CR} & {\footnotesize Baseline CA} & {\footnotesize Hierarchy CA}\\\n \midrule\n $\sigma=0.25$ & 1.083 $\pm$ 0.579 & 1.090 $\pm$ 0.575 & 77.74\% & 79.25\%\\\n $\sigma=0.5$ & 1.620 $\pm$ 0.986 & 1.642 $\pm$ 1.009 & 56.54\% & 58.39\%\\\n $\sigma=1.0$ & 2.327 $\pm$ 1.454 & 2.400 $\pm$ 1.560 & 31.69\% & 35.47\%\\\n \bottomrule\n \end{tabular}\n\caption{\small For inputs that belong to the circular equivalence class, the hierarchical approach preserving invariances outperforms the baseline model.}\n\label{table:case_study_11}\n\end{center}\n\vspace*{-6mm}\n\end{table}\n\n\begin{table}\n\small\n\begin{center}\n \begin{tabular}{p{1.25cm} | p{1.4cm} | p{1.5cm} | p{1.25cm} | p{1.5cm}}\n \toprule\n \textbf{} & {\footnotesize Baseline CR} & {\footnotesize Hierarchy CR} & {\footnotesize Baseline CA} & {\footnotesize Hierarchy CA}\\\n \midrule\n $\sigma=0.25$ & 1.107 $\pm$ 0.551 & 1.127 $\pm$ 0.547 & 71.60\% & 76.45\% \\\n $\sigma=0.5$ & 1.641 $\pm$ 0.919 & 1.659 $\pm$ 0.899 & 50.67\% & 58.94\% \\\n $\sigma=1.0$ & 2.223 $\pm$ 1.259 & 2.233 $\pm$ 1.172 & 20.41\% & 32.78\% \\\n \bottomrule\n \end{tabular}\n\caption{\small For inputs that belong to the triangular equivalence class, the hierarchical approach preserving invariances outperforms the baseline model.}\n\label{table:case_study_12}\n\end{center}\n\vspace*{-6mm}\n\end{table}\n\n\subsubsection{Multiple Invariants}\n\label{sec:morerobustfeatures}\n\n\nAs discussed earlier, the hierarchical classification paradigm can support enforcing more than a single invariant. In this case study, for example, one can further partition the label space using location information. For example, a highway cannot have stop signs, and an intersection will not have a speed limit sign, etc. To validate this hypothesis, we performed a small-scale proof-of-concept experiment to further constrain the space of labels that is obtained by splitting on the shape invariance (\textit{i.e.}\@\xspace see if we can obtain a subset of the set of, say, circular labels). Using the location information from the KITTI dataset and local map data from OpenStreetMap~\cite{OSM}, we can, for particular locations, further constrain the space of circular road signs to just \texttt{SpeedLimit} signs. From Figure~\ref{fig:speedlimits}, we observe that increasing the number of invariances (to two: shape {\em and} location) increases the robustness certificate further without impacting accuracy.\n\n\begin{figure}[ht]\n \centering\n \includegraphics[width=0.55\linewidth]{figures\/speedlimits_plot.pdf}\n \caption{\small The percentage improvement of the robustness certificate for \texttt{SpeedLimit} signs utilizing two invariances: shape and location. These results are a significant improvement on those of Table~\ref{table:case_study_11}. 
Additional invariances do not impact the accuracy in this case study.}\n \label{fig:speedlimits}\n\end{figure}\n\n\subsection{Speaker Classification}\n\label{casestudy2}\n\n\nThus far, our results suggest that hierarchical classification and invariances improve robustness in the {\em vision domain}. We carry out another case study in the audio domain to verify that our approach is general. To this end, we use speaker identification as the next case study. Conventional speaker identification systems utilize a siamese architecture~\cite{chen2011extracting}; speech samples are first converted to discrete features, and these are then converted to an embedding. The distance between an unknown embedding (which is to be labeled) and each embedding in a known support set is computed. The label of the support-set embedding closest to the unknown embedding is returned as the label for the unknown sample. Observe that such an architecture is not conducive to computing certificates using randomized smoothing; the output returned by the network is a label, and there is no associated probability estimate for this prediction. Thus, we implement speaker identification using a conventional CNN comprising 3 convolutional layers followed by a fully connected layer and a softmax layer. For the remainder of this section, we assume that the inputs are discrete features obtained after processing the voice samples.\n\nAt the root, we classify inputs based on the gender of the speaker. Since gender is binary, we utilize a smoothed DNN (with the same architecture as above) for binary classification. Unlike the previous case study where the root is robust to perturbation as its inputs are robust, the robustness guarantees for this case study come from the design of the classifier (through randomized smoothing~\cite{cohen2019certified}). While we could utilize another modality to obtain speaker gender, we use a robust classifier to highlight the flexibility of our approach. As before, the leaves are smoothed classifiers. \n\n\subsubsection{Dataset}\n\nFor our experiments, we use the LibriSpeech ASR corpus~\cite{panayotov2015librispeech} which comprises over 1,000 hours of recordings of both male and female speakers. In particular, we use 105,894 audio samples corresponding to 1,172 speakers (606 male and 566 female). These audio samples were first downsampled (to 4 kHz) and then cut into lengths of 3 seconds each.\n\n\noindent{\bf Results:} As in \S~\ref{retrainvsrenormalize}, we renormalized the outputs of the baseline classifier (trained on inputs from both male and female speakers) to obtain the estimates required for the robustness certificate. Unlike the vision scenario, increasing the standard deviation of the noise added while training the smoothed classifier resulted in poorly performing classifiers (with very low test accuracy and certified accuracy). Thus, compared to the case study in \S~\ref{casestudy1}, the values of $\sigma$ are considerably lower. For each equivalence class, we evaluate our approach with 3,000 samples belonging to that class. We observe a considerable increase in the robustness certificate: 1.92$\times$ for male speakers (refer Table~\ref{table:case_study_22}), and 2.07$\times$ for female speakers (refer Table~\ref{table:case_study_21}). As before, we observe an increase in the certified accuracy as well. 
This suggests that the choice of the invariant is very important, and can enable significant gains in robustness radius without degrading classification accuracy. In the sections that follow, we discuss this point in greater detail.\n\n\begin{table}\n\small\n\begin{center}\n \begin{tabular}{p{1.4cm}| p{1.25cm}| p{1.4cm}| p{1.5cm}| p{1.25cm}}\n \toprule\n \textbf{} & \text{\footnotesize Baseline CR} & \text{\footnotesize Hierarchy CR} & \text{\footnotesize Baseline CA} & \text{\footnotesize Hierarchy CA}\\\n \midrule\n $\sigma=0.025$ & 0.1233 $\pm$ 0.0424 & 0.2554 $\pm$ 0.0429 & 88.80\% & 89.62\%\\\n $\sigma=0.05$ & 0.1855 $\pm$ 0.0907 & 0.3843 $\pm$ 0.0915 & 65.70\% & 66.98\%\\\n $\sigma=0.1$ & 0.2612 $\pm$ 0.1579 & 0.5688 $\pm$ 0.1576 & 24.79\% & 24.96\%\\\n \bottomrule\n \end{tabular}\n\caption{\small For inputs that belong to the female gender equivalence class, the hierarchical approach preserving invariances outperforms the baseline model.}\n\label{table:case_study_21}\n\end{center}\n\vspace*{-6mm}\n\end{table}\n\n\begin{table}\n\small\n\begin{center}\n \begin{tabular}{p{1.4cm}| p{1.25cm}| p{1.4cm}| p{1.5cm}| p{1.25cm}}\n \toprule\n \textbf{} & \text{\footnotesize Baseline CR} & \text{\footnotesize Hierarchy CR} & \text{\footnotesize Baseline CA} & \text{\footnotesize Hierarchy CA}\\\n \midrule\n $\sigma=0.025$ & 0.1351 $\pm$ 0.0381 & 0.2590 $\pm$ 0.0379 & 92.67\% & 92.65\%\\\n $\sigma=0.05$ & 0.2085 $\pm$ 0.0911 & 0.4014 $\pm$ 0.0908 & 76.58\% & 78.85\%\\\n $\sigma=0.1$ & 0.2883 $\pm$ 0.1563 & 0.5562 $\pm$ 0.1592 & 36.92\% & 36.40\%\\\n \bottomrule\n \end{tabular}\n\caption{\small For inputs that belong to the male gender equivalence class, the hierarchical approach preserving invariances outperforms the baseline model.}\n\label{table:case_study_22}\n\end{center}\n\vspace*{-6mm}\n\end{table}\n\n\n\section{Discussion}\n\label{sec:discussion}\n\n\n\subsection{Do invariances always exist?} \n\nRecent work~\cite{new_madry_2019} suggests that robust features are a by-product of adversarial training. However, extracting such robust features is computationally expensive. 
In our approach, we use a combination of domain knowledge and observations of intermediary representations of input features to extract invariances. However, it is unclear if these approaches work for {\em all} classification tasks. Beyond the approaches specified in our work, determining an algorithm to {\em efficiently} extract invariances\/stable properties pertaining to an arbitrary learning task remains an open problem.\n\nIt is also unclear if there is explicit benefit from having invariances that can be defined in a manner that is interpretable to humans. Interpretability is subjective and enforcing such subjectivity might be detrimental to learning (and generalization); such subjectivity might also induce issues pertaining to fairness.\n\n\subsection{How do we handle multiple invariances?}\n\nIn our evaluation thus far, we have discussed how a single invariance can enable robustness without a detriment to accuracy. In the presence of multiple such invariances, it is unclear how they might interact. To begin with, it is unclear if all invariances contribute equally towards robustness, or if some invariances are preferred to others. The same question exists in terms of the interaction between the invariances and accuracy. It is also unclear how one might combine multiple invariances to create a hierarchy with increased robustness in a meaningful manner \textit{i.e.}\@\xspace the ordering of invariances is an open problem. This stems from the fact that understanding the information gain from these invariances is a challenging proposition.\n\n\subsection{Can better classification architectures improve robustness further?} \n\nWe empirically validate the benefits of using a tree-like hierarchy to improve robustness. However, it is unclear if our approach of constructing the hierarchy is the {\em only} way. Indeed, there are other approaches that can be used to construct classification architectures, such as those discussed in works in the neural architecture search domain~\cite{zoph2016neural}. An interplay of these ideas (searching for architectures that explicitly improve robustness by enforcing invariances) makes for interesting future research. \n\n\subsection{Do invariances always help?}\n\nRecent research~\cite{DBLP:journals\/corr\/abs-1903-10484, jacobsen2018excessive} suggests that by default, DNNs are invariant to a wide range of task-related changes. Thus, while trying to enforce invariances which we believe may improve robustness, we could potentially introduce a new region of vulnerabilities. Understanding if specific types of invariances are beneficial or harmful, and where to draw the line, is an avenue for future work.\n\section{Related Work}\n\label{sec:related_work}\n\nResearchers have extensively studied the robustness of Machine Learning models through exploring new attack strategies and various defense mechanisms. These efforts are very well documented in the literature~\cite{DBLP:journals\/corr\/abs-1902-06705}. In this section, we only discuss work related to the different components of our classification pipeline. \n\n\n\paragraph{Hierarchical Classification}\n\nRecent research casts image classification as a visual recognition task~\cite{HD-CNN,category_structure,treepriors_transferlearning}. The common observation is that these recognition tasks introduce a hierarchy; enforcing a hierarchical structure further improves the accuracy. 
Similar to our approach, Yan \textit{et al.}\@\xspace~\cite{HD-CNN} propose HD-CNN, which classifies input images into coarse categories that are then passed to corresponding leaf classifiers for fine-grained labeling. They perform spectral clustering on the confusion matrix of a baseline classifier to identify the clusters of categories. This approach is optimized for natural accuracy and uses the image data at all levels of the hierarchy. In contrast, we employ robust features from different modalities to construct more robust classifiers. \n\nSrivastava \textit{et al.}\@\xspace~\cite{treepriors_transferlearning} show that leveraging the hierarchical structure can be very useful when there is limited access to inputs belonging to certain classes. They propose an iterative method which uses training data to optimize the model parameters and validation data to select the best tree starting from an initial pre-specified tree. This approach further motivates our tree-based hierarchy; in several settings, such as autonomous driving systems, a hierarchy is readily available (as displayed by our experiments with shape and location).\n\n\paragraph{Imperceptible Adversarial Examples in the Wild}\n\nExtensive research is aimed at generating digital adversarial examples, and defenses corresponding to $p$-norm bounded perturbations to the original inputs~\cite{goodfellow2014explaining,papernotattack,kurakin2016adversarial,madry-iclr2018}. However, these studies fail to provide robustness guarantees for attacks realizable in the physical world due to a variety of factors including view-point shifts, camera noise, domain adaptation, and other affine transformations.\n\nThe first results in this space were presented by Kurakin \textit{et al.}\@\xspace~\cite{kurakin2016adversarial}. The authors generate adversarial examples for an image, print them, and verify whether the printed versions remain adversarial. Sharif \textit{et al.}\@\xspace developed a physical attack approach~\cite{sharif2016accessorize, sharif2017adversarial} on face recognition systems using a printed pair of eyeglasses. Recent work with highway traffic signs demonstrates that both state-of-the-art machine learning classifiers, as well as detectors, are susceptible to real-world physical perturbations~\cite{roadsigns17,object-detector-attacks}. Athalye \textit{et al.}\@\xspace~\cite{AthalyeS17} provide an algorithm to generate 3D adversarial examples (with small $p$-norm), relying on various transformations (for different points-of-view).\n\n\paragraph{LiDAR\xspace Attacks}\n\nSimilar to our approach, Liu \textit{et al.}\@\xspace~\cite{pointcloud} adapt the attacks and defense schemes from the 2D regime to 3D point cloud inputs. They show that even simple defenses, such as outlier removal and salient-point removal, are effective in safeguarding point clouds. This observation further motivates our selection of point clouds as auxiliary inputs in the case study. However, Liu \textit{et al.}\@\xspace~\cite{pointcloud} do not physically realize the generated perturbations. Other approaches consider active adversarial attacks against the LiDAR\xspace modalities~\cite{active_lidar}, which can be expensive to launch. In this paper, we focus on passive attacks (on sensors) through object perturbations.\n\nXiang \textit{et al.}\@\xspace~\cite{point_perturb_generate} propose several algorithms to add adversarial perturbations to point clouds through generating new points or perturbing existing points. 
An attacker can generate an adversarial point cloud, but manifesting this point cloud in the physical world is a different challenge. There are several constraints that need to be accounted for, such as the LiDAR\xspace's vertical and horizontal resolution and the scene's 3D layout. Even then, an attacker would need to attack more than one modality to cause a misclassification. \n\n\paragraph{Robust Features}\n\nIlyas \textit{et al.}\@\xspace~\cite{new_madry_2019} and Tsipras \textit{et al.}\@\xspace~\cite{tsipras2018there} distinguish robust features from non-robust features to explain the trade-off between adversarial robustness and natural accuracy. While the authors show an improved trade-off between standard accuracy and robust accuracy, it is achieved at the computational cost of generating a large, robust dataset through adversarial training~\cite{new_madry_2019}. We circumvent this computational overhead by adopting invariants (and consequently robust features) imposed by the constraints in the physical world. \n\n\section{Conclusion}\n\label{sec:conclusion}\n\nIn this paper, we discuss how robust features realized through invariances (obtained through domain knowledge, or provided by real-world constraints), when imposed on a classification task, can be leveraged to significantly improve adversarial robustness without impacting accuracy. Better still, this is achieved at minimal computational overhead. Through a new hierarchical classification approach, we validate our proposal on real-world classification tasks -- road sign classification and speaker identification. We also show how some invariances can be used to safeguard road sign classification from physically realizable adversarial examples.\n\n\n\n\n\n\section{Appendix}\n\n\subsection{Experimental Setup}\n\label{app:setup}\n\nAll experiments were conducted on two servers. The first server has 264 GB memory, 8 NVIDIA GeForce RTX 2080 GPUs, and 48 CPU cores. The second has 128 GB memory, 2 NVIDIA TITAN Xp GPUs and 1 NVIDIA Quadro P6000 GPU, and 40 CPU cores.\n\n\subsection{Settings for Experiments in \S~\ref{sec:insights}}\n\label{app:insights}\n\nProjected Gradient Descent (PGD) is a white-box attack bounded by the perturbation size, $\epsilon$. The results derived in \S~\ref{sec:insights} were evaluated with PGD using 20 iterations and 40 iterations (PGD-20, PGD-40) under the $\ell_\infty$ norm~\cite{madry-iclr2018}. Since adversarial training is notoriously slow (as it attempts to solve a nested optimization), we utilize YOPO~\cite{yopo}, a technique which accelerates adversarial training by up to 4--5$\times$. This reduces the number of times the network is propagated while staying competitive with the results of typical adversarial training using PGD. We perform our adversarial training experiments using YOPO-5-3 so that only 5 full forward and backward propagations are made instead of the 20 with PGD-20. 
The same is done with YOPO-5-10 for approximating PGD-40.\n\n\begin{table}[ht]\n\small\n\begin{center}\n \begin{tabular}{p{1.4cm} p{1.4cm} p{0.4cm} p{0.4cm} p{0.4cm} p{0.5cm} p{0.5cm} p{0.4cm}}\n \toprule\n \text{\footnotesize Model} & \text{\footnotesize Architecture} & \text{n} & \textbf{$\sigma$} & \textbf{$\varepsilon$} & \textbf{$\kappa$} & \textbf{$\eta$} & \text{b}\\\n \midrule\n \n {\footnotesize YOPO-5-3} & {\footnotesize ResNet-34} & {\footnotesize 200} & {\footnotesize -} & {\footnotesize $\frac{8}{255}$} & {\footnotesize $5\mathrm{e}{-2}$} & {\footnotesize $2\mathrm{e}{-3}$} & {\footnotesize 128} \\\n {\footnotesize YOPO-5-10} & {\footnotesize Small CNN} & {\footnotesize 40} & {\footnotesize -} & {\footnotesize $0.47$} & {\footnotesize $5\mathrm{e}{-2}$} & {\footnotesize $2\mathrm{e}{-3}$} & {\footnotesize 128} \\\n \midrule\n {\footnotesize Smoothing} & {\footnotesize ResNet-110} & {\footnotesize 350} & {\footnotesize 0.25} & {\footnotesize -} & {\footnotesize $1\mathrm{e}{-2}$} & {\footnotesize $1\mathrm{e}{-2}$} & {\footnotesize 256} \\\n {\footnotesize Smoothing} & {\footnotesize ResNet-110} & {\footnotesize 350} & {\footnotesize 0.50} & {\footnotesize -} & {\footnotesize $1\mathrm{e}{-2}$} & {\footnotesize $1\mathrm{e}{-2}$} & {\footnotesize 256} \\\n {\footnotesize Smoothing} & {\footnotesize ResNet-110} & {\footnotesize 350} & {\footnotesize 1.00} & {\footnotesize -} & {\footnotesize $1\mathrm{e}{-2}$} & {\footnotesize $1\mathrm{e}{-2}$} & {\footnotesize 256} \\\n \bottomrule\n \end{tabular}\n\caption{\small The training parameters used for our YOPO and randomized smoothing models. YOPO-5-3 (with step size $\frac{2}{255}$) and smoothing models were used for obtaining results on the CIFAR-10 dataset. YOPO-5-10 (with step size $0.01$) models were used for obtaining results on the MNIST dataset. All of the parameters except for the number of labels were kept consistent when training our baseline, leaf, and root classifiers for our experiments in \S~\ref{sec:insights}. (a) n: number of samples, (b) $\sigma$: standard deviation of noise added for smoothing, (c) $\varepsilon$: maximum permissible noise added while adversarially training, (d) $\kappa$: weight decay, (e) $\eta$: learning rate, and (f) b: batch size.}\n\label{table:parameters}\n\end{center}\n\end{table}\n\n\begin{table}[ht]\n\small\n\begin{center}\n \begin{tabular}{p{1cm} p{1.5cm} p{1.5cm} p{1cm}}\n \toprule\n \text{\footnotesize Model} & \text{\footnotesize Natural (\%)} & \text{\footnotesize Adv (\%)} & \text{\footnotesize \# Classes}\\\n \midrule\n {\footnotesize Root} & {\footnotesize 98.51\%} & {\footnotesize 93.68\%} & {\footnotesize 2}\\\n {\footnotesize 4Class} & {\footnotesize 98.37\%} & {\footnotesize 86.95\%} & {\footnotesize 4}\\\n {\footnotesize 6Class} & {\footnotesize 98.52\%} & {\footnotesize 85.81\%} & {\footnotesize 6}\\\n {\footnotesize Baseline} & {\footnotesize 97.86\%} & {\footnotesize 82.35\%} & {\footnotesize 10}\\\n \bottomrule\n \end{tabular}\n\caption{\small The results on the test set for each YOPO-5-10 model used in our evaluation for MNIST. Using the parameters from Table~\ref{table:parameters}, we train a root classifier, 2 leaf classifiers, and 1 baseline classifier on the entire label space. 
Legend: (a) {\bf Natural (\%)}: natural accuracy, (b) {\bf Adv (\%)}: adversarial accuracy under a PGD-40 attack with step size $\sigma=0.01$ and budget $\epsilon=0.47$.}\n\label{table:mnist_acc}\n\end{center}\n\end{table}\n\nIn Table~\ref{table:cifar_acc} and Table~\ref{table:mnist_acc}, we report statistics for the experiments related to MNIST and CIFAR-10 in \S~\ref{sec:insights}. Specifically, the natural and adversarial accuracy of the different constituent classifiers are reported. Trained with the parameters from Table~\ref{table:parameters}, these classifiers were used in our end-to-end evaluation of our hierarchical classification scheme.\n\n\begin{table}[ht]\n\small\n\begin{center}\n \begin{tabular}{p{1.5cm} p{1.5cm} p{1.5cm} p{1cm}}\n \toprule\n \text{\footnotesize Model} & \text{\footnotesize Natural (\%)} & \text{\footnotesize Adv (\%)} & \text{\footnotesize \# Classes}\\\n \midrule\n {\footnotesize Root} & {\footnotesize 96.57\%} & {\footnotesize 84.25\%} & {\footnotesize 2}\\\n {\footnotesize Non-living} & {\footnotesize 93.48\%} & {\footnotesize 69.04\%} & {\footnotesize 4}\\\n {\footnotesize Living} & {\footnotesize 80.72\%} & {\footnotesize 40.78\%} & {\footnotesize 6}\\\n {\footnotesize Baseline} & {\footnotesize 83.68\%} & {\footnotesize 38.77\%} & {\footnotesize 10}\\\n \bottomrule\n \end{tabular}\n\caption{\small The results on the test set for each YOPO-5-3 model used in our evaluation for CIFAR-10. Using the parameters from Table~\ref{table:parameters}, we train a root classifier, 2 leaf classifiers, and 1 baseline classifier on the entire label space. Legend: (a) {\bf Natural (\%)}: natural accuracy, (b) {\bf Adv (\%)}: adversarial accuracy under a PGD-20 attack with step size $\sigma=\frac{2}{255}$ and budget $\epsilon=\frac{8}{255}$.}\n\label{table:cifar_acc}\n\end{center}\n\end{table}\n\n\subsection{Robustness and Accuracy No Longer at Odds}\n\label{tsipras}\n\nWe consider the binary classification setting of Tsipras \textit{et al.}\@\xspace~\cite{tsipras2018there,new_madry_2019}. In this setting, robustness and accuracy are at odds~\cite{tsipras2018there}. Here, we show, over the same setting, that imposing an invariant on the attacker improves the defender's accuracy-robustness trade-off.\n\n\n\input{ccs-2019\/new_madry_stuff}\n\n\subsection{Data Driven Invariances}\n\label{app:data}\n\n\begin{figure}[ht]\n \centering\n \includegraphics[width=\linewidth]{figures\/Appendix\/confusionmatrix.pdf}\n \caption{\small Confusion matrix obtained by inspecting the predictions of an adversarially trained ($\epsilon=\frac{8}{255}$) CNN.}\n \label{fig:confusionmatrix1}\n\end{figure}\n\n\nBy observing the confusion matrix, one can see that for the CIFAR-10 dataset, Non-living objects (such as planes, automobiles, ships, and trucks) are more likely to be misclassified among each other, in comparison to being misclassified as Living objects (such as animals). This motivates our split as described in \S~\ref{sec:obtaining}.\n\n\subsection{CIFAR-10 Certification}\n\label{app:cifar}\n\nInstead of retraining the leaf classifiers with adversarial training, we retrain them on the CIFAR-10 dataset with randomized smoothing. We still utilize the root classifier as described in \S~\ref{sec:method}.\n\n
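For reference, the following is a minimal sketch of the smoothed prediction performed by such a leaf classifier, following the prediction rule of Cohen \textit{et al.}\@\xspace~\cite{cohen2019certified} but simplified to a plain majority vote (without the abstention test of the full procedure); \texttt{base\_classifier} and its interface are assumptions for illustration.\n\n\begin{verbatim}\n# Sketch of randomized smoothing at inference: add Gaussian noise,\n# query the base (leaf) classifier, and return the majority vote.\nimport numpy as np\nfrom collections import Counter\n\ndef smoothed_predict(base_classifier, x, sigma, n_samples=100):\n    votes = Counter(\n        base_classifier(x + sigma * np.random.randn(*x.shape))\n        for _ in range(n_samples)\n    )\n    return votes.most_common(1)[0][0]\n\end{verbatim}\n\n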
From Table~\\ref{table:vehicles_ca} and Table~\\ref{table:animals_ca}, we can see that the hierarchical approach increases the robustness certificate and certified accuracy, in comparison to the flat baseline. \n\\begin{table}[ht]\n\\small\n\\begin{center}\n \\begin{tabular}{p{1.25cm} | p{1.4cm} | p{1.5cm} | p{1.25cm} | p{1.5cm}}\n \\toprule\n \\textbf{} & {\\footnotesize Baseline CR} & {\\footnotesize Hierarchy CR} & {\\footnotesize Baseline CA } & {\\footnotesize Hierarchy CA}\\\\\n \\midrule\n {\\footnotesize $\\sigma=0.25$} & 0.6749 $\\pm$ 0.2798 & 0.7204 $\\pm$ 0.2692 & 88.15\\% & 90.70\\% \\\\\n {\\footnotesize $\\sigma=0.5$} & 0.9270 $\\pm$ 0.5349 & 1.0943 $\\pm$ 0.5686 & 79.58\\% & 85.85\\% \\\\\n {\\footnotesize $\\sigma=1.0$} & 1.2270 $\\pm$ 0.7789 & 1.5013 $\\pm$ 0.9828 & 65.23\\% & 74.74\\% \\\\\n \\bottomrule\n \\end{tabular}\n\\caption{\\small For inputs that belong to the Non-Living objects equivalence class, the hierarchical approach preserving invariances outperforms the baseline model.}\n\\label{table:vehicles_ca}\n\\end{center}\n\\vspace*{-4mm}\n\\end{table}\n\n\\begin{table}[ht]\n\\small\n\\begin{center}\n \\begin{tabular}{p{1.25cm} | p{1.4cm} | p{1.5cm} | p{1.25cm} | p{1.5cm}}\n \\toprule\n \\textbf{} & \\text{\\footnotesize Baseline CR} & \\text{\\footnotesize Hierarchy CR} & \\text{\\footnotesize Baseline CA} & \\text{\\footnotesize Hierarchy CA}\\\\\n \\midrule\n $\\sigma=0.25$ & 0.5263 $\\pm$ 0.2987 & 0.5671 $\\pm$ 0.2966 & 75.93\\% & 80.80\\% \\\\\n $\\sigma=0.5$ & 0.7219 $\\pm$ 0.5037 & 0.7777 $\\pm$ 0.5140 & 61.82\\% & 67.05\\% \\\\\n $\\sigma=1.0$ & 0.9319 $\\pm$ 0.7486 & 1.0750 $\\pm$ 0.8278 & 42.68\\% & 53.80\\% \\\\\n \\bottomrule\n \\end{tabular}\n\\caption{\\small For inputs that belong to the Living objects equivalence class, the hierarchical approach preserving invariances outperforms the baseline model.}\n\\label{table:animals_ca}\n\\end{center}\n\\vspace*{-4mm}\n\\end{table}\n\n\\subsection{Retraining vs. Renormalization for Triangular Road Signs}\n\\label{app:triangles}\n\nFor the remainder of this subsection, note that we measure the improvements in robustness for a small sample of triangular road sign inputs ($\\sim$ 200). This is the reason for the increased improvement, as in comparison to those in \\S~\\ref{expts} where the gains are reported for over $\\sim$ 3000 inputs. The results are presented in Figure~\\ref{fig:triangle_retrain} and Figure~\\ref{fig:triangle_renormalize}. Whilst providing comparable gains (on average), we reiterate the detriment caused by retraining approaches in Appendix~\\ref{app:individual_circles} (refer Figure~\\ref{fig:improve_retrain}).\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.5\\linewidth]{figures\/Appendix\/triangle_retrain.pdf}\n \\caption{\\small Improvements in robustness when the triangle leaf classifier is {\\em retrained}.}\n \\label{fig:triangle_retrain}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.5\\linewidth]{figures\/Appendix\/triangle_renormalization.pdf}\n \\caption{\\small Improvements in robustness when the triangle leaf classifier is {\\em renormalized}.}\n \\label{fig:triangle_renormalize}\n\\end{figure}\n\n\\subsection{Retraining vs. Renormalization for Circular Road Signs}\n\\label{app:circles}\n\nFor the remainder of this subsection, note that we measure the improvements in robustness for a small sample of circular road sign inputs ($\\sim$ 200). 
This explains the larger improvements compared to \S~\ref{expts}, where the gains are reported over $\sim$3,000 inputs. The results are presented in Figure~\ref{fig:circle_retrain} and Figure~\ref{fig:circle_renormalize}. In comparison to the triangular road sign case, the improvements are relatively lower. This is because the runner-up candidates in the baseline (non-hierarchical) classifier are the same as the runner-up candidates for the hierarchical classifier. Thus, there is no significant widening of the margin. This further motivates the need to understand invariances associated with classification tasks, and to obtain invariances that, by design, maximize this margin.\n\n\begin{figure}[ht]\n \centering\n \includegraphics[width=0.5\linewidth]{figures\/Appendix\/circle_retrain.pdf}\n \caption{\small Improvements in robustness when the circle leaf classifier is {\em retrained}.}\n \label{fig:circle_retrain}\n\end{figure}\n\n\begin{figure}[ht]\n \centering\n \includegraphics[width=0.5\linewidth]{figures\/Appendix\/circle_renormalization.pdf}\n \caption{\small Improvements in robustness when the circle leaf classifier is {\em renormalized}.}\n \label{fig:circle_renormalize}\n\end{figure}\n\n\subsection{Analysis of Individual Circular Road Signs}\n\label{app:individual_circles}\n\n\begin{figure}\n \centering\n \includegraphics[width=\linewidth]{figures\/Appendix\/improvement_circle.pdf}\n \caption{\small Individual robustness improvements when the leaf classifiers are {\em renormalized}.}\n \label{fig:improve_renormalize}\n\end{figure}\n\nWe report fine-grained measurements of the improvements discussed in Appendix~\ref{app:circles}. Here, we plot the percentage improvement (or degradation, in the case of retraining) of robustness for each individual point. Observe that for some points, the increase in robustness is almost $7\times$ in the renormalization scenario (refer Figure~\ref{fig:improve_renormalize}), and about $20\times$ in the retraining scenario (refer Figure~\ref{fig:improve_retrain}). For most others, however, the increase in robustness is nominal. Observe also that in the retraining scenario, there are many points where the robustness decreases, which is why we utilize the renormalization approach for our experiments in \S~\ref{expts}.\n\n\begin{figure}\n \centering\n \includegraphics[width=\linewidth]{figures\/Appendix\/retrained_improvement_circle.pdf}\n \caption{\small Individual robustness improvements when the leaf classifiers are {\em retrained}.}\n \label{fig:improve_retrain}\n\end{figure}\n\n\n\n\n\section{PointNet}\n\label{sec:pointnet}\n\nWe train a PointNet model~\cite{pointnet} as our root classifier for our sign classification case study. We crop and label points from the KITTI dataset~\cite{Geiger2013IJRR} for each sign with at least 32 points. The original distribution of sign labels (Table~\ref{table:pc_distribution}) lacks diamond-shaped signs; we account for this using jittering and translation as modes of data augmentation. The resulting accuracy of our root classifier is 98.01\%, with the per-class accuracies displayed in Table~\ref{table:pc_accuracy}. We evaluate the robustness of our PointNet model by generating adversarial attacks, following the approach of Xiang \textit{et al.}\@\xspace~\cite{point_perturb_generate}. Using an adapted Carlini-Wagner $\ell_2$ attack, perturbations are generated to minimize the $\ell_2$ distance and the targeted class loss. 
In our experiments, we use 500 iterations on each attack batch and find the adversarial accuracy to be greater than 90\% in some classes, while keeping an average $L_2$ distance of around 1. Table~\ref{table:pc_adv} contains the results. We find that our 32-point model is robust to the clustering attack explored by Xiang et al.\ in all but one class. While the generated norm-bounded attacks are physically realizable, we argue that our approach increases the adversary's cost in terms of perceptibility and feasibility.



\begin{table}[H]
\small
\begin{center}
 \begin{tabular}{p{1cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm}}
 \toprule
 \textbf{Eval Acc} & \textbf{Eval Avg Class Acc} & \textbf{Circle} & \textbf{Diamond} & \textbf{Triangle} & \textbf{Inverted Triangle} \\
 \midrule
 98.01 & 98.33 & 98.80 & 100.0 & 100.0 & 94.50 \\
 \bottomrule
 \end{tabular}
\caption{The accuracy of our PointNet classifier trained on sign shapes.}
\label{table:pc_accuracy}
\end{center}
\end{table}

\begin{table}[H]
\small
\begin{center}
 \begin{tabular}{p{1.5cm} p{1cm} p{1cm} p{1cm} p{1cm}}
 \toprule
 \textbf{Data source} & \textbf{Circle} & \textbf{Diamond} & \textbf{Triangle} & \textbf{Inverted Triangle} \\
 \midrule
 Original & 645 & 12 & 82 & 374 \\
 Augmented & 1290 & 492 & 738 & 1122 \\
 \bottomrule
 \end{tabular}
\caption{The distribution of labels among our point clouds before and after data augmentation.}
\label{table:pc_distribution}
\end{center}
\end{table}



\section{Evaluation}
\label{sec:evaluation}

We demonstrate that hierarchical classification improves the robustness vs. accuracy trade-off. In this section, we evaluate our hierarchical classification approach for two commonly used domain tasks: speaker identification and image classification. For both tasks, we utilize a feed-forward architecture with a final softmax layer. Speaker identification typically utilizes a siamese architecture to compute the distance between an unknown sample and a predefined support set, though such an architecture is hard to retrofit to obtain a robustness certificate. Through our evaluation, we establish that the improvements in robustness are independent of the following:
\begin{enumerate}
 \item The exact methodology used to obtain the robust invariant features (we utilize an auxiliary modality, LiDAR\xspace, to classify shapes in the road sign classification task)
 \item The model chosen for robust classification (we use a robust SVM for gender identification as the root classifier for speaker identification)
 \item The domain space of the classification task (we evaluate our approach for image classification, road sign classification, and speaker identification)
\end{enumerate}

\textcolor{red}{add major results to the bullet points for the 3 themes suggested}
Our experiments suggest that:

\begin{enumerate}
 \item The hierarchical classification approach improves the robustness vs. accuracy trade-off for the same model architecture. While maintaining a comparable natural accuracy, the proposed approach produces an increase in certified robustness\footnote{Used interchangeably with the robustness certificate of Cohen et al.~\cite{cohen2019certified}}.
 \item Extending the hierarchy using multiple invariants further improves robustness.
In the case of road sign classification, we combine shape and location constraints to obtain an average increase of $>$80\% in the robustness certificate for speed limit signs (which can be thought of as circular signs at a given geographic location).
\end{enumerate}



\subsection{Road Sign Classification}

\begin{figure}[t]
 \centering
 \includegraphics[width=\linewidth]{figures/road-signs.pdf}
 \caption{The hierarchy over the road signs from the GTSRB dataset.}
 \label{fig:road-sign-setup}
\end{figure}

To summarize our hierarchical approach, we first classify road signs based on their shape at the root level (using a classifier trained with the ResNet-20 architecture~\cite{DBLP:journals/corr/HeZRS15}). By doing so, we are able to partition the label space into equivalence classes based on shapes (circular, triangular, octagonal, inverse triangular, or rectangular). Within each equivalence class, \textit{i.e.}\@\xspace per shape, we perform classification using a smoothed classifier (which also has the ResNet-20 architecture) to obtain the exact road sign. Note that we set the noise parameter $\sigma$ to different values, \textit{i.e.}\@\xspace $\sigma=0.25,0.5,1$, to better understand the trade-offs between robustness and accuracy. Figure~\ref{fig:road-sign-setup} contains a visual summary. Such a hierarchical classification encodes various priors about the classification task, obtained from domain expertise about the classification problem at hand. This also makes the classification pipeline interpretable. We train the root classifier using LiDAR\xspace point clouds (\textit{i.e.}\@\xspace a different set of input features) of road signs to predict the shape. While conventional computer vision techniques can be used to extract the shape of an object from raw pixel values (in the absence of an additional input modality), these are not fool-proof; the inputs are easy to perturb~\cite{}. In this case study, we demonstrate that features robust to $p$-norm perturbations exist in the wild (as motivated by our threat model in \S~\ref{}); such features provide us robustness for {\em free}\footnote{This is under the assumption that the point cloud inputs are robust, and hard to perturb. We validate this empirically in Sec.~\ref{realizable}}. Additionally, the presence of \lidars in most autonomous vehicles further validates our choice.

{\bf Datasets:} For our experiments, we use two datasets: the first is the German Traffic Sign Recognition Benchmark (GTSRB)~\cite{Stallkamp2012}, which contains 51,840 cropped images of German road signs that belong to 43 classes; Fig.~\ref{fig:road-sign-setup} shows these classes. The second is the KITTI dataset, which contains time-stamped location measurements, high-resolution images, and LiDAR\xspace scans over a five-day recording period from an instrumented vehicle on the roads of Karlsruhe, Germany. We post-processed this dataset to (a) extract only cropped images of the road signs included in the GTSRB dataset, and (b) extract their corresponding LiDAR\xspace depth maps. To do so, we retrained a YOLOv3 object detector~\cite{yolov3} to detect the German road signs using the German Traffic Sign Detection Benchmark (GTSDB) dataset~\cite{}. Thus, we obtained 3138 cropped images, their corresponding LiDAR\xspace depth maps, and their labels. To verify our pipeline, we {\em manually verified the validity of all} these labels.

\subsubsection{Improving the robustness vs.\ accuracy trade-off}
\label{improvement}
\para{\bf A Robust Root Classifier:} As discussed earlier, our root classifier operates on point cloud data. Our threat model specifies the adversary's capabilities to make $p$-norm bounded perturbations to the road sign, or deface the road sign with specific stickers. Such attacks are the state-of-the-art in the literature. Our experiments with road signs and a \textcolor{red}{LIDAR NAME} LiDAR\xspace suggest that {\em large} perturbations (such as stickers) made on the road sign do not impact the road sign's point cloud. Thus, the point cloud features are robust to perturbations on the actual road sign. While the LiDAR\xspace is susceptible to active attacks~\cite{Cao:2019:ASA:3319535.3339815}, such an attack is expensive to mount and beyond the capabilities of our passive adversary. Recent works show that $p$-norm bounded adversarial attacks exist in the point cloud space~\cite{point_perturb_generate}. To evaluate our approach, we attack our PointNet~\cite{pointnet} shape classifier, which achieves 98.01\% accuracy on the test set, using state-of-the-art attacks. The results in Table~\ref{table:pc_adv} show that generating perturbations minimizing the $L_2$ distance in the point cloud space is an effective attack. Our model is robust against the clustering attack, a result of traffic signs having very few points. While these norm-bounded attacks are physically realizable, they increase an adversary's cost of attack by introducing an additional attack space on top of the image space. In summary, we are able to utilize robust features obtained from another input modality to obtain a robust root classifier.


\begin{figure}[H]
 \centering
 \includegraphics[width=\linewidth]{figures/scene_sign.png}
 \caption{Left: A point cloud of a scene in a driving environment. Right: The cropped point cloud of the sign outlined in red in the scene.}
 \label{fig:pc_scene}
\end{figure}

\noindent{\em Q1. Why Shape?} \textcolor{red}{PENDING}

\para{\bf The Leaf Classifiers:} Based on the signs in the GTSRB dataset, we observe that the diamond, octagon, and inverse-triangle shapes have a single road sign belonging to each of their equivalence classes. Thus, for these shapes, we do not require a leaf classifier; classifying based on the robust feature is sufficient. Consequently, their robustness certificate is $\infty$.

\noindent{\em Q2. Retraining vs. Renormalization?} Under the assumption of an accurate and robust root classifier, each leaf classifier only accepts as input road signs belonging to a particular shape. As a running example, let us assume that the leaf classifier under discussion classifies {\em circular} road signs. To obtain such a leaf classifier, one could (a) utilize a classifier trained on all labels (henceforth referred to as the baseline classifier) and discard the probability estimates of the labels not of interest (\textit{i.e.}\@\xspace labels of road signs which are not circular), and renormalize the remaining probability estimates; we refer to such an approach as the {\em renormalization} approach, or (b) {\em retrain} a leaf classifier from scratch based on the labels belonging to that particular equivalence class; \textit{e.g.,}\@\xspace retrain a classifier for circular road signs in particular. Appendix~\ref{} contains results from both these approaches; a minimal sketch of the renormalization computation is shown below.
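The sketch below (our own illustration, with hypothetical probability estimates) computes the certified radius of Cohen et al.~\cite{cohen2019certified}, $\frac{\sigma}{2}(\Phi^{-1}(p_A)-\Phi^{-1}(p_B))$, before and after restricting the label space to one equivalence class; following the observation that the leaf estimates before renormalization coincide with the baseline's, the restricted margin is computed directly on the surviving labels.

\begin{verbatim}
from scipy.stats import norm

def certified_radius(p_a, p_b, sigma):
    """Randomized-smoothing radius: sigma/2 * (Phi^-1(pA) - Phi^-1(pB))."""
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

def leaf_radius(probs, equiv_class, sigma):
    """Certificate after restricting the labels to one equivalence class.

    probs       : dict mapping label -> baseline probability estimate.
    equiv_class : set of labels sharing the invariant (e.g. circular signs).
    """
    restricted = {k: v for k, v in probs.items() if k in equiv_class}
    ranked = sorted(restricted.values(), reverse=True)
    p_a = ranked[0]
    p_b = ranked[1] if len(ranked) > 1 else 0.0  # singleton class -> infinity
    return certified_radius(p_a, p_b, sigma)

# Hypothetical estimates: the runner-up (label 1) lies outside the
# equivalence class of the top label, so dropping it widens the margin.
probs = {0: 0.55, 1: 0.30, 2: 0.10, 3: 0.05}
print(certified_radius(0.55, 0.30, sigma=0.25))               # baseline
print(leaf_radius(probs, equiv_class={0, 2, 3}, sigma=0.25))  # larger
\end{verbatim}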
We observe that both these approaches increase the robustness certificate; while the renormalization approach can only increase the robustness certificate (by design), the retraining approach can potentially decrease the robustness certificate for some inputs.

Recall that the value of the robustness certificate is directly proportional to the margin $\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})$, which depends on the probability $p_A$ of the top class $c_A$ and the probability $p_B$ of the runner-up class $c_B$. Two important observations guide our analysis: (1) $c_B$ can either belong to the same equivalence class as $c_A$ (denoted $c_A \approx c_B$), or belong to a different equivalence class (denoted $c_A \napprox c_B$), and (2) the probability estimates for each of the leaf classifiers (which predict only a subset of the labels), before renormalization, are the same as those in the baseline classifier (by construction). On reducing the label space, if $c_A \approx c_B$, renormalization will further widen the margin, and increase the robustness certificate. If $c_A \napprox c_B$, then there must exist a runner-up candidate $c_{B'} \approx c_A$. Its corresponding runner-up estimate $\overline{p_{B'}} \leq \overline{p_B}$ (or it would have been the runner-up in the first place). Consequently, the margin $\Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_{B'}}) \geq \Phi^{-1}(\underline{p_A})-\Phi^{-1}(\overline{p_B})$.


In the retraining scenario, however, we do not have knowledge about the {\em ordering} of the probability estimates in the retrained classifier in comparison to the baseline classifier. It is possible that, for a retrained classifier and a given input, while the correct class' estimate remains $\underline{p_A}$, the new runner-up's estimate $\overline{p_{B'}}$ can be greater than, less than, or equal to the original (baseline) estimate $\overline{p_B}$. Thus, the new robustness certificate can be lower than, greater than, or the same as in the baseline scenario. This problem is more fundamental; since robustness is a local property relying on the local Lipschitz constant and the structure of the decision spaces, partial or incomplete knowledge of any of these can result in spurious selection of the runner-up label. An added benefit of the renormalization approach is that it is computationally efficient; one needs to train one baseline classifier (independent of how the label space is partitioned) as opposed to several leaf classifiers (depending on the nature of the partition). \textcolor{red}{maybe more stuff to be added from german paper + tommi paper}




\begin{figure}[ht]
 \centering
 \includegraphics[width=0.55\linewidth]{figures/speedlimits_plot.pdf}
 \caption{The percentage improvement of the robustness certificate for \texttt{SpeedLimit} signs utilizing a 2 layer hierarchical classifier. These results are a significant improvement on those of Fig.~\ref{fig:renormorretrain}}
 \label{fig:speedlimits}
\end{figure}


\subsubsection{Adding More Invariants}
\label{sec:morerobustfeatures}

Using location information, one can further partition the label space. For example, a highway cannot have stop signs, and an intersection will not have a speed limit sign.
To validate this hypothesis, we performed a small-scale proof-of-concept experiment to further constrain the space of labels obtained by splitting on the shape feature (\textit{i.e.}\@\xspace to see if we can obtain a subset of, say, the circular labels). Using the location information from the KITTI dataset, and local map data from OpenStreetMap \cite{OSM},
for particular locations, we can further constrain the space of circular road signs to just \texttt{SpeedLimit} signs. From Fig.~\ref{fig:speedlimits}, we observe that increasing the number of robust features (to two: shape {\em and} location) increases the robustness certificate further. The ordering of robust features in the hierarchy (\textit{e.g.,}\@\xspace shape followed by location vs. location followed by shape) to obtain the best increase in robustness is an open question, one which we wish to tackle in future work.

\subsection{Speaker Identification}

Conventional speaker identification systems utilize a siamese architecture; speech samples are first converted to discrete features~\cite{}, and these are then converted to an embedding. The distances between an unknown embedding (which is to be labeled) and the embeddings in a known support set are computed. The label of the support-set embedding closest to the unknown embedding is returned as the label for the unknown sample. Observe that such an architecture is not conducive to computing certificates using randomized smoothing; the output returned by the network is a label, and there is no associated probability estimate for this prediction. Thus, we implement speaker identification using a conventional feed-forward architecture \textcolor{red}{describe specifics}. For the remainder of this section, we assume that the inputs are discrete features obtained after processing the voice samples.

As before, we classify inputs based on the gender of the speaker\textcolor{red}{nicolas wants to talk about the privacy implications of doing so.}. Since gender is binary, we utilize a robust SVM~\cite{} for this task. Unlike the previous case study, where the root is robust to perturbation because its inputs are robust, the robustness guarantees for this case study come from the design of the classifier. We chose an SVM because of the binary nature of the classification, as well as insights from prior work suggesting that male and female speakers are linearly separable in some higher-dimensional space~\cite{}. While we could utilize another modality to obtain speaker gender, we use a robust classifier to highlight the flexibility of our approach. As before, the leaf classifiers are smoothed classifiers with the architecture specified earlier.

{\bf Datasets:} For our experiments, we use the LibriSpeech ASR corpus~\cite{}.

\end{document}


\section{Introduction}
Soils are the second largest carbon pool of the planet, containing about 1500\,PgC
\citep{Batjes1996,Eswaran1993,POST1982}. As such, their behaviour as a greenhouse gas source and sink needs
to be quantified when facing climate change induced by increasing atmospheric greenhouse gas concentrations
\citep{Batjes1996,Lal2004}.
Quantifying temporal changes of this pool requires estimating its spatial distribution at different dates
and at various scales, with the national scale being of particular importance for international
negotiations.
The reliability of such estimates depends upon suitable data in terms of organic carbon content and soil bulk
density, and on the methods used to upscale point data to comprehensive spatial estimates.
These estimates may also be used for defining the baseline state for soil organic
carbon (SOC) change simulations
\citep{van_Wesemael2010}, or for setting some of
the parameters of models of SOC dynamics \citep{Tornquist2009}.

Interestingly, there is considerable diversity in the models used for upscaling SOC point
measurements to the national level. The validity of each method depends on the datasets and on the scale
(defined by its grain or precision and extent, \citealp{Turner1989}).
The mapping approaches range from simple statistics or pedotransfer rules, relating SOC
contents or stocks to soil type \citep{Yu2007} or soil type and land use
\citep{Tomlinson2006, 48}, to multivariate regression models (multiple linear
models, \citealp{Meersmans2008}; generalized linear models, \citealp{Yang2008};
mixed models, \citealp{Suuster2012}). Recent studies have used techniques adapted from the data
mining and machine learning literature, with piecewise linear tree models
\citep{Bui2009} or multiple regression trees for regional studies
\citep{Grimm2008, Lo_Seen2010, Suuster2012}.
Among studies considering a small extent ($<$50\,km$^2$),
many have considered the use of geostatistics, some including SOC predictors \emph{via} cokriging (CK) or regression
kriging (RK) \citep{Mabit2010, Don2007, Rossi2009, Wang2009, Spielvogel2009}.
As the extent increases, the use of geostatistics becomes less common and,
despite the spatial dimension of such studies, few geostatistical approaches for SOC mapping have been proposed
at the national scale (but see \citealp{CHAPLOT2009, Kerry2012, Rawlins2009}).
\\

SOC mapping for France has been performed, during the last decade, using class-specific SOC means \citep{48}
or regression models \citep{bg-8-1053-2011, Meersmans2012}. The most recently proposed models are still
unable to predict SOC stocks or contents fully satisfactorily at independent locations:
R$^2$ reached 0.50 and 0.49, and root mean squared prediction errors (RMSPE)
2.27~kg/m$^2$ and 1.45\%, for \citet{bg-8-1053-2011} on SOC stocks and \citet{Meersmans2012}
on SOC contents, respectively.
\citet{bg-8-1053-2011} obtained unbiased predictions (the bias was estimated to be -0.002~kg/m$^2$
by cross-validation), which might ensure unbiased mapping of the stock at
the national level. Nevertheless, these R$^2$ and RMSPE results showed that there
is potentially room for improvement, especially if one is willing to use such models for regional
assessments. Adding spatial autocorrelation terms to these models
might be a way to improve their performance.

Recently, new approaches have been proposed for coupling regression models, relating
environmental factors to the studied property,
with geostatistical models, representing the spatial autocorrelation among the
observations \citep[see for example][]{Marchant2010}.
Such methods were also designed to handle local anomalies (\emph{i.e.} outliers).
Nevertheless, these methods currently lack some features that other statistical
models, such as the boosted regression trees (BRT) used by \citet{bg-8-1053-2011},
provide: handling, in an automated manner, nonlinear relationships between
qualitative and quantitative predictors and the independent variable, as well as nonlinear
interactions between the predictors.
Both approaches are robust to the presence of outliers in the dataset. As they tackle different problems
(spatial autocorrelation for the geostatistical approaches, modelling of the complex interactions
between SOC stocks and their drivers for the regression methods), they may be considered complementary.

The aim of this paper is to combine these recent robust
geostatistical approaches with the BRT models currently applied to map SOC
stocks at the national scale for France.
We apply the methods to a dataset of 2166 paired observations of SOC and bulk
densities from the French soil quality monitoring network (RMQS). We use this
study to assess the modelling methods to determine i) how useful it is to
combine BRT and geostatistical modelling, and ii) whether any advantages depend
on the number of ancillary variables included as predictors in the BRT models.
The aim is not specifically to study the relative importance of SOC stock drivers for France,
which has been done recently \citep{bg-8-1053-2011, Meersmans2012}, nor to produce
a new map of SOC stocks in France.


\section{Materials and methods}
\subsection{Data}\label{ssv}

Soil organic carbon stocks were computed for 2166 sites from the
French soil quality monitoring network (RMQS) (Fig.~\ref{mape}).
The network is based on a 16\,km\,$\times$\,16\,km square grid.
The sampling sites are located at the center of each grid cell, except when
establishing a homogeneous 20\,m\,$\times$\,20\,m sampling area is not possible at this specific location
(because the soils are sealed or strongly disturbed by anthropogenic
activities, for instance).
In that case, another site is selected within 1\,km from the
center of the cell depending on soil availability for sampling \citep[for more
information, see][]{Arrouays2002}.
Some of the 2166 sites in our dataset
were replicates of the regular grid sites: some cells contained two sites,
one close to the center of the cell as described above, and another
located elsewhere within the cell.


\tc{At each site, 25 individual core samples were taken from the topsoil
(0--30\,cm) and the subsoil (30--50\,cm) using a hand auger, according to an unaligned sampling
design within a 20\,m\,$\times$\,20\,m area.
Individual samples were mixed to obtain a composite sample for each soil layer.
In addition to the composite sampling, a soil pit was dug 5\,m from the south border
of the 20\,m\,$\times$\,20\,m area, from which 6 bulk density measurements were taken, as described
previously \citep{Martin2009}.
}
From these data, SOC stocks (kg/m$^2$) were
computed for the 0--30\,cm soil layer:

\begin{equation}
{\rm SOC\;stocks}_{30\,{\rm cm}}=\sum_{i=1}^{n}{p_{i}\,{\rm BD}_{i}\,{\rm SOC}_{i}\,(1-{\rm rf}_{i})}
\end{equation}
where $n$ is the number of soil horizons present in the 0--30\,cm layer;
BD$_i$, \textit{rf}$_i$ and SOC$_i$ are the bulk
density, proportion of rock fragments (relative to the mass of soil) and
SOC concentration (percent) of these horizons; and $p_i$ is the
width of each horizon taken into account to reach 30\,cm. The horizons
considered for such an analysis did not include the organic horizons (such as OH or OL).
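As a concrete illustration of this computation, the following minimal sketch (our own, with assumed working units: horizon widths in cm, bulk densities in g\,cm$^{-3}$, SOC in percent, rock fragments as a mass fraction, and a factor of 0.1 converting the product to kg\,m$^{-2}$) evaluates the sum above for a hypothetical profile.

\begin{verbatim}
def soc_stock_30cm(horizons):
    """SOC stock (kg C / m^2) over 0-30 cm, following the equation above.

    horizons : list of dicts with keys
        'width' : horizon width within 0-30 cm (cm),
        'bd'    : bulk density (g / cm^3),
        'soc'   : SOC concentration (percent),
        'rf'    : rock fragment content (mass fraction, 0-1).
    """
    # 1 cm * 1 g/cm^3 = 10 kg/m^2, and SOC is given in percent,
    # hence the overall factor of 0.1.
    return sum(h["width"] * h["bd"] * h["soc"] * (1.0 - h["rf"]) * 0.1
               for h in horizons)

# Two hypothetical mineral horizons spanning 0-30 cm.
example = [
    {"width": 20, "bd": 1.3, "soc": 2.0, "rf": 0.05},
    {"width": 10, "bd": 1.5, "soc": 1.0, "rf": 0.10},
]
print(soc_stock_30cm(example))  # ~6.29 kg C / m^2
\end{verbatim}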
The SOC stock, the dependent variable, is the only variable observed at site level.
All other variables, the covariates (or ancillary variables), were derived from available maps
covering the French territory. This allowed us to consider models for mapping SOC
distributions at the national scale, which relies on the exhaustiveness of
the ancillary information.
These ancillary maps were thus sampled at RMQS site locations in order to estimate
climatic, pedological, land-use- and management-related, and biological variables.

The map of pH was derived from two sources. For the forest soils, the forest soil surface pH map
\citep{pHMapForest2008} was used. For the other soils, the median pH per district from the national
soil testing database was used \citep[BDAT, ][]{Lemercier2006}.
Land use was estimated from the Corine Land Cover 2006 database
and reclassified into an adapted IPCC land use classification (various
crops, permanent grasslands, woodlands, orchards and shrubby perennial crops,
wetlands, others and vineyards) \citep{UE-SOeS2006}.
Clay content was estimated from the 1:1~000~000 scale European
Soil Geographical Database \citep{King1995}. As each polygon (or soil unit) of the
1:1~000~000 scale European Soil Geographical map is linked to possibly several soil types (hence clay levels),
we used in the models the clay levels of the 3 most important (in terms of surface)
soil types within each soil unit associated with each RMQS site, namely $clay1$,
$clay2$ and $clay3$, ranked according to the percentage of their occurrence. Surface percentages
of these soil types were also included as predictors within the models ($pc1$, $pc2$
and $pc3$). For
instance, let us consider a given RMQS site $i$ belonging to soil unit $j$ of the soil map.
The soil unit $j$ may have two soil types associated with it ($st1$ and $st2$) with
occurrence probabilities of 70\% and 30\% and clay contents of 45\% and 35\%. For this site $i$, the values of the
$clay1$, $clay2$ and $clay3$ variables would be 45\%, 35\% and $NA$ (not available), respectively, and
for $pc1$, $pc2$ and $pc3$, 70\%, 30\% and $NA$, respectively.
Organic matter additions (\emph{oma}), such as slurry and farmyard manure, were estimated
using manure application and animal excrement production departmental statistics \citep{ADEME2007},
combined with dry matter C concentration values
\citep[37.7\% for farmyard manure and 36.6\% for slurry,][]{Meersmans2012}.

Climatic data were monthly precipitation (mm\,month$^{-2}$), potential evapotranspiration (PET,
mm\,month$^{-2}$), and temperature ($^\circ$C) at each node of an
8\,$\times$\,8\,km$^2$ grid, averaged over the 1992--2004
period.
This climatic map was obtained by interpolating observational data
using the SAFRAN model \citep{Quintana-Segui2008}. Again, for the
modelling study presented here, climatic variables were estimated at
each RMQS site by performing a spatial join between the RMQS grid and the climatic map.

Agro-pedo-climatic variables were also derived from the
primary soil, climate and land use data estimated at each RMQS site:
we used the temperature ($a$) and soil moisture ($b$) mineralization modifiers,
as modelled in the RothC model \citep{coleman1997, bg-8-1053-2011}. The $b$ variable
was calculated by combining, for each RMQS site, rainfall and PET data
obtained from the climatic grid, with site observations of land use and clay
content. Since three possible clay contents were estimated for each site,
the three corresponding estimates of the $b$ variable were also included, when relevant,
in our BRT models.
Lastly, the Moderate Resolution Imaging Spectroradiometer Net Primary Productivity
(MODIS NPP, gC\,m$^{-2}$\,yr$^{-1}$) was used to get NPP estimates
at each of the RMQS sites, as in \citet{bg-8-1053-2011}.

The GIS processing was carried out using GRASS GIS (GRASS Development Team, 2012),
and further mapping was carried out using the Generic Mapping Tools software
\citep{Wessel1991}.\\


\begin{figure}
 \includegraphics[width=80mm]{stocksC.pdf}
 \captionof{figure}{SOC stock (0--30\,cm) values on the French monitoring network used in the present study.
Areas 1 to 7 are referred to later in the text. 1:~south-west Brittany.
2:~part of Basse-Normandie. 3:~Alsace and part of Lorraine. 4:~part of the French Alps. 5:~Massif Central. 6:~French
Pyrenean mountain range. 7:~part of Aquitaine.}
\label{mape}
\end{figure}


\subsection{Statistical modelling}
\subsubsection{Boosted Regression Trees (BRT) modelling}
Boosted regression trees belong to the Gradient Boosting Modelling (GBM) family.
The objective is to estimate the function $F$ that maps the values of a set of predictor
variables $x = \{x_1, \ldots, x_p\}$ onto the values of the output variable $y$,
by minimizing a specified loss function $L$.
This $L$ function is applied at each iteration in order to fit so-called
base learners.
The final prediction of the BRT model is a linear combination of the base
learner predictions. The constant weight associated with these base learner
predictions is called the learning rate and is one of the important parameters
of this boosting algorithm \citep{Freund96experimentswith}. This
kind of algorithm is also referred to as a forward ``stagewise'' procedure.
The base learners of BRT are classification and regression trees \citep{Brei1984}.
Furthermore, BRT uses a specialized form (for regression trees) of Stochastic Gradient
Boosting \citep{Friedman2001}.
The stochastic character of the algorithm comes from the fact that only a subset
of the dataset is used for fitting the base learner at a given iteration; the subset
is produced at each iteration using a uniform random draw without replacement.
Besides the learning rate, other parameters are important when applying this
kind of model. Two of them determine the characteristics of each base learner:
the tree size (which gives the size of individual regression trees) and the minimum number of observations in the
terminal leaves of the trees. Several options are available for deciding
when to stop adding base learners to the model.
One of them,
based on internal cross-validation, was shown to be the most efficient at
avoiding overfitting \citep{Ridgeway2006} and was used in the present study.
BRT was shown to have improved accuracy compared with simple regression trees,
thanks to its stochastic gradient boosting procedure aimed at minimizing the risk of overfitting
and improving its predictive power \citep{Lawrence2004}. It can handle non-linear
interactions among predictors and the dependent variable, quantitative and qualitative
predictors, and missing data.
Lastly, several tools are available for interpreting the behavior and characteristics
of the resulting BRT models, such as the variable importance index
for assessing the contribution of the predictors, and the partial dependence plots
for assessing the relationships between predictors and the predicted variable \citep{bg-8-1053-2011}.

A thorough description of
the method is given in \citet{Friedman2001} and a practical guide for using
it in \citet{Elith2008}.
The BRT models were fitted and used for prediction using the ``gbm'' R \citep{RCore}
package \citep{Ridgeway2006}.


\subsubsection{Three BRT models for SOC stocks}\label{mods}
Three models for predicting SOC stocks in the 0--30\,cm layer were tested. The models,
which we refer to as the LU, L and F models, have increasing levels of complexity
(see below for their full description). These three models
were chosen as they represent cases where either very little or a lot of information
on ancillary variables is available at the sites where SOC stocks are to be predicted.
Additionally, the first model (LU) uses two covariates (land use and clay
content) commonly
used for predicting SOC within the geostatistical framework. The second one (L) is
the $Extra$ model presented in \citet{bg-8-1053-2011}. The most complex model (F) enabled us to
include all the ancillary data available for France at the national level at the time of
the present study.

The predictors used for each model were:
\begin{itemize}
\item \emph{LU}: \textit{lu\_ipcc} (land use classification adapted from the IPCC
guidelines, 2006), $clay1$.
\item \emph{L}:
\textit{lu\_ipcc}; $clay1$, $clay2$ and $clay3$, $pc1$, $pc2$
and $pc3$, the clay contents and corresponding probabilities of occurrence at each RMQS site;
monthly potential evapotranspiration (\textit{pet}, mm\,month$^{-2}$); monthly precipitation (\textit{rain}, mm\,month$^{-2}$); temperature
(\textit{temp}, $^\circ$C); the two RothC mineralization
modifiers, \textit{a} and $b1$, $b2$ and $b3$; and the net primary productivity \textit{npp}
(gC\,m\textsuperscript{$-$2}\,yr\textsuperscript{$-$1}).
\item \emph{F}: same predictors as the L model, with the addition of \textit{pH}, \textit{oma} (\emph{i.e.}
organic matter addition, slurry and farmyard manure) and \textit{pm}, the parent material.
\end{itemize}

The ``gbm'' R package requires the specification of several important parameters:
the tree size, the learning rate, the minimum number
of observations in the terminal leaves of the trees, and the bag fraction.
For our three models, the values of these parameters were set to $(12, 0.01, 3, 0.7)$. These values were
chosen according to recommendations found in the literature \citep{Elith2008, Ridgeway2006}.
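For illustration only, the sketch below reproduces analogous settings with scikit-learn's gradient boosting implementation. This is our own translation, not the code used in this study (which relies on the R ``gbm'' package), and scikit-learn's \texttt{max\_depth} is only an approximate analogue of gbm's tree size parameter.

\begin{verbatim}
from sklearn.ensemble import GradientBoostingRegressor

# Rough analogue of the gbm settings (tree size, learning rate,
# minimum observations per terminal leaf, bag fraction) = (12, 0.01, 3, 0.7).
brt = GradientBoostingRegressor(
    max_depth=12,        # approximate analogue of gbm's tree size
    learning_rate=0.01,  # shrinkage applied to each base learner
    min_samples_leaf=3,  # minimum observations in terminal leaves
    subsample=0.7,       # bag fraction for stochastic gradient boosting
    n_estimators=5000,   # upper bound; the effective number of trees
)                        # is selected by cross-validation in the paper
# brt.fit(X_train, z_train)  # fitted to the log-transformed stocks
\end{verbatim}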
\subsubsection{Geostatistical models}\label{geost}
We further investigated whether a robust geostatistical method, similar to the one presented by \cite{saby2011},
could be used to represent errors and improve predictions from each of the BRT models.
In their work, Saby et al.\ (2011) divided the spatial variation of a soil property into
fixed and random effects. The fixed effects were a different constant mean soil property
for each of 12 parent material classes, and the random effects described the spatially
correlated residual soil property variation. In the present work,
each of the three BRT models was used alone, as presented in the previous section, and
as a fixed effect within a robust geostatistical method. This combination of
a BRT and a geostatistical model can be summarised as:

\begin{equation}
\label{eq1}
\mathbf{Z}=H(\mathbf{X})+\mathbf{u}
\end{equation}

where $\mathbf{Z}=\ln(\mathbf{Y})$, with $\mathbf{Y}$ being a length-$n$ vector of observations of the SOC stocks,
$\mathbf{X}$ the $(n \times q)$ matrix containing values of the covariates (or predictors) at each
observation site, and the $H$ function representing the boosted regression tree model
(fitted to the log-transformed data).\\
We note that the log-transform was necessary for the geostatistical approach due to
the skewness of the observed SOC stock distribution.
Thus the vector $\mathbf{u}$ of length $n$ contains the residuals of the BRT model predictions of
the log-transformed data, compared to the log-transformed response variable.
In the conventional geostatistical approach, these residuals are assumed to be a realization
of a second-order stationary random process \citep{Webster2007}.
We applied a robust geostatistical approach, in which the spatial correlation of residuals was modelled using
a Mat\'ern equation based on the Dowd robust estimator of the experimental variogram \citep{Dowd1984}.
Moreover, outlying observations were identified and Winsorized using the algorithm
proposed by \cite{Hawkins1984}.
Winsorizing is a method by which extreme values in the statistical data are limited
to reduce the effect of possibly spurious outliers.
Note that Winsorizing is not equivalent to simply excluding data.
Rather, in a Winsorizing procedure, the extreme values are replaced by a certain
value predicted by a statistical model.
This algorithm provided, for each observed residual $u_i$, an interval $[U_i^-, U_i^+]$.
$u_i$ is then
identified as being an outlier when $u_i \notin [U_i^-, U_i^+]$, and its value is replaced
by the closest limit of the interval. As in \cite{Lacarce2012},
observations were confirmed as being outliers, and transformed, conditionally on a measurement
error of SOC stocks of $\epsilon Y$, with $\epsilon=0.112$, recently estimated for the RMQS
dataset (unpublished data):


\begin{equation}
\label{eqWin}
 u^*_i =
 \begin{cases}
 U_i^- & \text{if } \ln(Y_i(1+\epsilon)) - H(\mathbf{X}_i) < U_i^- \\
 U_i^+ & \text{if } \ln(Y_i(1-\epsilon)) - H(\mathbf{X}_i) > U_i^+ \\
 u_i & \text{otherwise}
 \end{cases}
\end{equation}

where $u^*_i$ represents the resulting Winsorized data.
One should note that the geostatistical modelling is performed on the log scale, but
the measurement error is valid on the original scale, hence the terms
on the left-hand side of the
inequalities in equation \ref{eqWin}.
These inequalities mean that the observed
residuals may exceed the $[U_i^-, U_i^+]$ interval limits, but not by more than the possible
measurement error on the observed values. If they do, they are Winsorized to $U_i^-$
or $U_i^+$ depending on the case.
The $U_i^-$ and $U_i^+$ values are defined so that the validity of the spatial term
$\mathbf{u}$ in equation \ref{eq1}
is verified, which, without Winsorizing, was rarely the case in previous
studies \citep{Marchant2010, Lacarce2012}. This check is performed using a
leave-one-out cross-validation (LOOCV).
When the covariance model is a valid representation of the spatial variation of the property (in our case the
residuals), the distribution of the squared standardized prediction errors (noted $\theta$) derived
from the cross-validation will be $\chi^{2}$, with mean $\overline{\theta}=1$ and median
$\breve{\theta}=0.455$, for which confidence intervals may be
determined \citep{Marchant2010}. This LOOCV procedure aims solely at checking the validity
of the geostatistical model and should not be mistaken for the global validation
framework presented in the next section, aimed at estimating the predictive
performance of the models (BRT models and their spatial counterparts).


As a result, the variation of the soil property is decomposed into the threefold
model described by \citet{Marchant2010}: 1) variation modelled by the BRT models; 2) spatially correlated
variation, represented by the random effect of the residuals of the BRT models and estimated by variograms
using Dowd's estimator, to which Mat\'ern equations were fitted;
3) variation due to circumscribed anomalies.
Once the BRT and geostatistical models were fitted, the property was predicted at each unsampled
(\textit{i.e.} not used for fitting the models) location of the dataset by lognormal
ordinary kriging.
This method consists of predicting the residual for the log-transformed variable by ordinary kriging based on
Winsorized data $\mathbf{u}^*$ (equation \ref{eqWin}),
and back-transforming the predicted value to the original SOC stock scale through:
\begin{equation}
\label{eqlok}
\hat{Y}(\boldsymbol{x}_i)=exp(H(\mathbf{X}_i)+ \hat{u}_i + var[\hat{u}_i]/2 - \psi(\boldsymbol{x}_i))
\end{equation}

where $\hat{u}_i$ is the ordinary kriging prediction of $u$ at a given prediction location $\boldsymbol{x}_i$,
$var[\hat{u}_i]$ is the associated kriging variance and $\psi$ the Lagrange multiplier;
both the kriging variance and the Lagrange multiplier are needed to yield unbiased estimates in the case of lognormal
ordinary kriging \citep{Webster2007}.
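A minimal sketch of these two steps follows, assuming the interval limits and the kriging outputs are already available (in practice they come from the Winsorizing algorithm and the fitted variogram model); the Winsorizing helper is simplified in that it omits the measurement-error tolerance of equation \ref{eqWin}.

\begin{verbatim}
import numpy as np

def winsorize_residual(u, u_lower, u_upper):
    """Replace an outlying residual by the closest interval limit
    (simplified form of the Winsorizing rule above, without the
    measurement-error tolerance epsilon)."""
    return float(np.clip(u, u_lower, u_upper))

def lognormal_ok_backtransform(trend_log, u_hat, krige_var, lagrange):
    """Back-transform used above: exp(H(X) + u_hat + var/2 - psi).

    trend_log : BRT trend prediction H(X) on the log scale.
    u_hat     : ordinary-kriging prediction of the residual u.
    krige_var : kriging variance associated with u_hat.
    lagrange  : Lagrange multiplier of the ordinary-kriging system.
    """
    return np.exp(trend_log + u_hat + 0.5 * krige_var - lagrange)
\end{verbatim}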
\subsubsection{Validation procedure}
We thus considered six models: three models without a spatial term (the LU, L and F BRT models) and
what is hereafter referred to as their spatial counterparts
(the LU$_g$, L$_g$ and F$_g$ models), i.e. the same three models with an additional spatial term (Eq. \ref{eq1}).
These six models were
validated using cross-validation.
This validation procedure involves validation against independent data and enables
estimation of the predictive power
of the proposed models.

Comparison between observed and predicted values of
SOC stocks was carried out on the original scale using several complementary
indices, as is
commonly suggested \citep{31}: the mean
prediction error (MPE, kg/m$^2$), the root mean square prediction
error (RMSPE, kg/m$^2$) and the
coefficient of determination ($R^{2}$) measuring
the strength of the linear relationship between predicted and observed
values. Additionally, the ratio of performance to inter-quartile distance,
RPIQ \tc{\citep{Bellon-Maurel2010}}, was estimated as
\begin{equation}
\label{eq2}
RPIQ = \frac{IQ_y}{RMSPE}
\end{equation}
where IQ$_y$ is the inter-quartile distance, calculated on observed SOC values from
the whole dataset. The RPIQ index accounts much better for the spread of the population
than indices such as the RPD \citep{Bellon-Maurel2010}, and was used for comparing prediction accuracy between the six models.
The median prediction error and the root of the median of squared prediction errors
were also calculated (hereafter MedPE and RMedSPE, respectively).
These additions to MPE and RMSPE provide a more complete picture of the
errors in the case of a skewed error distribution.\\

The validation procedure was done using a Monte Carlo
10-fold cross-validation \citep{XU2001}, enabling us to perform what will be referred to in
the following as external validation. It was preferred to simple data-splitting
because the estimate of a model's performance then does not rely on the choice
of a single sub-sample. We preferred a $k$-fold procedure
to leave-one-out cross-validation, as the latter results
in a high variance of the estimate of the prediction error \citep{hastie2001}. Each step of the
cross-validation procedure can be summarized as shown in Algorithm \ref{alg1} and
was repeated 200 times for each model.




\begin{algorithm}
\caption{cross-validation repetition:}
\label{alg1}
\algsetup{indent=2em}
\begin{algorithmic}[1]
\STATE Split the dataset into Learning $(\mathbf{X}, \mathbf{Y})_{L}$ and Validation $(\mathbf{X}, \mathbf{Y})_{V}$
\STATE Compute $\mathbf{Z}_L = \ln(\mathbf{Y}_L)$
\STATE Fit the $H$ BRT model and estimate $\hat{\mathbf{Z}}_{L}=H(\mathbf{X}_L)$
\STATE Fit a variogram on $\mathbf{u}_{L}=\mathbf{Z}_L - \hat{\mathbf{Z}}_L$
\IF {$\overline{\theta}$ and $\breve{\theta}$ are not valid}
 \STATE Winsorize the dataset until valid $\overline{\theta}$ and $\breve{\theta}$ are obtained
\ENDIF
\STATE Estimate $\hat{\mathbf{u}}_{V}$ by ordinary kriging at $\mathbf{Z}_V$ locations using the fitted variogram and
the Winsorized residuals $\mathbf{u}^*_{L}$
\STATE Calculate the lognormal kriging estimate, $\hat{\mathbf{Y}}^{g}_{V}$, using Eq.~\ref{eqlok}
\STATE Calculate $\hat{\mathbf{Y}}_{V}=\exp(H(\mathbf{X}_V))$
\STATE Compute PERF on $(\mathbf{Y}_{V}, \hat{\mathbf{Y}}_{V})$
\STATE Compute PERF on $(\mathbf{Y}_{V}, \hat{\mathbf{Y}}^{g}_{V})$
\end{algorithmic}
\end{algorithm}
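The PERF step computes the six indices defined above; a minimal sketch of this computation (our own helper, not the exact code used) is:

\begin{verbatim}
import numpy as np

def perf(y_obs, y_pred, iq_y):
    """The six validation indices used in the text.

    y_obs, y_pred : arrays of observed and predicted SOC stocks (kg/m^2).
    iq_y          : inter-quartile distance of observed SOC stocks
                    over the whole dataset.
    """
    err = y_pred - y_obs
    rmspe = np.sqrt(np.mean(err ** 2))
    return {
        "MPE": np.mean(err),
        "RMSPE": rmspe,
        "R2": np.corrcoef(y_obs, y_pred)[0, 1] ** 2,
        "RPIQ": iq_y / rmspe,
        "MedPE": np.median(err),
        "RMedSPE": np.sqrt(np.median(err ** 2)),
    }
\end{verbatim}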
Steps 4 to 9 are performed as detailed in Section \ref{geost}. In particular, the spatial component
of the spatial models is validated at step 6, as presented in
Section \ref{geost}, in order to make sure that it is a valid representation of
the residuals of the BRT models.
This check was performed for each geostatistical model fitted during each repetition of the
cross-validation procedure.


$\hat{\mathbf{Y}}$ represents the prediction provided by one given BRT
model and $\hat{\mathbf{Y}}^{g}$
the prediction provided by its spatial counterpart model (the BRT and the
geostatistical model).
PERF indicates the computation of the performance metrics (R$^2$, RMSPE, RPIQ, MPE,
MedPE, RMedSPE).
We should note that the last step of the algorithm represents a true external validation of the spatial
model because the model fitting is performed while masking the observations $Y_{V}$ used for
validation, both during the variogram fitting (step 4) and the kriging procedure
(step 8).
A similar procedure has recently been used and advocated by \citet{Goov2010},
using leave-one-out cross-validation.
It should be distinguished from other approaches where cross-validation embeds only
the kriging, and not the fitting of the variogram parameters (\textit{e.g.}
\citealp{Wu2009, Mabit2010, Xie2011}). In these cases, observations used
for validation have already been used for fitting the variogram, and the resulting model is not independent
of these observations.
We tested for differences between the performances of the six models in terms of each performance metric.
The distributions of a performance metric were compared using a t-test with a Bonferroni
adjustment. In the following, we use the terms MPE, RPIQ, RMSPE, R$^2$, MedPE and RMedSPE
to refer to their mean values over the 200 repetitions of the cross-validation.
The procedure was implemented in R using functionalities of the geoR and sp packages
\citep{Ribeiro2001, Bivand2008}. \\


\begin{figure}
\begin{center}
\includegraphics[width=85mm]{gbmGeostats3RevisForOAccess-Variogrames}
\end{center}
\caption{Dowd variograms of the residuals of the LU, L and F models to estimate random effects
of the LU$_g$, L$_g$ and F$_g$ models. Residuals were calculated as the difference between
the log-transformed response variable and the BRT model predictions. The variograms
were obtained by fitting the Mat\'ern models on the full dataset.}
\label{variograms}
\end{figure}


\section{Results}
\subsection{Variogram fitting on BRT residuals}
The degree of spatial correlation of residuals from the BRT models depended on the
complexity of the BRT models (Fig.~\ref{variograms}):
the residuals resulting from the LU$_g$ model were spatially structured, with a spatial dependence (defined as
partial sill/(nugget + partial sill), \citealp{Lark2004})
of 0.34.
Contrary to the residuals of the LU$_g$ model, the residuals of the more complex BRT models exhibited
very limited spatial structure (Fig.~\ref{variograms}).
For the F$_g$ model, the residuals had a spatial dependence of
0.057,
and for the L$_g$ model of
0.1.
Fig.~\ref{variograms} indicates that, from the simplest model (LU$_g$) to the most complex one (F$_g$),
the part of the spatial variability not accounted for by the deterministic spatial trend decreased.
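These spatial dependence values follow directly from the fitted nugget and partial sill parameters reported in Table~\ref{varioFit}; as a quick check:

\begin{verbatim}
# Spatial dependence = partial sill / (nugget + partial sill),
# using the fitted (C0, C1) values of the variogram parameter table.
for name, c0, c1 in [("LU_g", 0.112, 0.059),
                     ("L_g", 0.086, 0.010),
                     ("F_g", 0.082, 0.005)]:
    print(name, round(c1 / (c0 + c1), 3))  # 0.345, 0.104, 0.057
\end{verbatim}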
For the three models, Winsorizing was needed in order to produce valid models with respect to the assumptions
on the modelled variable. The $\overline{\theta}_w$ and $\breve{\theta}_w$ values obtained
after Winsorizing belonged to the confidence interval estimated for
each model. The percentage of outliers ranged from 1.9\% to
2.8\%.
These sites present extremely low or high SOC stocks that cannot be modelled by the BRT
models and the spatial term alone.
The number of such locations was halved between the LU$_g$ and the F$_g$ models (Table \ref{varioFit}),
as the latter, most complex model was better able to model these extreme values.
For this model, outliers appeared to be evenly distributed over the studied area (Fig.~\ref{residMaps}h).


\subsection{Cross-validation analysis \& performance of the proposed models}
Cross-validation yielded valid spatial models for 100\% of the cross-validation repetitions.
Fig.~\ref{CVPlot} shows that the F and L models and their spatial counterparts performed
globally similarly to each other, and differently
from the LU and LU$_g$ models. Average prediction performance of the models, expressed by the RPIQ index,
ranged for our six models between 1.27 and 1.42.
Increasing the complexity of the BRT models resulted in improved prediction performance, and
the best R$^2$ value was obtained for the F$_g$ model, with a value of 0.36.
\\




Predictions with the LU$_g$ model exhibited, on average, limited bias (Fig.~\ref{CVPlot}c).
Important differences appeared when comparing mean prediction errors or root mean squared
errors to median prediction errors or median squared errors (Fig.~\ref{CVPlot}c-f).
This indicated a skewed distribution of errors. When assessed by the median of the
error distribution (Fig.~\ref{CVPlot}e), the geostatistical predictions are
shown to have a positive median-bias, whereas the standalone BRT predictions have
a median-bias close to zero.
Similarly, the skewness of the distribution resulted in considerably larger root mean
squared errors (Fig.~\ref{CVPlot}d),
with a lowest value of 2.83~kg/m$^2$, compared to root median squared errors
(Fig.~\ref{CVPlot}f), with a lowest value of 1.43~kg/m$^2$.\\

\subsection{Performance comparisons of BRT models with or without spatial component}
For our French dataset, adding a spatial term to the models resulted in improvements in
terms of R$^2$, RPIQ and the mean measures of prediction, RMSPE and MPE. These improvements
were not significant for the L/L$_g$ and F/F$_g$ model comparisons, for R$^2$,
RPIQ and RMSPE.
However, in terms of the median measures, RMedSPE and MedPE, the standalone BRT predictions
generally gave the better results.


\begin{table*}
\caption{Fitted variogram parameters in transformed units and cross-validation statistics.
 Mat\'ern parameters: $C_0$ is the nugget variance, $C_1$ is the partial sill variance, $\varphi$ is a spatial
 parameter expressed in km and $\kappa$ is a smoothness parameter.
$\overline{\theta}$ and $\breve{\theta}$ are the validation statistics before Winsorizing and
$\overline{\theta}_w$ and $\breve{\theta}_w$ after Winsorizing (for the three models within the 95\% confidence interval).
$N$ is the number of plots Winsorized and $c$ is the Winsorizing constant.}
\begin{tabular}{lcccccccccc}
 \hline
 & $C_{0}$ & $C_{1}$ & $\varphi$ & $\kappa$ & $\overline{\theta}$ & $\breve{\theta}$ & N & c & $\overline{\theta}_w$ & $\breve{\theta}_w$ \\
 \hline
LU$_g$ & 0.112 & 0.059 & 95.99 & 0.40 & 1.18 & 0.46 & 61 & 2.18 & 1.000 & 0.445 \\
 L$_g$ & 0.086 & 0.010 & 11.99 & 10.00 & 1.15 & 0.46 & 46 & 2.28 & 1.000 & 0.452 \\
 F$_g$ & 0.082 & 0.005 & 16.18 & 10.00 & 1.12 & 0.44 & 42 & 2.31 & 1.000 & 0.433 \\
 \hline
\end{tabular}
\label{varioFit}
\end{table*}




\begin{figure}
\includegraphics[width=95mm]{gbmGeostats3RevisForOAccess-R2RMSPERPIQMPE}
\caption{
Performance of the six different models assessed using the six performance indices.
On each diagram, the values on the x-axis correspond to the aspatial models
(BRT only): the LU, L and F models. Values on the y-axis correspond to the LU, L and F
models plus a spatial term, \emph{i.e.} the LU$_g$, L$_g$ and F$_g$ models. Horizontal and
vertical bars represent the 95\% confidence intervals around mean values
over the cross-validation repetitions, for the BRT-only models and the BRT
models with a spatial term, respectively.
The dotted lines correspond to the $y=x$ function; for diagrams c and e, the $y=0$ and
$x=0$ lines were added.}
\label{CVPlot}
\end{figure}




\begin{figure*}
\begin{center}
\includegraphics{cvResiduals_globe_gray87.pdf}%
\caption{Average prediction error (predicted minus observed values) on each RMQS site over cross-validation repetitions where it was considered as independent data, with the LU, LU$_g$, L, L$_g$, F and F$_g$ models on maps a, b, d, e, g and h respectively. Positive values indicate a positive bias and \textit{vice versa}. Improvements from the LU to LU$_g$, L to L$_g$ and F to F$_g$ models are given on maps c, f, and i respectively. For instance, map c gives the absolute error of the LU model minus the absolute error of the LU$_g$ model. Positive values indicate that adding a spatial component improved predictions at this location. The size of the dots is an increasing function of the absolute error (or absolute improvement for maps c, f and i). Crosses are outliers of spatial models fitted on the whole dataset.
}
\label{residMaps}
\end{center}
\end{figure*}


The improvement resulting from the addition of the spatial component was a
decreasing function of the complexity of the BRT model.
This is shown clearly in Fig.~\ref{CVPlot}a, b, d and f.
The R$^2$ for the LU model was improved from 0.17 to 0.28
when adding a spatial term. The map of errors (Fig.~\ref{residMaps}a)
reveals regions where the LU model exhibited a strong negative bias,
such as south-west Brittany (area 1; for area numbers, see Fig.~\ref{mape}),
and mountainous areas such as the Massif Central (area 5), the Alps (area 4), and the Vosges
in the eastern part of the French territory (area 3). In other regions, it exhibited a
positive bias, such as parts of south-west France.
The map of improvement between the LU and LU$_g$ models
(the map of differences, for each RMQS site, between the absolute prediction error of one BRT model and
that of its spatial counterpart; Fig.~\ref{residMaps}c for the LU/LU$_g$ models)
shows areas with a dramatic improvement of predictions, specifically where the BRT predictions were
strongly biased.
It should be noted that the strongly biased predictions almost disappeared with the most complex model (model F, Fig.~\ref{residMaps}g).
Some under-estimations remained, although much smaller than for the LU model, in western coastal areas.

Measured using the R$^2$ index, the improvement yielded by adding a spatial component to the F model
was not significant, with R$^2$ values going from 0.35 to 0.36.
Noticeably, the root of the median squared prediction errors exhibited a limited
but significant degradation (from 1.35 to 1.43~kg/m$^2$ for the F and F$_g$ models, respectively).
The spatial distribution of improvements (Fig.~\ref{residMaps}i) for this model
was clearest in the south-west Brittany region.
In many other areas the improvement was more limited, with some sites
where prediction was improved and others where it was degraded (Fig.~\ref{residMaps}i).
These were areas with high absolute errors (\emph{e.g.} the Massif Central, Fig.~\ref{residMaps}g).
Interestingly, there was no significant difference in the performance of the F$_g$ and L$_g$
models. This result indicated that adding a spatial component to the intermediate BRT model yielded
similar results to adding a spatial component to the most complex model.

\begin{table}[ht]
\centering
\begin{tabular}{rrrrrrr}
 \hline
 & LU & LU$_g$ & L & L$_g$ & F & F$_g$ \\
 \hline
RPIQ & 1.27 & 1.38 & 1.42 & 1.46 & 1.42 & 1.45 \\
 R$^2$ & 0.17 & 0.28 & 0.33 & 0.35 & 0.34 & 0.35 \\
 RMSPE & 3.25 & 2.97 & 2.90 & 2.81 & 2.89 & 2.83 \\
 RMedSPE & 1.61 & 1.59 & 1.40 & 1.45 & 1.35 & 1.43 \\
 MPE & -0.60 & -0.02 & -0.49 & -0.16 & -0.48 & -0.19 \\
 MedPE & 0.09 & 0.51 & 0.06 & 0.36 & 0.07 & 0.34 \\
 \hline
\end{tabular}
\caption{Performance of the six different models assessed using the six performance indices.
Values given are the mean index values over the 200 repetitions of the
cross-validation procedure. All values except the R$^2$ and RPIQ indices are given in kg/m$^2$.}
\label{perfIndexes}
\end{table}




\section{Discussion}
\subsection{Spatial dependence of SOC stocks}




The spatial dependence of the BRT residuals decreased as the complexity of the BRT models was increased.
The variogram parameters provide some information about SOC controlling factors not included in the BRT
model. For instance, when land use and clay content are included (in the LU model), the correlation range of model
residuals lies between 300 and 400~km (Fig.~\ref{variograms}). This gives an indication of the
correlation range of the most important SOC controlling factors missing from the LU model.
Hence, when attempting to improve the LU model of SOC stocks, one should
look for controlling factors whose correlation range is less than 300 to 400~km.
The L model included other controlling factors, such as clay content, which decreased
both the total variance of residuals and their correlation range,
to around 100 to 200~km.
Lastly, the F model handled
most of the spatial dependence by including three more drivers: pH, parent material and regional
statistics regarding organic matter additions.
However, the high nugget in the variograms of the residuals of each BRT model,
including the F model, indicated
that other controlling factors greatly influence SOC spatial distributions at ranges below the resolution
of our dataset, \textit{i.e.} 16~km. This is consistent with the results of many other studies.
For instance, in \citet{Ungar2010}, the residuals of a model of SOC (\%) taking into account administrative zonation
and soil functional types were analysed by variography. They also found that most of the
spatially structured variance
was accounted for by a short-range component (in their case 1500--2000~m).
Another possible explanation for the high
nugget could be that the uncertainty attached to most of the covariate (driver)
maps is high, especially for the covariates derived from the 1:1~000~000 soil map.
\subsection{Assessing the performance of one single model}
It is difficult to draw conclusions regarding the performance of the present models compared to those of other studies dealing with SOC prediction and mapping.
Some deal with SOC contents while others deal with SOC densities or stocks. When
working on SOC stocks, the bulk densities are required, and if these are estimated
(rather than measured), then the methodology
for estimating bulk density might have great consequences \citep{Liebens2003}. Many studies use pedotransfer
functions (PTFs) for estimating bulk density without accounting for the associated errors \citep{Schrumpf2011}.
\citet{Ungar2010} estimated through a Monte Carlo approach
that the uncertainty resulting from their PTF ranged between 0.55 and 7.72~t~ha$^{-1}$,
depending on the SOC content. \cite{Schrumpf2011} showed that the use of PTFs for
estimating bulk densities can lead to wrong or biased estimates of SOC stocks.
However, it is currently not entirely clear to what extent measuring bulk densities is worthwhile, considering the cost.
This cost could alternatively be used to collect further SOC concentration data and thus improve calibration
and validation datasets.
Comparison between the studies is also made more complex by the differences between
validation procedures (validation with an independent dataset, $k$-fold cross-validation, leave-one-out
cross-validation). Furthermore, as noted by \citet{Minasny2013} and
\citet{Grunwald2009}, it is quite common that validation of SOC model predictions is missing entirely from a
study.
The best models presented in our study (the F and F$_g$ models, see Fig.~\ref{CVPlot})
performed comparably to
those of \citet{Lo_Seen2010}, fitted on soil data from the Western Ghats biodiversity hotspot (India). Their models yielded,
using a cross-validation scheme similar to the one applied here, an RMSPE of 2.6~kg~m$^{-2}$ and an R$^2$ of 0.45, to be
compared to the RMSPE of 2.83~kg~m$^{-2}$ and R$^2$ of 0.36 obtained here for the F$_g$ model, along
with an MPE value of -0.19~kg~m$^{-2}$. Considering
national SOC prediction, \citet{Phachomphon2010} produced 0--100~cm estimates using
inverse distance weighting with 12 neighbours and ordinary cokriging,
yielding MPEs of -0.2 and -0.1~kg~m$^{-2}$ and RMSPEs of 2.2 and 2.1~kg~m$^{-2}$,
respectively.
\citet{Mishra2009} produced estimates for the state of Indiana (USA), with an MPE of -0.59~kg~m$^{-2}$
and an RMSPE of 2.89~kg~m$^{-2}$.
This study involved the fitting of SOC depth distributions, as \nrecently proposed by \\citet{Kempen2011}. This last study is among the most successful, with a \nrigorous validation scheme and a moderate extent (125~km$^2$), giving an R$^2$ of 0.75 for prediction\nat independent locations. Other studies on areas of the same order of\nmagnitude as the area of the French territory (\\textit{i.e.} >50~000~km$^2$) are referenced in the comprehensive\nreview of \\citet{Minasny2013}. The R$^2$ values of our models\nare towards the lower end of the R$^2$ values in the studies mentioned in this review. Their\nperformance is also markedly lower than that of similar models\npresented in \\citet{bg-8-1053-2011} and \\citet{Meersmans2012}. \nThis drop in model performance is likely a result of the\nuncertainty of the clay content estimated from the 1:1~000~000 soil map (as compared to \nthe measured clay contents used in the previous studies). This is indicated by the\nimportance (quantified using the BRT variable importance index) of clay-related\nvariables in the L and F models of the present study. These variables ranked at best\n 7$^{th}$ in both the L and F models. This is to be compared to the first rank\nobtained by the $clay$ variable in the $Extra$ model presented in\n\\citet{bg-8-1053-2011}, fitted and validated with measured clay contents.\nThus, for the two previous studies of the French territory,\nthe model performance for mapping might have been\noverestimated because some variables used for validation were observed at site level.\nIn the present study, models are validated using data\nestimated from ancillary maps, providing a more realistic assessment of model\nperformance for mapping. Such small differences in the model validation schemes\nare difficult to trace and might further complicate comparison between different\nstudies.\n\n\n\\subsection{Distribution of prediction errors}\nAnother issue worth commenting on here is the distribution of SOC stock\nprediction errors.\nBRT modelling of log-transformed SOC stocks gave residuals that were close to normal, with outliers.\nThese residuals were modelled using a robust geostatistical approach, and a back-transformation \nproposed for log-normal ordinary kriging was applied.\nThe final predictions exhibited a limited bias (MPE=-0.19~kg~m$^{-2}$ for the F$_g$\nmodel, Table \\ref{perfIndexes}), a problem that can arise in lognormal kriging due to the sensitivity of the back-transform\n to the variogram parameters and to the assumption of a lognormal distribution \\citep{Webster2007}. \nAlthough we currently have no ready solution for providing unbiased predictions,\nespecially for the L$_g$ and F$_g$ models, we note that the MPE is small in comparison \nto the RMSPE (less than 5 \\% of the RMSPE), which compares favourably with the results of other studies reported above.\nWithout the spatial component, the BRT predictions (back-transformed with a simple exponential,\nsee Algorithm 1) showed negative mean-bias (i.e.
under-prediction on average).\nThis is logical because the BRT method ensures unbiased\npredictions of its predicted variable, here the log-transformed SOC stocks; therefore, \nback-transformation of the BRT predictions through the exponential function results \nin a negative mean-bias for SOC stocks on the original scale.\n\nFurther insight is provided by examining the behaviour of other performance\nindices, such as the median prediction error or the median squared prediction\nerror.\nThe lognormal kriging back-transformation aims to provide mean-unbiased predictions on the original\n scale, hence the reasonably small MPE. However, with a skewed distribution of errors, the predictions\n cannot also be made to be median-unbiased, hence the MedPE of the geostatistical predictions\n is positive. Without the geostatistical component, the back-transform of the BRT predictions\n (through the exponential function) preserves the median-unbiased property, giving low values\n of MedPE, but introduces mean-bias. Comparisons between the results of our BRT predictions\n and their spatial counterparts should be made with this in mind; the differences could be at\n least partly due to the different objectives of the back-transformed predictors. \nSince SOC distributions are most commonly described \nas log-normal, prediction error distributions are also skewed, and perhaps these\nfurther measures (MedPE and RMedSPE, which are robust\nto extreme prediction errors at a small number of locations) can add useful information about model performance.\n\n\nWe note here that the BRT approach could be applied to model the SOC stocks directly, without\n the need for any transformation (as shown by \\citealp{bg-8-1053-2011}). We would expect the resulting\n predictions to have a low MPE, but a positive MedPE (a similar pattern to the results of the\n geostatistical approach). We tested this direct BRT modelling approach with the F BRT model;\n mean prediction errors were improved from -0.48 to 0.01~kg~m$^{-2}$,\n whilst median prediction errors were\n increased from 0.07 to 0.45~kg~m$^{-2}$.\n In terms of squared errors, the RMSPE improved slightly from\n 2.89 to 2.82~kg~m$^{-2}$, \nwhilst the RMedSPE increased from 1.35 to 1.5~kg~m$^{-2}$.\nIn this work, we applied BRT\n modelling to the log-transformed data so that the residuals would be approximately normal, thus\n allowing the robust geostatistical approach to be applied. However, if all that were required\n were predictions of the SOC stock through a BRT approach, then it might be better to model the \nraw SOC stock data directly. \n\n\n\\subsection{Relevance of the models for SOC mapping}\n\nModel comparisons enable one to come up with recommendations regarding the best models \nfor addressing a specific question. Of course, the quality of the models should be assessed using several criteria,\nas the question of interest is asked within a specific context (data availability, nature of the considered systems, available\nstatistical and modelling knowledge, computing cost). Several comparison criteria may be defined: the \ntechnical knowledge (Know-Q) and the pedological knowledge (Know-P) needed for fitting,\nvalidating and applying models \\citep{Grunwald2009}. \nWe may add a criterion related to the nature of the required datasets, again for fitting, \nvalidating and applying models, and another one related to the performance of the models, assessed through validation\nprocedures.
Although other criteria might be defined, these might be considered the main ones for predictive models.\nThe best models would be those which, given the available Know-Q, Know-P and datasets, yield the best performance.\n\nSeveral studies of SOC mapping include model comparison in order to identify the best performing model and provide advice regarding which\nmodel should be used in a specific context. Comparing the results of these different studies is not straightforward since the \npedological contexts change from one study to another.\nIn studies based on the application of geostatistics, model comparison\nis usually carried out by comparing simple geostatistical models with more advanced\napproaches designed to incorporate covariate data (e.g. cokriging,\n\\citealp{MCBRATNEY1983}, linear mixed models, \\citealp{Lark2004}, or more generally\nscorpan-kriging models, \\citealp{McBratney2003}).\nThe conclusion is consistently that including variables representing SOC drivers in geostatistical models improves model\nperformance \\citep[for instance see][]{Kempen2011, Vasques2010, Ungar2010}. The cost of such an improvement is that it\nleads to an increase of the Know-P and the Know-Q. On the one hand, such models might involve a great \ndeal of technicality. On the other hand, the availability at observational sites of information regarding \nthe included drivers\nis then also required for the fit, the validation and, later on, the prediction. \\\\\n\nFewer studies have considered the question the other way around, by including geostatistics in regression-based scorpan\n models, such as the BRT models considered here, or by comparing regression models to regression-kriging models. \nOn a 187,693 km$^2$ area, \\citet{Zhao2010} showed that simple regression trees (RT) exhibited the best performance\n when compared to regression kriging and artificial neural network-kriging, among other methods. They concluded\n that the success of their predictive models mostly relies on their ability to integrate secondary information into spatial \nprediction. In our case, the conclusions are more nuanced. The LU$_g$ model applied a robust geostatistical approach\n to the residuals of the simplest BRT model (the LU model, which included land use and clay content as the only\n fixed effects, among the most important SOC drivers at the national scale of France,\n\\citealp{bg-8-1053-2011}).\n This approach exhibited comparable but slightly lower performance, in terms of R$^2$, RPIQ and RMSPE, than the \nmore complex regression models (L and F) processing all the available ancillary data. Therefore, we conclude\n that adding a spatial component to a simple regression model can give similar improvements to adding more \npredictors to the model.\n\n\nUnbiased predictions might be achieved either by BRT\nmodelling on the original scale \\citep[as shown by][]{bg-8-1053-2011} or by BRT\nmodelling of the log-transformed response and applying a geostatistical\ntreatment. When it comes to mapping, one may wonder whether preserving the mean of\nSOC stock distributions is more important than preserving the median. The mean might\nbe more important in order to report total SOC stocks at the national scale, but preserving the\nmedian might result in more realistic maps. It is essentially a modelling choice whether\nmean-unbiasedness or median-unbiasedness is required. \n\n\\subsection{Further recommendations for SOC mapping at the national scale}\nOur best model (the F$_g$ model) only explained 36\\% of the SOC variation.
\nIt is possible that local kriging methods, rather than the global kriging applied here, \ncould lead to improved predictions in some areas, although the choice of appropriate local \nneighbourhood sizes then poses an additional issue. Other regression models could be tested,\nsuch as support vector machines (SVM), random forests \\citep{hastie2001} or the\nCubist modelling approach (\\textit{e.g.} \\citealp{Bui2009}). \nThese models could result in different residual distributions but, in our opinion,\nthe consequences for the performance of their spatial counterparts are likely to\nbe limited. Some of them, such as SVM, require more technical knowledge, thus increasing the Know-Q factor, \ncompared to the BRT models proposed here, for which efficient working guides have been proposed \\citep{Elith2008}.\n\\citet{Grunwald2009} stated that future improvements in the prediction of soil properties\ndo not rely on more \nsophisticated quantitative methods, but rather on gathering more useful and higher-quality data.\nGathering more data or improving the modelling is indeed the choice modellers\nface when attempting to improve SOC maps. We show here that the choice might not be\nas straightforward as stated by \\citet{Grunwald2009}: at the national scale, even a\nsimple model based solely on land use and clay, when complemented by geostatistics,\nperformed comparably to a model where all the available ancillary data were included\n(for France, at the time of the study, these were land use, soil, climate and NPP\nmaps). Therefore, for a country where only land-use and clay maps were available, the most\nefficient way to improve predictions in the short term would certainly be to consider\ngeostatistical modelling of the residuals (\\textit{i.e.} improving the modelling, rather\nthan gathering new ancillary data). \nFurthermore, other datasets with the same extent (\\textit{i.e.} national extent)\nbut a different resolution might be better suited to geostatistics. Here,\nthe 16$\\times$16~km$^2$ grid does not allow for modelling spatial autocorrelations\noccurring at small scales. Many studies have demonstrated such an autocorrelation when more \"local\"\nneighbourhoods can be studied (\\citealp{Mabit2010, Don2007, Rossi2009, Wang2009, Spielvogel2009} \nwith an extent <50~km$^2$ and \\citealp{Mishra2009} at coarser extents and using a\nnon-systematic sampling scheme).\\\\\nOn the other hand, adding spatial terms to the most complex models only increased the Know-Q\nof our data-analysis scheme. \nMore generally, the higher the uncertainty in maps of ancillary variables, the more likely it is that models based solely on \nSOC spatial dependency or including only a few good quality (in terms of data uncertainty) predictors will outperform\ncomplex models using many ancillary variables. \n\n\nFor France, other SOC predictors could be included in our regression models,\nand might result in significant improvements. There are different possibilities \\citep{bg-8-1053-2011}, but of course, these\nimprovements depend on the increase in Know-P and data collection one is willing to consider.\nHaving a better soil map is obviously a very good candidate. This is exemplified here\nby the drop in the F model performance between the present study and the work by\n\\citet{bg-8-1053-2011}.
\\\\\n\nIt is also worth noting that an advantage of using multiple regression tools, such as\nthe BRT models, comes from studying the fitted relationships between the response \nand the predictors, which may in turn bring \nadditional knowledge. For instance, BRT was used in \\citet{bg-8-1053-2011} to rank the effects\nof the driving factors of SOC stocks.\n\n\n\n\\section{Conclusion}\nBased on the results of the present study, and others found in the literature, we formulate the following recommendations.\n These recommendations apply to France, but the French diversity in terms of pedoclimatic conditions might make them\n valid for other countries as well. If the information contained in the relationships between\n the ancillary variables and the SOC stocks is strong enough, then standalone robust regression models such as\n BRT (which enable one to take into account, in a flexible way, the non-linearities and interactions exhibited by the\n datasets) could prove sufficient for SOC mapping at the national scale. This conclusion is valid provided that i)\n care is exercised in model fitting \\citep{Elith2008} and validation, ii) the dataset does not allow for modelling\n local spatial autocorrelations, as is the case for many national systematic sampling schemes, and iii) the ancillary\n data are of suitable quality. However, the results in this paper demonstrate that it is also prudent to use\n geostatistical methods to check for spatial autocorrelation in the BRT residuals. If found, which was the case\n for the simpler of our BRT models (which failed to capture all the important SOC drivers at a\n national scale), then a kriging approach\n applied to the BRT residuals can provide a more accurate map of SOC stocks. Furthermore, even if the spatial correlation\n fails to significantly improve SOC predictions globally, it is possible that by mapping the BRT model residuals we\n can highlight regional errors in the BRT model, and thereby provide information to guide research into further SOC\n model development. \n\n\n\\section*{Acknowledgements}\nThe sampling and soil analyses were supported by a French Scientific Group of Interest \non soils: the GIS Sol, involving the French Ministry of Ecology, Sustainable Development\nand Energy (MEDDE), the French Ministry of Agriculture, Food and Forestry (MAAF), the\nFrench Agency for Environment and Energy Management (ADEME), the Institute for Research\nand Development (IRD), the National Institute\nof Geographic and Forest Information (IGN) and the National Institute\nfor Agronomic Research (INRA). This work was supported by the EU project ``Greenhouse\ngas management in European land use systems (GHG-Europe)''\n(FP7-ENV-2009-1-244122). The authors thank all the soil surveyors and technical\nassistants involved in sampling the sites. Special thanks are addressed to the technical\nassistants from the National French Soil Bank for sample handling and preparation. \n\\section*{References}\n\n\\bibliographystyle{elsarticle-harv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsection{Motivations} \n\n The way a granular bed responds to an applied shear stress reveals\n many of the peculiarities of this poorly understood ``state'' of\n matter~\\cite{jaeger2000}. When a granular bed is sheared slowly\n enough by an elastic medium driven at constant velocity, neither the\n shear stress nor the shear rate can be directly controlled from\n outside \\cite{annunziata16}.
Rather, the system sets itself in a state at the edge\n between jamming and\n mobility~\\cite{Dalton2001,pica10,dalton05,baldassarri06,petri08,geller2015,Zadeh18},\n exhibiting an intermittent flow also called stick-slip. This is an\n instance, among many others, of phenomena displaying intermittent and\n erratic activity, in the form of bursts, or {\\it avalanches},\n characterized by wild fluctuations of physical quantities, and for\n this reason named {\\it crackling noise}~\\cite{sethna01}. Examples\n include earthquakes~\\cite{main96}, fractures~\\cite{petri94},\n structural phase transitions~\\cite{cannelli93} and plastic\n deformation~\\cite{dimiduk06}. These diverse phenomena share several\n common statistical features. In particular, physical quantities\n often display long-range correlations and self-similar\n distributions, i.e. power laws, over a wide range of values. Such\n properties are usually ascribed to the vicinity of some critical\n transition~\\cite{bak88,sethna01}, which in granular media could be\n the jamming transition~\\cite{biroli2005}. Consistently, critical\n transitions bring about the existence of universality classes:\n systems that are microscopically very different can display similar\n and universal statistical properties in their critical dynamics.\nIn this spirit we have designed an experimental setup suitable for\nobserving such irregular granular dynamics~\\cite{dalton05},\ncharacterized by critical fluctuations and reminiscent of that\ndisplayed by the aforementioned wide class of physical systems.\n\nIn order to compare different systems exhibiting\ncritical dynamics, several quantities can be analyzed. Recent literature witnesses a surge of interest in the average avalanche (or burst) shape (or profile). Introduced in the context of Barkhausen noise in ferromagnetic materials~\\cite{Kuntz2000}, the average avalanche shape can provide a much sharper tool for testing theory against experiments than the simple comparison of\ncritical exponents characterizing probability distributions. As shown for simple stochastic processes, the geometrical and scaling properties of the average shape of a fluctuation depend on the temporal correlations of the dynamics~\\cite{baldassarri:060601,colaiori:041105,colaiori08}. Such observations have made it possible, for instance, to evidence a (negative) effective mass of magnetic domain walls~\\cite{zapperi05}. \n\nAverage avalanche shapes have been investigated for a variety of materials, well beyond magnetic systems~\\cite{Papanikolaou2011}. Among others: dislocation avalanches in plastically deformed intermetallic\ncompounds~\\cite{chrzan1994} and in gold and niobium crystals~\\cite{sparks2017}; stress drop avalanches at the yielding\ntransition in metallic glasses~\\cite{antonaglia14} and, via numerical\nsimulations, in amorphous systems~\\cite{ferrero16,Lagogianni2018}; bursts of load redistribution in heterogeneous materials under a constant external load~\\cite{PhysRevLett.111.084302}. Many biological studies have also measured average burst shapes in cortical bursting activity~\\cite{Roberts2014,Wikstro2015}, in transport processes in living cells~\\cite{PhysRevLett.111.208102}, as well as in ant~\\cite{Gallotti2018} and human~\\cite{Chialvo2015} activity. \nMany other bursty dynamics have been investigated by means of this general tool, such as stellar processes~\\cite{PhysRevLett.117.261101}, Earth's magnetospheric dynamics~\\cite{Consolini2008}, and earthquakes~\\cite{metha06}.
The dependence of the avalanche shape on the interaction range has been studied in elastic depinning models~\\cite{laurson13}. \n\nIn this paper we acquire and analyze for the first time the average shapes of the {\\it slip velocity} and of the {\\it friction force} in a sheared granular system, directly in the deep critical phase where it displays intermittent flow. Our findings also shed light on apparently contradictory recent observations \\cite{bares17,denisov17}, and supply new essential elements for the formulation of new and more effective dynamical models, with an important impact on the understanding of related natural and technological issues.\n\n\n\n\\subsection{Introduction}\n\nWe study the stick-slip dynamics at the level of the single slip event, as illustrated in Fig.~\\ref{fig1}.\nThe left panel reproduces the angular velocity, during a slip, of a slider that rotates while in contact with the granular bed. The middle panel shows the corresponding frictional torque experienced by the slider. \nThe motion can be described as a function of the {\\it internal avalanche time} $t$, which starts at the beginning of the slip and ends when the system sticks. Each slip event has its own duration $T$ and size $S$ (the grey area in Fig.~\\ref{fig1}, left panel). The average velocity shape is obtained by averaging over many slips with the same total duration $T$, as a function of the internal avalanche time $t$. A similar averaging procedure is followed to obtain the average friction shape, i.e. the average friction torque exerted by the granular medium at the internal time $t$ during a slip event of total duration $T$. The right panel of Fig.~\\ref{fig1} shows the intricate, complex relation between friction and velocity during the intermittent, stick-slip dynamics. \n\nIn our study we observe the existence of a cross-over from small to large slips. We identify it as a breakdown of the critical scaling and show that such a transition is in turn related to a change in the frictional properties of the system. Specifically, we find that the average velocity of the cross-over avalanches corresponds to a characteristic value marking a dynamical transition from weakening to thickening frictional behavior of the system. \nAverage shapes for avalanches of stress drop~\\cite{denisov17} and energy drop rate~\\cite{bares17} have recently been investigated in slow but continuous flow, where the velocity never drops to zero and the stress is the relevant fluctuating quantity. While in~\\cite{denisov17} average avalanches have been found to display symmetrical and self-similar shapes, in \\cite{bares17} these properties have been observed only in sufficiently small avalanches. Our investigations, conducted in the critical state, contribute to clarifying the origin of this contradictory behavior observed in a different situation.\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\linewidth]{Figures\/Fig1}\n\\caption{Sample of raw data for a slip event. Left: instantaneous velocity of the slider versus time. The upper axis of the graph reports the total time elapsed from the beginning of the experiment, while the bottom axis indicates the internal avalanche time, starting from $0$ when the slip begins, and ending at the slip duration $T$. The area below the curve is the total slip size $S$. Center: Friction torque experienced by the slider in the same time window.
Right: Friction torque vs instantaneous slider velocity in the same time window.}\n\\label{fig1} \n\\end{figure}\n\n\n\\subsection{Experimental set up} \n\nThe experimental set up (see Fig.~\\ref{fig-expsetup}) is similar to that\nemployed in~\\cite{dalton05,baldassarri06,petri08,leoni10,pica12}\nand described in more detail in the Appendix.\nThe apparatus consists of an assembly of glass spheres\nlying in an annular channel and sheared by a horizontally rotating top\nplate driven by a motor. The instantaneous angular positions of the\nplate and of the motor, respectively $\\theta_p$ and $\\theta_0$, are acquired \nby means of two optical encoders.\n\nThe plate is coupled to the motor through a soft torsion spring of elastic constant $k$, so the instantaneous frictional torque, $\\tau$, exerted by the granular medium can be derived from the equation of motion for the plate:\n \\begin{equation}\n \\label{motion}\n\\tau = -k (\\theta_0-\\theta_p) - I \\ddot{\\theta}_p,\n\\end{equation}\nwhere $I$ is the inertia of the plate-axis system. The motor angular speed $\\omega_0$ is kept constant, so that $\\theta_0(t)=\\omega_0 t$, but the interaction between the top plate and the granular medium is crucial in determining the instantaneous plate velocity, leading to different possible regimes. When both the driving speed and the spring constant are low enough, the system approaches the critical regime, in which the plate performs highly irregular and intermittent motion.\n \n\n\\subsection{Scaling analysis}\n\n\nWe have performed long experimental runs in the critical, stick-slip regime, measuring the angular coordinate of the plate $\\theta_p(t)$, from which we have derived the plate angular velocity $\\omega_p(t)$. We have collected statistics for a large number of avalanches: the distributions of the corresponding durations $T$ and sizes $S=\\int_0^T \\omega_p(t) dt$ are shown in Fig.~\\ref{fig2}. Both distributions exhibit a slow decay, roughly close to a power law, terminating in a bulging cutoff at large values. Similar broad distributions are shared by other quantities, e.g. the plate velocity, $\\omega_p$~\\cite{baldassarri06} (not shown).\n \\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\linewidth]{Figures\/Fig2}\n \\caption{Left: Probability distribution of slip extensions ($S$). Right: Probability distribution of slip durations.}\n \\label{fig2} \n \\end{figure}\n As recalled in the introduction, a power-law decay of distributions is generally considered the hallmark of criticality. If this is the correct scenario, one should observe self-similar scaling relations in average quantities too. In particular, we consider the average shape of the velocity during an avalanche of a fixed duration, defined as:\n\\[\n\\langle \\omega_p(t)\\rangle_T = \\frac 1{N_T} \\sum_i \\omega^{(i)}_p(t),\n\\] \nwhere $\\omega^{(i)}_p$ is the plate velocity during the $i$-th observed avalanche of duration $T$, whose total number is $N_T$, and $t$ is the internal time within the slip: $0<t<T$. Since $\\langle\\omega_p(t)\\rangle_T$ depends on both $t$ and $T$, criticality should imply that an invariant function $\\Omega$ exists, such that it can be expressed as:\n\\begin{equation}\n\\langle\\omega_p(t)\\rangle_T = g(T)\\, \\Omega(t\/T).\n\\label{scaling}\n\\end{equation}\nThe function $g(T)$ determines how the average event size $\\langle S\\rangle_T$ scales with respect to\nthe slip duration $T$.
In fact, integrating the above equation with respect to $t$ one gets:\n\\begin{equation}\\label{eq-2}\n\\langle S \\rangle_T = T\\, g(T)\n\\end{equation}\n(where without loss of generality we have assumed $\\int_0^1 \\Omega(x) dx = 1$). The function $\\Omega$ represents the average invariant\npulse shape, which is expected not to depend on the slip duration and can be computed via the above equations as\n\\begin{equation}\n\\Omega(t\/T) = T\\,\\frac{\\langle\\omega_p(t)\\rangle_T}{\\langle S\\rangle_T}.\n\\label{faverage}\n\\end{equation} \n\nThis scaling scenario is produced by several theoretical models of critical dynamics. One paradigmatic model for crackling noise is the so-called ABBM model~\\cite{Alessandro1990}, proposed to describe the intermittent statistics of the electric noise recorded during hysteresis loops in ferromagnetic materials (Barkhausen noise). It is simple enough to allow exact analytical results~\\cite{Alessandro1990,Feller1951,colaiori08,Papanikolaou2011,Dobrinevski2012}, and it predicts power-law distributions for avalanche sizes and durations, as well as a parabolic average avalanche shape, in the scaling regime. In the concluding section we will discuss the connections between this model and what we observe in our study.\n\n\\begin{figure}\n\\includegraphics[width=\\linewidth]{Figures\/Fig3}\n\\caption{\\label{fig3} Scatter plot of size $S$ vs duration $T$ of each single slip. Symbols (colors online) correspond to the different duration classes employed for the average shape analysis. Inset: Average slip velocity $\\langle S \\rangle_j\/\\langle T \\rangle_j$ for each class as a function of the average duration $\\langle T \\rangle_j$ of the class. Lines (both in the main plot and in the inset) are guides to the eye for the power-law behaviour $S\\approx T^{2.2}$ and the linear behaviour $S = \\omega_c T$ (where $\\omega_c=0.04$, see text and Fig.~\\ref{fig5} for definition).}\n\\end{figure}\n\n\n\\subsection{Average shape of slip velocity}\n\n\n\nTo investigate the properties of the average pulse shape, and to test its invariance and the scaling hypothesis, we have divided all the avalanches observed in the experiments into classes according to their duration (see Appendix). Figure~\\ref{fig3} (main panel) shows the slip size as a function of its duration for all the slips considered in the statistics, with different colors corresponding to the different duration classes. For each class, $j$, we have computed the average slip size $\\langle S\\rangle_j$ and duration $\\langle T\\rangle_j$, and \nthe average velocity $\\langle\\omega_p(t)\\rangle_j$ measured as a function of the internal time $t$. According to Eq.~(\\ref{faverage}), in order to obtain $\\Omega(t\/T)$, this average velocity has then been normalized by the ratio $\\langle S\\rangle_j\/\\langle T\\rangle_j$.\n\nThe resulting average shapes for each duration class are shown in Fig.~\\ref{fig4} (the light grey points in Fig.~\\ref{fig3}, corresponding to very short slips at the limit of the system resolution, have been discarded). All classes exhibit comparable values of the rescaled maximum velocity, implying that longer avalanches are also faster. However, the rescaled average shapes reveal that there are two kinds of avalanches. Some of them, say {\\it short}, have the shape described by a unique function $\\Omega(t\/T)$, visible in Fig.~\\ref{fig4} (left panel). That is, their size and duration are related by the well-defined scaling law of Eq.~(\\ref{scaling}). On the contrary, the average velocity shapes of {\\it long} avalanches (right panel) change with the duration and cannot be reduced to a universal form by a homogeneous rescaling of the variables.
Moreover, they do not display the almost symmetric shape characterizing small avalanches.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\linewidth]{Figures\/Fig4} \n\\end{center}\n\\caption{\\label{fig4} Average velocity profiles of slips from experiments,\n rescaled by their duration $T$ and size $S$ according to Eqs.~(\\ref{scaling}) and~(\\ref{faverage}). Different curves correspond to different slip duration ranges. Colors refer to the duration classes, as shown in Fig.~\\ref{fig3}. The left panel shows classes of ``short'' avalanches, the right panel classes of ``large'' avalanches (see also Appendix). }\n\\end{figure}\n\n\n\nAs anticipated, Bar\\'es and coworkers~\\cite{bares17} have recently measured the average shape of stress drop rate avalanches in a bidimensional granular system driven at constant shear rate. Similarly to the present findings, they observe that larger slips develop left asymmetries. They have hypothesized a possible role of the static friction between the particles and the supporting glass, and of nonlinear elasticity, given the relatively soft nature of the grain material employed in their experiments. We can however exclude these factors in our experiments, where the grain-wall interface is small with respect to the bulk and the beads are made of glass.\nThe leftwards asymmetry observed in experiments represents a very interesting phenomenon, which in general is expected from non-trivial dynamical effects, and cannot be due to the simple inertia of the moving plate (which would produce the opposite asymmetry~\\cite{baldassarri:060601,colaiori:041105}). \n\nIn some magnetic materials, a leftwards asymmetry has been observed and related to memory effects acting as an effective negative mass of domain walls~\\cite{zapperi05}. In our experiments we cannot exclude the existence of such an ``effective'' inertia of the system, due to some memory introduced by the underlying granular medium. For instance, in some experiments~\\cite{nasuno98} researchers noted an increased inertia of the slider moving on a granular bed, due to the grains dragged by the slider itself, and a similar augmented inertia has also been observed in our previous experiments~\\cite{baldassarri06}. Since the quantity of grains dragged by the disk could change during the irregular motion of the system, one should consider the inertia as a dynamical quantity, rather than a constant, and this could in principle be one origin of the asymmetries. A left asymmetry has also been observed in earthquakes~\\cite{Houston1998,metha06}. \n\n\n\\subsection{Breaking of scaling}\n \nMore insight into the mechanisms leading to the scaling breakdown can be gained by looking again at the plot relating $S$ and $T$ shown in Fig.~\\ref{fig3}. The first piece of information coming from this plot is that there exists a definite statistical scaling between slip size and duration, as shown by the scattering of the data. The white square symbols in the main plot represent the average slip size and duration of each class (statistical errors are negligible on these averages). It is seen that, at least for the four lower classes, they follow an algebraic relation: $\\langle S\\rangle_j \\simeq \\langle T\\rangle_j^{\\alpha}$ (red continuous line). The value of the exponent turns out to be $\\alpha \\simeq 2.2$. This is close to, but clearly different from, the value of $\\alpha = 2$ expected from extant models (for instance the ABBM model mentioned above~\\cite{Alessandro1990}).
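For concreteness, the class-averaging and exponent-fitting procedure just described can be summarized by the following minimal sketch. It is written in Python with hypothetical variable names and is not our actual analysis code; slips are assumed to be available as sampled velocity signals, and the duration class edges are those of Table~\\ref{tableclassesEA} in the Appendix.

\\begin{verbatim}
import numpy as np

# duration class edges in seconds, as in the Appendix table
edges = [0.309, 0.489, 0.772, 1.219, 1.925, 3.04, 4.8]

def class_averages(T, S, edges):
    """Group slips into duration classes and fit <S>_j ~ <T>_j**alpha."""
    T, S = np.asarray(T), np.asarray(S)
    Tj, Sj = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (T >= lo) & (T < hi)
        if mask.any():
            Tj.append(T[mask].mean())          # <T>_j
            Sj.append(S[mask].mean())          # <S>_j
    # least-squares fit of log<S>_j = alpha*log<T>_j + const
    alpha, _ = np.polyfit(np.log(Tj), np.log(Sj), 1)
    return alpha, np.array(Tj), np.array(Sj)

def average_shape(slips, lo, hi, npts=50):
    """Rescaled average shape Omega(t/T) = T*<omega_p(t)>_T / <S>_T
    for slips of duration lo <= T < hi (cf. the equation for Omega)."""
    x = np.linspace(0.0, 1.0, npts)
    W, S, T = [], [], []
    for t, w in slips:          # t: sample times, w: omega_p samples
        dur = t[-1] - t[0]
        if lo <= dur < hi:
            W.append(np.interp(x, (t - t[0]) / dur, w))
            S.append(np.trapz(w, t))
            T.append(dur)
    return x, np.mean(T) * np.mean(W, axis=0) / np.mean(S)
\\end{verbatim}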
\n\n\nThe other piece of information supplied by the scatter plot of Fig.~\\ref{fig3} is that this behavior changes at large slips, where a linear dependence, $\\langle S\\rangle \\propto \\langle T\\rangle$, looks more appropriate (yellow dashed line). Interestingly, the crossover between the two behaviors takes place around the fourth class, exactly where the scaling of the average pulse shape, shown in Fig.~\\ref{fig4}, breaks down. \nThe inset of Fig.~\\ref{fig3} highlights this cross-over more clearly. There, we have plotted the quantity $\\langle S\\rangle_j\/\\langle T\\rangle_j$ as a function of $\\langle T\\rangle_j$. We observe a weakly superlinear relation for small slips, followed by\na plateau at large slips. Note that the ratio between $S$ and $T$ is nothing but the average plate velocity during the slip. This observation allows us to identify a critical velocity as the ratio between the average slip size, $\\langle S\\rangle \\approx 0.063$ rad, and the average duration, $\\langle T\\rangle \\approx 1.57$ s, of the fourth class: $\\omega_c = \\langle S\\rangle\/\\langle T\\rangle \\approx 0.04$ rad\/s. We speculate that during large slips ($T>T_c$), when the plate reaches high velocities $\\omega_p>\\omega_c$, it could experience some sudden increase of friction.\nIn the next section it will be seen that this increase indeed appears, as a dynamical effect. \n\n\\subsection{Stochastic friction}\n\nThe forces ruling the slip dynamics are the spring torque and the granular friction. While the first\njust depends linearly on the instantaneous angle, the second displays a complex behavior (see Fig.~\\ref{fig1}, central and right panels) from which interesting features emerge.\n\nThe classical Mohr--Coulomb criterion predicts constant friction at low shear, and increasing values when the system enters the Bagnold regime~\\cite{bagnold54,bagnold66}, a behavior well observed experimentally at constant shear (see e.g. \\cite{savage84}). \n \\begin{figure}\n \\includegraphics[width=\\linewidth]{Figures\/Fig5}\n \\caption{\\label{fig5}\n Conditioned average friction torque (see Eq.~\\ref{avefric}) as a function of the instantaneous plate velocity in experiments.\n }\n \\end{figure}\n However, it is doubtful whether this behavior could be relevant to the stick-slip dynamics observed in the critical regime. More generally, friction in granular systems is usually measured under controlled shear strain or stress, but its properties can be dramatically different when observed in the self-organized state, as exemplified in Fig.~\\ref{fig1}, right panel. Some statistical features of friction in this state have been investigated in~\\cite{dalton05,baldassarri06,petri08}, but although this quantity plays a crucial role in the system dynamics, it has never been systematically measured during stick-slip to date.\n\nIn the critical regime friction is a random quantity. It depends on the details of the network of contacts\n between particles in the granular bed. Fluctuations in the frictional response of the granular medium result from the stress propagation on the evolving network of grain contacts, and are at the very origin of the stochasticity of the motion.\nThis fact has a number of consequences and some subtleties. A random friction force, as a stochastic quantity, can be described by statistical estimators like averages, moments, correlators, etc. Nevertheless, several averages can be defined, which depend on the driving protocol and can be very different from each other. \nMore specifically, one can consider the time average of the friction over the full dynamics, but this is not always really meaningful, especially in the critical regime.
As shown in~\\cite{dalton05}, the statistical distribution of friction in this regime is characterized by fat tails, as opposed to continuous sliding, where it is normal. Another possible average~\\cite{baldassarri06,leoni10} is the average friction {\\em conditioned} on the (instantaneous) plate velocity: \n\n\n \\begin{equation}\n \\label{avefric}\n \\tau_f(\\omega) = \\lim_{T\\to \\infty} \\frac{\\int_0^T \\tau(t) \\delta(\\omega-\\omega_p(t))dt}{\\int_0^T \\delta(\\omega-\\omega_p(t))dt}.\n \\end{equation}\n \nIn Fig.~\\ref{fig5} we plot this conditioned average friction during the stick-slip critical regime.\nAs noted in~\\cite{baldassarri06}, an interesting Stribeck-shaped (that is, a shear weakening followed by a thickening) friction curve appears, featuring weakening at small velocities and recovering the Bagnold behavior at high velocities. However, this velocity weakening arises as a dynamical effect. In fact, a different driving protocol can give different results: for instance, at constant shear~\\cite{savage84} the average friction is constant at low and intermediate speeds.\n\n\nThe analysis of Fig.~\\ref{fig5} allows us to identify a velocity corresponding to the position of the minimum of the average friction $\\tau_f$. Our experiments clearly indicate that the position of this minimum does not depend on the drive velocity (Fig.~\\ref{Fig6SeveralDrives} in the Appendix) and is always attained near the velocity $\\omega\\approx 0.04$ rad\/s. This value is very close to the value $\\omega_c$ marking the crossover in the scaling of $S$ vs $T$ (see Fig.~\\ref{fig3}), which in turn is related to the breakdown of scaling of the average avalanche shape shown in Fig.~\\ref{fig4}.\nThis corroborates the previous interpretation of the crossover phenomena and of the breaking of the critical scaling of the dynamics as due to the weakening followed by the increase of friction experienced by the plate during larger, faster avalanches (Fig.~\\ref{fig5}).\n\n\nIn order to better investigate whether and how the dynamical behavior of friction can influence the average velocity shape, we have also analyzed the average shape of friction along the slip. \nIn a fashion analogous to that used for the velocity shapes, one can define $\\langle \\tau(t)\\rangle_T$ as the average frictional torque for slips of the same duration $T$. In practice, we have computed the average value of the friction torque over slips of similar duration $T$, according to the same duration classes adopted for the velocities (see Fig.~\\ref{fig3} and Appendix). \nThe results, presented in Fig.~\\ref{fig6}, show that the breaking of scaling of the velocity shapes corresponds to a change in the behavior of $\\langle \\tau(t)\\rangle_T$. \nFor small avalanches, i.e. those for which the average velocity shape obeys scaling (curves plotted in the left graph of Fig.~\\ref{fig6}),\nthe average friction maintains an almost constant value along the whole slip, independent of the slip duration.\nOn the contrary, the curves corresponding to longer slips (shown in the right plot of Fig.~\\ref{fig6}) display different shapes that, as in the case of the velocity (Fig.~\\ref{fig4}), strongly depend on $T$ and cannot be collapsed. Note also that higher friction is experienced during longer slips.\n\nLet us stress here the difference between the two averaging procedures considered in this work (a numerical sketch of the conditional estimator is given below).
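In practice, the conditional average of Eq.~(\\ref{avefric}) is estimated from the sampled signals by replacing the delta functions with velocity bins, the standard binned estimator for conditional averages. The following is a minimal Python sketch with hypothetical variable names, not our actual analysis code; logarithmically spaced bins may be preferable, given the broad distribution of plate velocities.

\\begin{verbatim}
import numpy as np

def conditional_friction(tau, omega_p, nbins=60):
    """Binned estimate of tau_f(omega): average of the sampled friction
    torque tau, conditioned on the instantaneous plate velocity omega_p."""
    tau, omega_p = np.asarray(tau), np.asarray(omega_p)
    edges = np.linspace(omega_p.min(), omega_p.max(), nbins + 1)
    idx = np.clip(np.digitize(omega_p, edges) - 1, 0, nbins - 1)
    num = np.bincount(idx, weights=tau, minlength=nbins)  # sum of tau per bin
    den = np.bincount(idx, minlength=nbins)               # samples per bin
    centers = 0.5 * (edges[:-1] + edges[1:])
    ok = den > 0
    return centers[ok], num[ok] / den[ok]
\\end{verbatim}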
The averages $\\langle \\tau \\rangle_T$ shown in Fig.~\\ref{fig6} are performed over slips of similar duration, at the same internal avalanche time $t$. Instead, the (conditional) average $\\tau_f$, defined in Eq.~(\\ref{avefric}) and shown in Fig.~\\ref{fig5}, mixes events of any duration, and it depends on the instantaneous plate velocity $\\omega_p$. The two quantities capture different aspects of the same (stochastic) physical phenomenon.\nNevertheless, the combination of the two analyses suggests that the rather complex friction-weakening behavior of $\\tau_f$ is mainly due to large slips, which show a non-constant average friction $\\langle \\tau \\rangle_T$ in time (see Fig.~\\ref{fig6}, right panel), in contrast with small avalanches, where the average remains essentially constant. \n\n\nBy combining the analysis of the friction and velocity shapes, one can consider \nthe curves resulting from plotting the average friction as a function of the average velocity, in slips of similar duration, as shown in Fig.~\\ref{fig7}. \nThey show that while, as anticipated, friction has a weak velocity dependence in small slips (left panel), in large ones it splits into a two-valued function (right panel), \ndisplaying a hysteresis, with clearly different dependencies on the (average) plate velocity in the accelerating and decelerating phases of the slips. \nThis evolution is very similar to that observed in periodic stick-slip~\\cite{nasuno97,nasuno98}, where all slips have identical extension, duration, and velocity profile.\n\n\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\linewidth]{Figures\/Fig6} \n\\end{center}\n\\caption{\\label{fig6} Average friction torque along slips of different duration as a function of rescaled time in experiments.\n Colors refer to the duration classes, as shown in Fig.~\\ref{fig3}. The left panel shows classes of ``short'' avalanches, the right panel classes of ``large'' avalanches (see also Appendix).}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\linewidth]{Figures\/Fig7} \n\\end{center}\n\\caption{\\label{fig7} Average friction torque along slips of different duration as a function of the normalized average slip velocity in experiments.\n Colors refer to the duration classes, as shown in Fig.~\\ref{fig3}. The left panel shows classes of ``short'' avalanches, the right panel classes of ``large'' avalanches (see also Appendix).}\n\\end{figure}\n\n\n\n\n\n\\subsection{Considerations and conclusions}\n\nOur experiments show a good scaling of the average velocity shape for small avalanches, with an almost symmetric average shape.\nFor larger avalanches, however, the scaling of Eq.~(\\ref{scaling}) is broken:\nfor large slips the shape develops a clear leftwards asymmetry, in agreement with what is observed in seismic data~\\cite{Houston1998,bares17} (and recently in~\\cite{bares14}).\n\n\nOur analyses show that the breakdown of velocity scaling goes along with changes in the friction behavior, pointing to a close relation between the two phenomena. By contrast, spring-block models with only Coulomb friction generate symmetric slips~\\cite{aharonov04,bizzarri16}. Effective friction laws accounting for elapsed time and\/or space have been incorporated in solid-on-solid interface models, through the dependence on so-called state variables~\\cite{ruina83,dieterich94,baumberger06}.
\n These {\\it rate-and-state} laws are often adopted for studying and modeling co-seismic fault shearing, together with other, simpler forms \\cite{scholz98,bizzarri2003,kawamura2012,bizzarri16}. They are essentially phenomenological and can describe both velocity weakening and hardening, depending on the adopted parameters (which are not derivable from microscopic principles). These laws \n have been shown to work to some extent also for interstitial granular matter \\cite{marone98}, but with some inconsistencies \\cite{mair99}. Moreover, they have been drawn from experiments where the velocity is forced to change in sudden steps, and they do not seem to have ever been investigated in the critical stick-slip regime. Attempts to do this with smoothly varying velocity have been made in \\cite{leoni10}. In a very recent work \\cite{degiuli17} both friction weakening and hysteresis\n have been numerically investigated during granular shear cycles, showing that these are due to contact instabilities induced by the acoustic waves generated during granular fluidization. It is thus clear that granular flow cannot be effectively modeled without the inclusion of more refined and realistic friction laws.\n\n\nAn effective modeling approach to the critical granular dynamics cannot exclude a stochastic description of friction either, which generates the unpredictability of the slips and their range of variability, with the consequent changes in the slip shapes. To our knowledge, the only few attempts in this direction~\\cite{baldassarri06,leoni10,dahmen11} are inspired by the aforementioned {\\it ABBM model}~\\cite{Alessandro1990}, which represents the mean-field approximation for the motion of a driven elastic interface in a random environment~\\cite{Zapperi1998,LeDoussal2012}.\nFrom the dynamical point of view, it describes a spring--slider model subjected to a friction with both a viscous and a random pinning component, in the overdamped (i.e. negligible inertia) approximation. At small but finite driving rate, the ABBM model predicts an intermittent, critical dynamics for the block motion. Similarly to our observations, the avalanche statistics show a scaling regime for short slips, whose average velocity has a parabolic shape.\nHowever, an exponent $\\alpha =2$ relates $\\langle S \\rangle$ to $\\langle T \\rangle$, which is different from what is observed in our experiment. Moreover, for longer slips, ABBM predicts flatter symmetric shapes, witnessing a cut-off in the velocity correlations. No inertial effects are present, due to the overdamped approximation. \n\nA variant of the ABBM model for critical granular dynamics has been introduced in~\\cite{baldassarri06}, where, based on empirical observations, a simple Stribeck-like rate dependence of the average granular friction, showing a minimum, was adopted. Moreover, more physically, a cut-off in the spatial correlation of the random force was considered and, at odds with the original model, inertia was taken into account. Later on, attempts to introduce a state-dependent weakening friction into the model have been made~\\cite{leoni10}, and further investigations are in progress.\n\nWe think that the insights provided by the present study can explain the contradictory recent observations in \\cite{bares17,denisov17} and can be useful to advance such efforts to improve models.
In particular, they show that inertia can play an important role both in the weakening hysteresis~\\cite{degiuli17} and in determining the scaling exponent $\\alpha$ (an inertial ABBM model has been studied in~\\cite{LeDoussal2012inertial}). \nEven at the microscopic level, grain inertia can influence the avalanche statistics. For instance, in sandpile models, largely studied in the context of SOC (Self-Organized Criticality), the tendency of real sand grains to keep moving once they start facilitates the emergence of huge avalanches. Recent theoretical developments propose, in the presence of such facilitation effects, a scenario called Self-Organised Bistability~\\cite{DiSanto2016}, where again a breaking of scaling is associated with the appearance of large avalanches (``kings'').\n\n\nWe conclude that weakening is a genuine property exhibited by granular dynamics at variable shear rate, and that randomness and memory are general features of friction that cannot be overlooked in the formulation of effectual models. Such models can have an impact on the understanding of many phenomena occurring in the large realm of granular systems, \nand in particular of self-organized natural phenomena like landslides and earthquakes, where it is not yet clear how different mechanisms can contribute to the shear weakening observed in co-seismic fault shearing~\\cite{ditoro11}. \nFurther investigation of theoretical models incorporating more realistic, and specifically memory-dependent, friction laws, together with new experiments, will allow a better understanding of the mechanism by which criticality breaks down.\n\n\n\\section*{Acknowledgments}{This work has been supported by the grant FIRB RBFR081IUK\\_003\nfrom the Italian Ministry for Education and Research.\n}\n\n\n\\section[A]{Appendix}\n\n\\subsection{The experimental set up}\n\n\\begin{figure}[h]\n \\begin{minipage}{0.4\\textwidth}\n \\includegraphics[width=\\textwidth]{AppFigures\/p-figure-1.png}\n \\end{minipage}\n \\hfill\n \\begin{minipage}{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{AppFigures\/img-Ciambellone3-63447.png}\n \\end{minipage}\n \\caption{Photo (top) and schematic (bottom) of the experimental set up.}\n \\label{fig-expsetup}\n \\end{figure}\n\n \n\nThe experimental apparatus utilized for this research consists of a\ncircular PPMI channel of outer and inner radii $R$ = 19.2 cm and $r$ =\n12.5 cm, respectively. The channel is 12 cm high and is almost\nfilled with a 50\\%-50\\% bidisperse mixture of glass beads, with radii $r_1$=1.5\nmm $\\pm$ 10\\% and $r_2$=2 mm $\\pm$ 10\\%. \n\nA top plate, fitting the channel, can be rotated and has a few layers of grains glued\nto its lower face in order to better drag the underlying granular medium. The plate has mass $M$ = 1200 g and moment of inertia $I$ = 0.026 kg\nm$^2$, and it is free to move vertically, implying that in our experiments\nthe medium can change volume under a nominal pressure of $p= M g \/\n[\\pi (R^{2} - r^{2} )] \\approx 176$ Pa. The plate is connected to one end of a torsion spring of elastic constant $\\kappa =$ 0.36 Nm\/rad, while the other end of the spring is rotated by a motor at constant angular velocity $\\omega_d$. \n The angular positions of the\nmotor and of the plate are supplied by two optical encoders positioned on\neither side of the torsion spring, each one having an angular\nresolution of $3 \\cdot 10^{-5}$ rad and being sampled at 50 Hz.
These measurements provide the instantaneous plate position and velocity, \n$\\theta_p$ and $\\omega_p$, as well as the friction torque, which is obtained from the angular difference between motor and plate (see Eq.~(\\ref{motion})).\n\n\n\n\n\n\\subsection{Experimental analysis}\n\nIn principle each single slip event, or {\\it avalanche}, begins when\n$\\omega_p$ starts to differ from zero and ends when $\\omega_p$ goes\nback to zero. However, in practice it is necessary to choose a\nthreshold value $\\omega_{th}$ to cross, in order to get rid of the\ninstrumental noise. This choice is to some extent arbitrary; however,\nall the results have been observed to be independent of\nthe chosen threshold, as long as it is small and different from 0. For our analysis we have set $\\omega_{th}=0.00175$ rad\/s, and considered the seven time series reported in Table~\\ref{tablediffdrives}. \n\n\n \n \n \\begin{table}\n \\begin{center}\n \\small\n \\begin{tabular}{c|cccc}\n series\n & duration & \\# of points & driving $\\omega_d$ & \\# of slips \\\\\n & \n (minutes) & & (rad\/s) & used in analysis \\\\\n \\hline\n $(EA)$\n & 3900 & 5849962 &\n 0.0015 & 6014 \\\\\n $(EB)$\n & 673 & 2020079 &\n 0.0022 & 1625 \\\\\n $(EC)$\n & 1200 & 3600060 &\n 0.0044 & 5826 \\\\\n $(ED)$\n & 4080 & 12240020 &\n 0.0055 & 2451 \\\\\n $(EE)$\n & 360 & 1079977 &\n 0.011 & 3725 \\\\\n $(EF)$ \n & 240 & 720007 &\n 0.021 & 3973\\\\\n $(EG)$\n & 210 & 630014 &\n 0.033 & 4300 \\\\\n \\end{tabular}\\\\\n \\caption{\\label{tablediffdrives} Features of the analyzed series of experiments with different drives.}\n \\end{center}\n \\end{table}\n \n\nThe avalanches of each series have been grouped into classes on the basis of their duration, according to the first column of Table~\\ref{tableclassesEA}.\nFor each class $j$ the average avalanche duration $\\langle T\\rangle_j$ and size $\\langle S\\rangle_j$ have been evaluated, and the instantaneous average velocity has been computed at a set $\\{t_i\\}$ of discrete times, $0 \\le t_i \\le \\langle T\\rangle_j$ (see main text). \n\nAvalanches at the extremes of the distributions have been discarded.\nFor example, durations and sizes of the avalanches from the series (EA) are plotted in Fig.~\\ref{fig3} of the main text, with different colors for each interval. Avalanches in gray, shorter than 0.31 s, are too short for meaningful analysis (fewer than 15 points at the $50$~Hz sampling rate). The total number of\navalanches employed in these statistics was then 6014, distributed according to the second column of Table~\\ref{tableclassesEA}. \n\nThe main text presents results from the series $(EA)$. The results from the other datasets, with the different drives reported in Table~\\ref{tablediffdrives}, display similar behaviors and are shown \n in Figs.~\\ref{Fig2SeveralDrives}-\\ref{Fig7SeveralDrives} of this Appendix, to be compared with the corresponding Figs.~\\ref{fig2}-\\ref{fig7} in the main text. Analogous results were obtained adopting different sampling frequencies and threshold values.
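The slip-detection step described above can be summarized by the following minimal sketch (Python, hypothetical variable names; it is not the acquisition code). It returns the duration and size of each threshold-crossing event, discarding the events at the resolution limit.

\\begin{verbatim}
import numpy as np

def detect_slips(omega_p, fs=50.0, omega_th=0.00175, min_pts=15):
    """Extract (duration, size) pairs from the sampled plate velocity."""
    omega_p = np.asarray(omega_p)
    active = omega_p > omega_th
    # indices of upward and downward threshold crossings
    starts = np.where(~active[:-1] & active[1:])[0] + 1
    ends = np.where(active[:-1] & ~active[1:])[0] + 1
    if starts.size and ends.size:
        ends = ends[ends > starts[0]]    # drop a slip already in progress
    slips = []
    for i0, i1 in zip(starts, ends):
        if i1 - i0 >= min_pts:           # keep only resolvable events
            T = (i1 - i0) / fs                          # duration (s)
            S = np.trapz(omega_p[i0:i1], dx=1.0 / fs)   # size (rad)
            slips.append((T, S))
    return slips
\\end{verbatim}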
\n\n\n\n\\begin{table}\n\\begin{center}\n \\begin{tabular}{cc}\n duration & \\# of avalanches \\\\\n \\hline\n $0.309 \\le T < 0.489$ & 929\\\\\n $0.489 \\le T < 0.772$ & 866\\\\\n $0.772 \\le T < 1.219$ & 987\\\\\n $1.219 \\le T < 1.925$ & 1694\\\\\n $1.925 \\le T < 3.04$ & 1380\\\\\n $3.04 \\le T < 4.8$ & 158\\\\\n \\end{tabular}\n\\caption{\\label{tableclassesEA} Classes of avalanche duration adopted for the analysis, and the resulting number of avalanches, for the data set (EA) discussed in the main text.}\n\\end{center}\n\\end{table}\n\n\n\n\n\n\\begin{figure}[]\n\\begin{center}\n\\includegraphics[width=\\linewidth]{AppFigures\/Fig1-040225_v2.png}\n\\caption{Avalanche size distributions for different drive velocities (see Fig.~\\ref{fig2} in the main text). }\n\\label{Fig2SeveralDrives}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[]\n\\begin{center}\n\\includegraphics[width=\\linewidth]{AppFigures\/Fig3-040225_v2.png}\n\\caption{Avalanche sizes vs durations, and class definitions, for different drive velocities (see Fig.~\\ref{fig3} in the main text). }\n\\label{Fig3SeveralDrives}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[]\n\\begin{center}\n\\includegraphics[width=\\linewidth]{AppFigures\/Fig2-040225_v2.png}\n\\caption{Average velocity shapes for different drive velocities (see Fig.~\\ref{fig4} in the main text). }\n\\label{Fig4SeveralDrives}\n\\end{center}\n\\end{figure}\n\n\n\n\\begin{figure}[]\n\\begin{center}\n\\includegraphics[width=\\linewidth]{AppFigures\/Fig4-040225_v2.png}\n\\caption{Average friction shapes for different drive velocities (see Fig.~\\ref{fig6} in the main text). }\n\\label{Fig5SeveralDrives}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[]\n\\begin{center}\n\\includegraphics[width=\\linewidth]{AppFigures\/Fig5-040225_v2.png}\n\\caption{Average friction vs average velocity for different drive velocities (see Fig.~\\ref{fig7} in the main text). }\n\\label{Fig6SeveralDrives}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[]\n\\begin{center}\n\\includegraphics[width=\\linewidth]{AppFigures\/Fig6-040225_v2.png}\n\\caption{Conditional average friction vs instantaneous plate velocity for different drive velocities (see Fig.~\\ref{fig5} in the main text). }\n\\label{Fig7SeveralDrives}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nExact solution of a given many-body model in quantum mechanics is usually expressed in terms of the eigenvalues and eigenfunctions of its Hamiltonian\n\\begin{equation}\n\\hat H=\\sum_{i=1}^M \\frac{\\hat{\\mathbf p}^2_i}{2m_i}+\\hat V(\\hat{\\mathbf q}_1,\\ldots, \\hat{\\mathbf q}_M)\\, ,\n\\end{equation}\nbut it can also be expressed through the analytic solution for the general transition amplitude $A(\\mathbf a, \\mathbf b;T)=\\langle \\mathbf b|e^{-iT\\hat H\/\\hbar}|\\mathbf a\\rangle$ from the initial state $|\\mathbf a\\rangle$ to the final state $|\\mathbf b\\rangle$ during the time of propagation $T$. The calculation of transition amplitudes is more naturally performed in the path integral formalism~\\cite{feynman, feynmanhibbs, kleinertbook}, but in principle, if the eigenproblem of the Hamiltonian can be solved, one should be able to calculate general transition amplitudes, and vice versa. However, mathematical difficulties may prevent this, and even more importantly, exact solutions can be found only in a very limited number of cases.
Therefore, the use of various analytic approximation techniques or a numerical treatment is necessary for a detailed understanding of the behavior of almost all models of interest.\n\nIn numerical approaches, translating numerical knowledge of transition amplitudes to (or from) eigenstates can be demanding and involved, but in practice it can always be achieved. It has been implemented in various setups, e.g. through extraction of the energy spectra from the partition function \\cite{feynmanhibbs, kleinertbook, danicapla, ivanapla}, and using the diagonalization of the space-discretized matrix of the evolution operator, i.e. the matrix of transition amplitudes \\cite{sethia1990, sethia1999, ivanapre1, ivanapre2, becpla}. All these applications use the imaginary-time formalism \\cite{feynmanstat, parisi}, typical for numerical simulations of such systems.\n\nThe recently introduced effective action approach \\cite{prl-speedup, prb-speedup, pla-euler, pre-ideal, pre-recursive} provides an ideal framework for exact numerical calculation of quantum mechanical amplitudes. It gives a systematic short-time expansion of amplitudes for a general potential, thus allowing accurate calculation of short-time properties of quantum systems directly, as has been demonstrated in Refs.~\\cite{ivanapre1, ivanapre2, becpla}. For numerical calculations that require long times of propagation to be considered using e.g. the Monte Carlo method, the effective action approach provides improved discretized actions leading to a speedup in the convergence of numerically calculated discretized quantities to their exact continuum values. This has also been demonstrated in Monte Carlo calculations of energy expectation values using the improved energy estimators \\cite{jelapla, ivanapla}.\n\nIn this paper we present the SPEEDUP codes \\cite{speedup}, which implement the effective action approach and which were used for numerical simulations in Refs.~\\cite{danicapla, ivanapla, ivanapre1, ivanapre2, becpla, prl-speedup, prb-speedup, pla-euler, pre-ideal, pre-recursive}. The paper is organized as follows. In Section \\ref{sec:theory} we briefly review the recursive approach for analytic derivation of higher-order effective actions. The SPEEDUP Mathematica codes, capable of symbolic derivation of effective actions for a general one- and many-body theory as well as for specific models, are described in detail in Section \\ref{sec:Mathematica}, while in Section \\ref{sec:C} we describe the SPEEDUP Path Integral Monte Carlo C code, developed for numerical calculation of transition amplitudes for 1D models. Section \\ref{sec:conclusions} summarizes the presented results and gives an outlook on further development of the code.\n\n\\section{Theoretical background}\n\\label{sec:theory}\n\nFrom the inception of the path integral formalism, the expansion of short-time amplitudes in the time of propagation was used for the definition of path integrals through the time-discretization procedure \\cite{feynmanhibbs, kleinertbook}. This is also straightforwardly implemented in Path Integral Monte Carlo approaches \\cite{ceperley}, where one usually relies on the naive discretization of the action. Several improved discretized actions, mainly based on the Trotter formula and its generalizations, were developed and used in the past \\cite{takahashiimada, libroughton, deraedt2}. A recent analysis of this method can be found in Jang et al. \\cite{jangetal}. 
Several related investigations dealing with the speed of convergence have focused on improvements in the short-time propagation \\cite{makrimiller,makri} or the action \\cite{alfordetal}. More recently, the split-operator method has also been developed \\cite{chinkrotscheck, hernandez, ciftja, sakkos, janecek}, later extended to include higher-order terms \\cite{bandrauk, chinchen, omelyan, bayepre}, and systematically improved using the multi-product expansion \\cite{chinarxiv, krotscheck, chin}.\n\nThe effective action approach is based on the ideal discretization concept \\cite{pre-ideal}. It was first introduced for single-particle 1D models \\cite{prl-speedup, prb-speedup, pla-euler} and later extended to general many-body systems in an arbitrary number of spatial dimensions \\cite{ivanapla, pre-recursive}. This approach allows systematic derivation of higher-order terms to a chosen order $p$ in the short time of propagation.\n\nThe recursive method for deriving discretized effective actions, established in Ref.~\\cite{pre-recursive}, is based on solving the underlying Schr\\\" odinger equation for the amplitude. It has proven to be the most efficient tool for the treatment of higher-order expansions. In this section we give a brief overview of the recursive method, which will be implemented in Mathematica in the next section. We start with the case of a single particle in 1D, used in the SPEEDUP C code. Throughout the paper we will use the natural system of units, in which $\\hbar$ and all masses are set to unity.\n\n\\subsection{One particle in one dimension}\n\\label{sec:P1D1}\n\nIn the effective action approach, transition amplitudes are expressed in terms of the ideal discretized action $S^*$ in the form\n\\begin{equation}\nA(a, b; T)=\\frac{1}{\\sqrt{2\\pi T}}\\, e^{-S^*(a, b; T)}\\, ,\n\\end{equation}\nwhich can also be seen as a definition of the ideal action \\cite{pre-ideal}. Therefore, by definition, the above expression is correct not only for short times of propagation, but for arbitrarily large times $T$. We also introduce the ideal effective potential $W$,\n\\begin{equation}\nS^*(a, b; T)=T\\left[\\frac{1}{2}\\left(\\frac{b-a}{T}\\right)^2+W\\right]\\, ,\n\\end{equation}\nreminiscent of the naive discretized action, with the arguments of the effective potential ($a$, $b$, $T$) usually written as $W\\left(\\frac{a+b}{2}, \\frac{b-a}{2}; T\\right)$, to emphasize that we will be using the mid-point prescription.\n\nHowever, the ideal effective action and effective potential can be calculated analytically only for exactly solvable models, while in all other cases we have to use some approximate method. We use an expansion in the time of propagation, assuming that the time $T$ is small. If this is not the case, we can always divide the propagation into $N$ time steps, so that $\\varepsilon=T\/N$ is small. 
The long-time amplitude is then obtained by integrating over all short-time ones,\n\\begin{equation}\nA(a, b; T)=\\int dq_1\\cdots dq_{N-1}\\ A(a, q_1; \\varepsilon)\\,\nA(q_1, q_2; \\varepsilon)\\cdots A(q_{N-1}, b; \\varepsilon)\\, ,\n\\label{eq:AMC}\n\\end{equation}\npaving the way towards the Path Integral Monte Carlo calculation, which is actually implemented in the SPEEDUP C code.\n\nIf we consider a general amplitude $A(q,q';\\varepsilon)$, introduce the mid-point coordinate $x=(q+q')\/2$ and the deviation $\\bar x=(q'-q)\/2$, and express $A$ using the effective potential,\n\\begin{equation}\nA(q, q'; \\varepsilon)=\\frac{1}{\\sqrt{2\\pi \\varepsilon}}\\, e^{-\\frac{2}{\\varepsilon}\\bar x^2-\\varepsilon W(x, \\bar x; \\varepsilon)}\\, ,\n\\end{equation}\nthe time-dependent Schr\\\" odinger equation for the amplitude leads to the following equation for $W$,\n\\begin{equation}\nW+\\bar x\\,\\bar\\partial W+\\varepsilon\\,\\partial_\\varepsilon W\n-\\frac{1}{8}\\,\\varepsilon\\,\\partial^2 W-\n\\frac{1}{8}\\,\\varepsilon\\,\\bar\\partial^2 W\n+\\,\\frac{1}{8}\\,\\varepsilon^2\\,(\\partial W)^2\n+\\frac{1}{8}\\,\\varepsilon^2\\,(\\bar\\partial W)^2= \\frac{1}{2}\\, (V_++V_-)\\, ,\n\\label{eq:eqW}\n\\end{equation}\nwhere $V_\\pm=V(x\\pm\\bar x)$, i.e. $V_-=V(q)$, $V_+=V(q')$. The short-time expansion assumes that we expand $W$ in a power series in $\\varepsilon$ to a given order, and calculate the appropriate coefficients using Eq.~(\\ref{eq:eqW}). We would further expect that this results in coefficients depending on the potential $V(x)$ and its higher derivatives. However, this scheme is not complete, since the effective potential depends not only on the mid-point $x$ but also on the deviation $\\bar x$, and the obtained equations for the coefficients cannot be solved in a closed form. In order to resolve this in a systematic way, we make use of the fact that, for a short time of propagation, the deviation $\\bar x$ is on average given by the diffusion relation $\\bar x^2\\propto\\varepsilon$, allowing a double expansion of $W$ in the form\n\\begin{equation}\n\\label{eq:ansatz}\nW(x,\\bar x;\\varepsilon)=\\sum_{m=0}^{\\infty}\\sum_{k=0}^{m}c_{m,k}(x)\\,\\varepsilon^{m-k}\\bar x^{2k}\\, .\n\\end{equation}\nRestricting the above sum over $m$ to $p-1$ leads to the level $p$ effective potential $W_p(x,\\bar x;\\varepsilon)$, which gives the expansion of the effective action $S^*_p$ to order $\\varepsilon^p$, and hence the level designation $p$ for both the effective action and the corresponding potential $W_p$. Thus, if the diffusion relation is applicable (which is always the case in Monte Carlo calculations), instead of the general double expansion in $\\bar x$ and $\\varepsilon$, we are able to obtain a simpler, systematic expansion in $\\varepsilon$ only. \n\nAs shown previously \\cite{prl-speedup, prb-speedup, pla-euler}, when used in Path Integral Monte Carlo simulations for the calculation of long-time amplitudes according to Eq.~(\\ref{eq:AMC}), the use of the level $p$ effective action leads to the convergence of discretized amplitudes proportional to $\\varepsilon^p$, i.e. 
as $1\/N^p$, where $N$ is the number of time steps used in the discretization.\n\nIf we insert the above level $p$ expansion of the effective potential into Eq.~(\\ref{eq:eqW}), we obtain the recursion relation derived in Ref.~\\cite{pre-recursive},\n\\begin{eqnarray}\n8(m+k+1)\\, c_{m,k}&=&(2k+2) (2 k+1)\\, c_{m,k+1}\n+c_{m-1,k}''-\\sum_{l=0}^{m-2}\\, \\sum_{r} c_{l,r}'\\,c_{m-l-2,k-r}'\\nonumber\\\\\n&&-\\sum_{l=1}^{m-2}\\,\\sum_{r}2\\,r(2k-2r+2)\\,c_{l,r}\\,c_{m-l-1,k-r+1}\\, ,\n\\label{eq:1Drec}\n\\end{eqnarray}\nwhere the sum over $r$ goes from ${\\rm max}\\{0,\\ k-m+l+2\\}$ to ${\\rm min}\\{k,\\ l\\}$. This recursion can be used to calculate all coefficients $c_{m,k}$ to a given level $p$, starting from the known initial condition, $c_{0, 0}=V$. The diagonal coefficients can be calculated immediately,\n\\begin{equation}\n\\label{eq:diagonal}\nc_{m,m}=\\frac{V^{(2m)}}{(2m+1)!}\\, ,\n\\end{equation}\nand for a given value of $m=0,\\ldots, p-1$, the coefficients $c_{m,k}$ follow recursively from evaluating (\\ref{eq:1Drec}) for $k=m-1,\\ldots,1,0$, as illustrated in Fig.~\\ref{fig:order}.\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=6cm]{fig1}\n\\caption{Order in which the coefficients $c_{m,k}$ are calculated: diagonal ones\nfrom Eq.~(\\ref{eq:diagonal}), off-diagonal from recursion (\\ref{eq:1Drec}).}\n\\label{fig:order}\n\\end{figure}\n\n\\subsection{Extension to many-body systems}\n\\label{sec:mb}\n\nThe approach outlined above can be straightforwardly applied to many-body systems. Again the amplitude is expressed through the effective action and the corresponding effective potential, which now depends on the mid-point positions and deviations of all particles. For simplicity, these vectors are usually combined into $D\\times M$ dimensional vectors $\\mathbf x$ and $\\bar{\\mathbf x}$, where $D$ is the spatial dimensionality, and $M$ is the number of particles. In this notation,\n\\begin{equation}\nA(\\mathbf q, \\mathbf q'; \\varepsilon)=\\frac{1}{(2\\pi \\varepsilon)^{DM\/2}}\\, e^{-\\frac{2}{\\varepsilon}\\bar{\\mathbf x}^2-\\varepsilon W(\\mathbf x, \\bar{\\mathbf x}; \\varepsilon)}\\, ,\n\\end{equation}\nwhere the initial and final positions $\\mathbf q=(\\mathbf q_1, \\ldots, \\mathbf q_M)$ and $\\mathbf q'=(\\mathbf q'_1, \\ldots, \\mathbf q'_M)$ are analogously defined $D\\times M$ dimensional vectors. Here we will not consider the quantum statistics of the particles. The required symmetrization or antisymmetrization must be applied after the transition amplitudes are calculated using the effective potential.\n\nMany-body transition amplitudes satisfy the $D\\times M$-dimensional generalization of the time-dependent Schr\\\" odinger equation, which leads to an equation for the effective potential similar to Eq.~(\\ref{eq:eqW}), with vectors replacing the previously scalar quantities,\n\\begin{equation}\nW+\\bar{\\mathbf x}\\cdot \\bar{\\boldsymbol\\partial} W+\\varepsilon\\,\\partial_\\varepsilon W\n-\\frac{1}{8}\\,\\varepsilon\\,\\boldsymbol\\partial^2 W-\n\\frac{1}{8}\\,\\varepsilon\\,\\bar{\\boldsymbol\\partial}^2 W\n+\\,\\frac{1}{8}\\,\\varepsilon^2\\,(\\boldsymbol\\partial W)^2\n+\\frac{1}{8}\\,\\varepsilon^2\\,(\\bar{\\boldsymbol\\partial} W)^2= \\frac{1}{2}\\, (V_++V_-)\\, .\n\\label{eq:eqWMB}\n\\end{equation}\nThe effective potential for short-time amplitudes can again be written in the form of a double expansion in $\\varepsilon$ and $\\bar{\\mathbf x}$. 
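As an aside, the 1D recursion (\\ref{eq:1Drec}), together with the initial condition $c_{0,0}=V$ and the diagonal values (\\ref{eq:diagonal}), is compact enough to be prototyped outside of Mathematica. The following illustrative Python\/SymPy sketch, our own minimal reimplementation and not part of the SPEEDUP distribution, evaluates the coefficients in the order of Fig.~\\ref{fig:order}; for $p=2$ it reproduces $c_{1,1}=V''\/6$ and $c_{1,0}=V''\/12$, i.e. the level $p=2$ effective potential $W_2=V+\\frac{\\varepsilon}{12}\\,V''+\\frac{\\bar x^2}{6}\\,V''$, as required by (\\ref{eq:1Drec}).\n\\begin{verbatim}\nimport sympy as sp\n\nx = sp.symbols('x')\nV = sp.Function('V')\n\ndef coefficients(p):\n    # c[(m, k)] is the coefficient of eps**(m-k) * xbar**(2k) in W\n    c = {(0, 0): V(x)}\n    for m in range(1, p):\n        # diagonal initial condition, Eq. (diagonal)\n        c[(m, m)] = sp.diff(V(x), x, 2*m) \/ sp.factorial(2*m + 1)\n        for k in range(m - 1, -1, -1):\n            rhs = (2*k + 2)*(2*k + 1)*c[(m, k + 1)]\n            rhs += sp.diff(c[(m - 1, k)], x, 2)\n            for l in range(0, m - 1):\n                for r in range(max(0, k - m + l + 2), min(k, l) + 1):\n                    rhs -= sp.diff(c[(l, r)], x)*sp.diff(c[(m - l - 2, k - r)], x)\n            for l in range(1, m - 1):\n                for r in range(max(0, k - m + l + 2), min(k, l) + 1):\n                    rhs -= 2*r*(2*k - 2*r + 2)*c[(l, r)]*c[(m - l - 1, k - r + 1)]\n            c[(m, k)] = sp.simplify(rhs \/ (8*(m + k + 1)))\n    return c\n\nprint(coefficients(2))   # reproduces c_{1,1} = V''\/6 and c_{1,0} = V''\/12\n\\end{verbatim}\nIn the many-body case the same bookkeeping applies, with the coefficients promoted to the tensors $c_{m,k}^{i_1,\\ldots, i_{2k}}(x)$. 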
However, it turns out to be advantageous to use the expansion\n\\begin{equation}\n\\label{EXP}\nW(x,\\bar x;\\varepsilon) = \\sum_{m=0}^{\\infty} \\, \\sum_{k=0}^{m}\\varepsilon^{m-k}\\,W_{m,k}(x,\\bar x)\\ ,\n\\end{equation}\nand work with fully contracted quantities $W_{m,k}$\n\\begin{equation}\nW_{m,k}(x,\\bar x)= \\bar x_{i_1} \\bar x_{i_2} \\cdots \\bar x_{i_{2k}}c_{m,k}^{i_1,\\ldots, i_{2k}}(x)\\, ,\n\\end{equation}\nrather than with the respective coefficients $c_{m,k}^{i_1,\\ldots, i_{2k}}$. In this way we avoid the computationally expensive symmetrization over\nall indices $i_1,\\ldots, i_{2k}$. After inserting the above expansion into the equation for the effective potential, we obtain the recursion relation which represents a generalization of previously derived Eq.~(\\ref{eq:1Drec}) for 1D case, and has the form\n\\begin{eqnarray}\n\\label{recursion}\n8\\, (m+k+1)\\,W_{m,k}&=&\\boldsymbol\\partial^2 W_{m-1,k}+\\bar{\\boldsymbol\\partial}^2 W_{m,k+1}\n-\\sum_{l=0}^{m-2}\\,\\sum_{r}(\\boldsymbol\\partial W_{l,r})\\cdot(\\boldsymbol\\partial W_{m-l-2,k-r})\\nonumber\\\\\n&&-\\sum_{l=1}^{m-2}\\,\\sum_{r}(\\bar{\\boldsymbol\\partial} W_{l,r})\\cdot(\\bar{\\boldsymbol\\partial} W_{m-l-1,k-r+1})\\, .\n\\label{eq:MBrec}\n\\end{eqnarray}\nThe sum over $r$ runs from ${\\rm max}\\{0,\\ k-m+l+2\\}$ to ${\\rm min}\\{k,\\ l\\}$, while diagonal quantities $W_{m,m}$ can be calculated directly,\n\\begin{equation}\n\\label{DIA}\nW_{m,m}=\\frac{1}{(2m+1)!}(\\bar x\\cdot{\\boldsymbol\\partial})^{2m}\\,V\\, .\n\\end{equation}\nThe above recursion disentangles, in complete analogy with the previously outlined case of one particle in 1D, and is solved in the order shown in Fig.~\\ref{fig:order}.\n\n\\section{SPEEDUP Mathematica codes for deriving the higher-order effective actions}\n\\label{sec:Mathematica}\n\nThe effective action approach can be used for numerically exact calculation of short-time amplitudes if the effective potential $W_p$ can be analytically derived to sufficiently high values of $p$ such that the associated error is smaller than the required numerical precision. The error $\\varepsilon^p$ for the effective action, obtained when level $p$ effective potential is used, translates into $\\varepsilon^{p-DM\/2}$ for a general many-body short-time amplitude. However, when amplitudes are calculated using the Path Integral Monte Carlo SPEEDUP C code \\cite{speedup}, which will be presented in the next section, the errors of numerically calculated amplitudes are always proportional to $\\varepsilon^p\\sim 1\/N^p$, where $N$ is number of time-steps in the discretization of the propagation time $T$.\n\nTherefore, accessibility of higher-order effective actions is central to the application of this approach if it is used for direct calculation of short-time amplitudes \\cite{ivanapre1, ivanapre2, becpla}, as well as in the case when PIMC code is used \\cite{danicapla, jelapla, ivanapla}. However, increase in the level $p$ leads to the increase in complexity of analytic expressions for the effective potential. On one hand, this limits the maximal accessible level $p$ by the amount of memory required for symbolic derivation of the effective potential. On the other hand, practical use of large expressions for $W_p$ may slow down numerical calculations, and one can opt to use lower than the maximal available level $p$ when optimizing total CPU time required for numerical simulation. 
The suggested approach is to study time-complexity of the algorithm in practical applications, and to choose optimal level $p$ by minimizing the execution time required to achieve fixed numerical precision.\n\nWe have implemented efficient symbolic derivation of higher-order effective actions in Mathematica using the recursive approach. All source files described in this section are located in the {\\tt Mathematica} directory of the SPEEDUP code distribution.\n\n\\subsection{General 1D Mathematica code}\n\\label{sec:M1D1Mathematica}\n\nSPEEDUP code \\cite{speedup} for symbolic derivation of the effective potential to specified level $p$ is implemented in Mathematica \\cite{mathematica}, and is available in the {\\tt EffectiveAction-1D.nb} notebook. It implements the algorithm depicted in Fig.~\\ref{fig:order} and calculates the coefficients $c_{m,k}$ for $m=0,\\ldots, p-1$ and $k=m,\\ldots,0$, starting from the initial condition $c_{0,0}=V$. For a given value of $m$, the diagonal coefficient $c_{m,m}$ is first calculated from Eq.~(\\ref{eq:diagonal}), and then all off-diagonal coefficients are calculated from the recursion (\\ref{eq:1Drec}).\n\nIn this code the potential $V(x)$ is not specified, and the effective potential is derived for a general one-particle 1D theory. The resulting coefficients $c_{m,k}$ and the effective potential are expressed in terms of the potential $V$ and its higher derivatives. Level $p$ effective potential, constructed as\n\\begin{equation}\nW_p(x,\\bar x;\\varepsilon)=\\sum_{m=0}^{p-1}\\sum_{k=0}^{m}c_{m,k}(x)\\,\\varepsilon^{m-k}\\bar x^{2k}\\, ,\n\\end{equation}\ncontains derivatives of $V$ to order $2p-2$.\n\nThe only input parameter of this Mathematica code is the level $p$ to which the effective potential should be calculated. As the code runs, it prints used amount of memory (in MB) and CPU time. This information can be used to estimate the required computing resources for higher values of $p$. The calculated coefficients can be exported to a file, and later imported for further numerical calculations. As an illustration, the file {\\tt EffectiveAction-1D-export-p5.m} contains exported definition of all the coefficients $c_{m,k}$ calculated at level $p=5$, while the notebook {\\tt EffectiveAction-1D-matching-p5.nb} contains matching output from the interactive session used to produce the above $p=5$ result.\n\nThe execution of this code on a typical 2 GHz CPU for level $p=10$ requires 10-15 MB of RAM and several seconds of CPU time. We have successfully run this code for levels as high as $p=35$ \\cite{speedup}. SPEEDUP C code implements effective actions to the maximal level $p=18$, with the size of the corresponding C function around 2 MB. If needed, higher levels $p$ can be easily implemented in C and added to the existing SPEEDUP code.\n\n\\subsection{General 2D and 3D Mathematica code}\n\\label{sec:M1D2D3Mathematica}\n\nAlthough we have developed Mathematica code capable of deriving effective actions for a general many-body theory in arbitrary number of spatial dimensions, in practical applications in 2D and 3D it can be very advantageous to use simpler codes, able to produce results to higher levels $p$ than the general code \\cite{becpla, ivanapre2}.\n\nThis is done in files {\\tt EffectiveAction-2D.nb} and {\\tt EffectiveAction-3D.nb}, where the recursive approach is implemented directly in 2D and 3D. 
The execution of these codes requires more memory: for the $p=10$ effective action one needs 60 MB in the 2D case, while in the 3D case the needed amount of memory increases to 860 MB. The execution time is several minutes for the 2D code and around 30 minutes for the 3D code.\n\nThe distribution of the SPEEDUP code contains exported $p=5$ definitions of the contractions $W_{m, k}$ for both the 2D and 3D general potential, as well as matching outputs from the interactive sessions used to generate these results.\n\n\\subsection{Model-specific Mathematica codes}\n\\label{sec:D1modelsMathematica}\n\nWhen the general expressions for the effective actions, obtained using the above described SPEEDUP Mathematica codes, are used in numerical simulations, one has to specify the potential $V$ and its higher derivatives to order $2p-2$ in order to be able to calculate transition amplitudes. Such an approach is justified for systems where the complexity of the higher derivatives increases. However, for systems where this is not the case, or where only a limited number of derivatives is non-trivial (e.g. polynomial interactions), it may be substantially beneficial to specify the potential at the beginning of the Mathematica code and calculate the derivatives explicitly when iterating the recursion.\n\nUsing this approach, one is able to obtain the coefficients $c_{m,k}$ and the effective potential $W$ directly as functions of the mid-point $x$. This is implemented in the notebooks {\\tt EffectiveAction-1D-AHO.nb} and {\\tt EffectiveAction-2D-AHO.nb} for the case of anharmonic oscillators in 1D and 2D,\n\\begin{eqnarray}\n&&\\hspace*{-1.2cm}\nV_{1D-AHO}(x)=\\frac{A}{2}\\, x^2+\\frac{g}{24}\\,x^4\\, ,\\\\\n&&\\hspace*{-1.2cm}\nV_{2D-AHO}(x, y)=\\frac{A}{2}\\, (x^2+y^2)+\\frac{g}{24}\\, (x^2+y^2)^2\\, .\n\\end{eqnarray}\nThese codes can be easily executed within a few seconds and with minimal amounts of memory even for $p=20$. For the 1D anharmonic oscillator we have successfully calculated effective actions to the excessively large value $p=144$, and in 2D to $p=67$ \\cite{speedup}, to illustrate the advantage of this model-specific method.\n\nA similar approach can also be used in the other extreme case, when the complexity of higher derivatives of the potential $V$ increases very fast, so that entering the corresponding expressions into the code becomes impractical. Even in this situation expressions for effective actions can usually be simplified using an appropriate model-specific ansatz. The form of such an ansatz can be deduced from the form of model-specific effective potentials, and then used to simplify their derivation. Such a use case is illustrated in the SPEEDUP Mathematica code for the modified P\\\" oschl-Teller potential,\n\\begin{equation}\nV_{1D-MPT}(x)=-\\frac{\\lambda}{(\\cosh \\alpha x)^2}\\, .\n\\label{eq:1D-MPT}\n\\end{equation}\nFor this potential, the coefficients $c_{m,k}$ of the effective potential can be expressed in the form\n\\begin{equation}\nc_{m, k}(x)=\\sum_{l=0}^m d_{m, k, l}\\, \\frac{(\\tanh \\alpha x)^{2l}}{(\\cosh \\alpha x)^{2m-2l+2}}\\, ,\n\\label{eq:1D-MPTansatz}\n\\end{equation}\nand the newly introduced constant coefficients $d_{m,k,l}$ can be calculated using the model-speci\\-fic recursion in {\\tt EffectiveAction-1D-MPT.nb}. The form of the ansatz (\\ref{eq:1D-MPTansatz}) is deduced from the results of executing the general 1D Mathematica code, with the model-specific potential (\\ref{eq:1D-MPT}) defined before the recursive calculation of the coefficients is performed. 
Using this approach, we were able to obtain maximal level $p=41$ effective action \\cite{speedup}.\n\n\\subsection{General many-body Mathematica code}\n\\label{sec:mbMathematica}\n\nSPEEDUP Mathematica code for calculation of effective action for a general many-body theory is implemented using the MathTensor \\cite{mathtensor} package for tensorial calculations in Mathematica. This general implementation required some new functions related to the tensor calculus to be defined in the source notebook {\\tt EffectiveAction-ManyBody.nb} provided with the SPEEDUP code.\n\nThe function {\\tt GenNewInd[n]} generates the required number {\\tt n} of upper and lower indices using the MathTensor function {\\tt UpLo}, with the assigned names {\\tt up1}, {\\tt lo1}, \\ldots, as well as lists {\\tt upi} and {\\tt loi}, each containing {\\tt n} strings corresponding to the names of generated indices. These new indices are used in the implementation of the recursion for calculation of derivatives of $W_{m,k}$, contractions of the effective potential, and for this reason had to be explicitly named and properly introduced.\n\nThe expressions obtained by iterating the recursion contain large numbers of contractions, and function {\\tt NewDefUnique[contr]} replaces all contracted indices with the newly-introduced dummy ones in the contraction {\\tt contr}, so that they do not interfere with the calculation of derivatives in the recursion. This is necessary since the derivatives in recursion do not distinguish contracted indices from non-contracted ones if their names happen to be generated by the function {\\tt GenNewInd}. Note that the expression {\\tt contr} does not have to be full contraction, i.e. function {\\tt NewDefUnique} will successfully act on tensors of any kind if they have contracted indices, while it will leave them unchanged if no contractions are present.\n\nThe function {\\tt NewDerivativeVec[contr, vec, ind]} implements calculation of the first derivative of the tensor {\\tt contr} (which may or may not contain contracted indices, but if it does, they are supposed to be uniquely defined dummy ones, which is achieved using the function {\\tt NewDefUnique}). The derivative is calculated with respect to vector {\\tt vec} with the vectorial index {\\tt ind}. The index {\\tt ind} can be either lower or upper one, and has to be generated previously by the function {\\tt GenNewInd}.\n\nFinally, the function {\\tt NewLaplacianVec[contr, vec]} implements the Laplacian of the tensor {\\tt contr} with respect to the vector {\\tt vec}, i.e. it performs the calculation of contractions of the type\n\\begin{equation}\n\\frac{\\partial}{\\partial\\mathtt{vec}_i}\\,\\frac{\\partial}{\\partial\\mathtt{vec}^i}\\,\\, \\mathtt{contr}\\, .\n\\end{equation}\n\nAfter all described functions are defined, the execution of the code proceeds by setting the desired level of the effective action {\\tt p}, generating the needed number of named indices using the function call {\\tt GenNewInd[2 p + 2]}, and then by performing the recursion according to the scheme illustrated in Fig.~\\ref{fig:order}. The use of MathTensor function {\\tt CanAll} in the recursion ensures that the obtained expressions for {\\tt W[m, k]} will be simplified if possible. This is achieved in MathTensor by sorting and renaming all dummy indices using the same algorithm and trying to simplify the expression obtained in such way. 
By default, Mathematica will distinguish contracted indices in two expressions if they are named differently, and MathTensor works around it using the renaming scheme implemented in {\\tt CanAll}.\n\nThe computing resources required for the execution of the many-body SPEEDUP Mathematica code depend strongly on the level of the effective action. For example, for level $p=5$ the code can be run within few seconds with the minimal memory requirements. The notebook with the matching output of this calculation is available as {\\tt EffectiveAction-ManyBody-matching-p5.nb}, and the exported results for {\\tt W[m, k]} are available in {\\tt EffectiveAction-ManyBody-export-p5.m}. We were able to achieve maximal level $p=10$ \\cite{speedup}, with the CPU time of around 2 days on a recent 2 GHz processor. The memory used by Mathematica was approximately 1.6 GB.\n\nNote that exporting the definition of the effective potential from Mathematica to a file will yield lower and upper indices named {\\tt ll1}, {\\tt uu1}, etc. In order to import previous results and use them for further calculations with the provided Mathematica code, it is necessary to replace indices in the exported file to the proper index names used by the function {\\tt GenNewInd}. This is easily done using find\/replace feature of any text editor. Prior to importing definition of the effective potential, it is necessary to initialize MathTensor and all additional functions defined in the notebook {\\tt EffectiveAction-ManyBody.nb}, and to generate the needed number of named indices using the function call {\\tt GenNewInd[2p+2]}.\n\n\\section{SPEEDUP C codes for Monte Carlo calculation of 1D transition amplitudes}\n\\label{sec:C}\n\nFor short times of propagation, the effective actions derived using the above described Mathematica codes can be directly used. This has been extensively used in Refs.~\\cite{ivanapre1, ivanapre2}, where SPEEDUP codes were applied for numerical studies of several lower-dimensional models and calculation of large number of energy eigenvalues and eigenfunctions. The similar approach is used in Ref.~\\cite{becpla}, where SPEEDUP code was used to study properties of fast-rotating Bose-Einstein condensates in anharmonic trapping potentials. The availability of a large number of eigenstates allowed not only precise calculation of global properties of the condensate (such as condensation temperature and ground state occupancy), but also study of density profiles and construction of time-of-flight absorption graphs, with the exact quantum treatment of all available eigenfunctions.\n\nHowever, in majority of applications the time of propagation cannot be assumed to be small. The effective actions are found to have finite radius of convergence \\cite{ivanapre1}, and if the typical propagation times in the considered case exceed this critical value, Path Integral Monte Carlo approach must be used in order to accurately calculate the transition amplitudes and the corresponding expectation values \\cite{danicapla, jelapla}. As outlined earlier, in this case the time of propagation $T$ is divided into $N$ time steps, such that $\\varepsilon=T\/N$ is sufficiently small and that the effective action approach can be used. 
The discretization of the propagation time leads to the following expression for the discretized amplitude,\n\\begin{equation}\nA_N^{(p)}(a,b;T)=\\int\\frac{dq_1\\cdots dq_{N-1}}{(2\\pi\\varepsilon)^{N\/2}}\\, e^{-S_N^{(p)}}\\, ,\n\\end{equation}\nwhere $S_N^{(p)}$ stands for the discretized level $p$ effective action,\n\\begin{equation}\nS_N^{(p)}=\\sum_{k=0}^{N-1}\\left[ \\frac{(q_{k+1}-q_k)^2}{2\\varepsilon}+\\varepsilon\\, W_p(x_k, \\bar x_k; \\varepsilon)\\right],\n\\label{eq:SNp}\n\\end{equation}\nand $q_0=a$, $q_N=b$, $x_k=(q_{k+1}+q_k)\/2$, $\\bar x_k=(q_{k+1}-q_k)\/2$.\n\nThe level $p$ discretized effective action is constructed from the corresponding effective potential $W_p$, calculated as a power series expansion to order $\\varepsilon^{p-1}$. Since it enters the action multiplied by $\\varepsilon$, this leads to discretized actions correct to order $\\varepsilon^p$, i.e. with errors of the order $\\varepsilon^{p+1}$. The long-time transition amplitude $A_N^{(p)}(a,b;T)$ is a product of $N$ short-time amplitudes, and its errors are expected to scale as $N\\cdot\\varepsilon^{p+1}\\sim 1\/N^p$, as has been shown in Refs.~\\cite{prl-speedup, prb-speedup, pla-euler, ivanapla} for transition amplitudes, and in Refs.~\\cite{jelapla, ivanapla} for expectation values, calculated using the corresponding consistently improved estimators.\n\n\\subsection{Algorithm and structure of the code}\n\\label{sec:algorithm}\n\nThe SPEEDUP C source is located in the {\\tt src} directory of the code distribution \\cite{speedup}. It uses the standard Path Integral Monte Carlo algorithm for the calculation of transition amplitudes. The trajectories are generated by the bisection algorithm \\cite{ceperley}; hence the number of time-steps $N$ is always given as a power of two, $N=2^s$. When the amplitude is calculated with $2^s$ time steps, we can also easily calculate all discretized amplitudes in the hierarchy $2^{s-1}$, \\ldots, $2^0$ at essentially no extra cost. This requires only minor additional CPU time and memory, since the needed trajectories are already available as subsets of the maximal trajectories with $2^s$ time-steps.\n\nThe trajectory is constructed starting from bisection level $n=0$, where we only have the initial and final positions of the particle. At bisection level $n=1$ the propagation is divided into two time-steps, and we have to generate the coordinate $q$ of the particle at the time $T\/2$, thus constructing the piecewise trajectory connecting the points $a$ at the time $t=0$, $q$ at $t=T\/2$, and $b$ at $t=T$. The coordinate $q$ is generated from the Gaussian probability density function centered at $(a+b)\/2$ and with the width $\\sigma_1=\\sqrt{T\/2}$. The procedure continues iteratively, and each time a set of points is added to the piecewise trajectory. At each bisection level $n$ the coordinates are generated from the Gaussian centered at the mid-point of the coordinates generated at level $n-1$, with the width $\\sigma_n=\\sqrt{T\/2^n}$. To generate numbers $\\eta$ from the Gaussian centered at zero we use the Box-Muller method,\n\\begin{equation}\n\\eta = \\sqrt{-2\\sigma_n^2\\ln\\xi_1}\\, \\cos 2\\pi\\xi_2\\, ,\n\\end{equation}\nwhere the numbers $\\xi_1$ and $\\xi_2$ are generated from the uniform distribution on the interval $[0, 1]$, using the SPRNG library \\cite{sprng}. 
If the target bisection level is $s$, then at bisection level $n\\leq s$ we generate $2^{n-1}$ numbers using the above formula, and construct the new trajectory by adding to already existing points the new ones, according to\n\\begin{equation}\nq[(1+2i)\\cdot 2^{s-n}]=\\eta_i+\\frac{q[i\\cdot 2^{s-n+1}]+q[(i+1)\\cdot 2^{s-n+1}]}{2}\\, ,\n\\end{equation}\nwhere $i$ runs from 0 to $2^{n-1}-1$. This ensures that at bisection level $s$ we get trajectory with $N=2^s$ time-steps, consisting of $N+1$ points, with boundary conditions $q[0]=a$ and $q[N]=b$. At each lower bisection level $n$, the trajectory consists of $2^n+1$ points obtained from the maximal one (level $s$ trajectory) as a subset of points $q[i\\cdot 2^{s-n}]$ for $i=0,1,\\ldots, 2^n$.\n\nThe use of trajectories generated by the bisection algorithm requires normalization factors from all Gaussian probability density functions with different widths to be taken into account. This normalization is different for each bisection level, but can be calculated easily during the initialization phase.\n\nThe basic C code is organized in three source files, {\\tt main.c}, {\\tt p.c} and {\\tt potential.c}, with the accompanying header files. The file {\\tt potential.c} (its name can be changed, and specified at compile time) must contain a user-supplied function {\\tt V0()}, defining the potential $V$. For a given input value of the coordinate, {\\tt V0()} should initialize appropriate variables to the value of the potential $V$ and its higher derivatives to the required order $2p-2$. When this file is prepared, SPEEDUP code can be compiled and used. The distributed source contains definition of 1D-AHO potential in the file {\\tt potential.c}, the same as in the file {\\tt 1D-AHO.c}.\n\nThe execution of the SPEEDUP code starts with the initialization and allocation of memory in the {\\tt main()} function, and then the array of amplitudes and associated MC error estimates for each bisection level $n=0,\\ldots, s$ is calculated by calling the function {\\tt mc()}. After printing the output, {\\tt main()} deallocates used memory and exits. Function {\\tt mc()} which implements the described MC algorithm is also located in the file {\\tt main.c}, as well as the function {\\tt distr()}, which generates maximal (level $s$) trajectories.\n\nThe function {\\tt mc()} contains main MC sampling loop. In each step new level $s$ trajectory is generated by calling the function {\\tt distr()}. Afterwards, for each bisection level $n$, function {\\tt func()} is invoked. This function is located in the file {\\tt p.c}, and returns the value of the function $e^{-S}$, properly normalized, as described earlier. This value (and its square) is accumulated in the MC loop for each bisection level $n$ and later averaged to obtain the estimate of the corresponding discretized amplitude and the associated MC error.\n\nThe function {\\tt func()} makes use of C implementation of earlier derived effective actions for a general 1D potential. For a given trajectory at the bisection level $n$, {\\tt func()} will first initialize appropriate variables with the values of the potential and its higher derivatives (to the required level $2p-2$) by calling the user-supplied function {\\tt V0()}, located in the file {\\tt potential.c}. Afterwards the effective action is calculated according to Eq.~(\\ref{eq:SNp}), where the effective potential is calculated by the function {\\tt Wp()}, located in the file {\\tt p.c}. 
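To make the sampling scheme concrete, the bisection construction and the resulting Monte Carlo estimate can be prototyped in a few lines of Python. The sketch below is only an illustration under our own simplifying assumptions: it samples with the exact Brownian-bridge mid-point widths $\\sqrt{T\/2^{n+1}}$, for which the free-particle normalization cancels analytically, and it uses the $p=1$ mid-point action. It is not a substitute for the distributed C code, which samples with the widths $\\sqrt{T\/2^n}$ and carries the corresponding normalization factors explicitly, as described above.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(2021)\n\ndef amplitude_p1(V, a, b, T, s, n_mc):\n    # MC estimate of the N = 2**s discretized amplitude, p = 1 action.\n    N = 2**s\n    eps = T \/ N\n    acc = 0.0\n    for _ in range(n_mc):\n        q = np.zeros(N + 1)\n        q[0] = a\n        q[N] = b\n        for n in range(1, s + 1):            # bisection levels\n            half = 2**(s - n)                # spacing of the new points\n            sigma = np.sqrt(T \/ 2**(n + 1))  # exact bridge mid-point width\n            for i in range(2**(n - 1)):\n                mid = 0.5*(q[2*i*half] + q[2*(i + 1)*half])\n                q[(2*i + 1)*half] = mid + sigma*rng.normal()\n        x = 0.5*(q[1:] + q[:-1])             # mid-points of all time steps\n        acc += np.exp(-eps*np.sum(V(x)))\n    free = np.exp(-(b - a)**2 \/ (2.0*T)) \/ np.sqrt(2.0*np.pi*T)\n    return free*acc \/ n_mc\n\n# example: 1D-AHO potential with A = 1, g = 48, i.e. V = x**2\/2 + 2*x**4\nprint(amplitude_p1(lambda x: 0.5*x**2 + 2.0*x**4, 0.0, 1.0, 1.0, 6, 20000))\n\\end{verbatim}\nWith the $p=1$ action such estimates converge to the continuum amplitude as $1\/N$, in line with the general $1\/N^p$ behavior discussed above. 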
The desired level $p$ of the effective action is selected by defining the appropriate pre-processor variable when the code is compiled.\n\nIn addition to this basic mode, in which the SPEEDUP code uses the general expression for the level $p$ effective action, we have also implemented a model-specific mode, described earlier. If effective actions are derived for a specific model, then the user can specify an alternative {\\tt p.c} file to be used within the directory {\\tt src\/models\/{\\it model}}, where {\\it model} corresponds to the name of the model. If this mode is selected at compile time, the compiler will ignore {\\tt p.c} from the top {\\tt src} directory, and use the model-specific one, defined by the user. The distributed source contains model definitions for the 1D-AHO and 1D-MPT potentials in the directories {\\tt src\/models\/1D-AHO} and {\\tt src\/models\/1D-MPT}. Note that in this mode the potential is specified directly in the definition of the effective potential, and therefore the function {\\tt V0()} is not used (nor the {\\tt potential.c} file).\n\n\\subsection{Compiling and using SPEEDUP C code}\n\\label{sec:compiling}\n\nThe SPEEDUP C source can be easily compiled using the {\\tt Makefile} provided in the top directory of the distribution. The compilation has been thoroughly tested with the GNU, Intel and IBM XLC compilers. In order to compile the code one has to specify the compiler which will be used in the {\\tt Makefile}, by appropriately setting the variable {\\tt COMPILER}, and then to proceed with a standard command of the type {\\tt make {\\it target}}, where {\\it target} could be one of {\\tt all}, {\\tt speedup}, {\\tt sprng}, {\\tt clean-all}, {\\tt clean-speedup}, {\\tt clean-sprng}.\n\nThe SPRNG library \\cite{sprng} is an external dependency, and for this reason it is located in the directory {\\tt src\/deps\/sprng4.0}. In principle, it has to be compiled only once, after the compiler has been set. This is achieved by executing the command {\\tt make sprng}. Afterwards the SPEEDUP code can be compiled and easily linked with the already compiled SPRNG library. Note that if the compiler is changed, the SPRNG library has to be recompiled with the same compiler in order to be successfully linked with the SPEEDUP code.\n\nTo compile the code with the level $p=10$ effective action and the user-supplied function {\\tt V0()} located in the file {\\tt src\/1D-AHO.c}, the following command can be used:\\\\\n\\hspace*{6mm}{\\tt make speedup P=10 POTENTIAL=1D-AHO.c}\\\\\nIf not specified, {\\tt POTENTIAL=potential.c} is used, while the default level of the effective action is {\\tt P=1}. To compile the code using a model-specific definition of the effective potential, instead of the {\\tt POTENTIAL} variable we have to appropriately set the {\\tt MODEL} variable on the command line. For example, to compile the supplied {\\tt p.c} file for the 1D-MPT model located in the directory {\\tt src\/models\/1D-MPT}, using the level $p=5$ effective action, the following command can be used:\\\\\n\\hspace*{11mm}{\\tt make speedup P=5 MODEL=1D-MPT}\\\\\nAll binaries compiled using the {\\tt POTENTIAL} mode are stored in the {\\tt bin} directory, while the binaries for the {\\tt MODEL} mode are stored in the appropriate {\\tt bin\/models\/{\\it model}} directory. This information is provided by the {\\tt make} command after each successful compilation.\n\nThe compilation is documented in more detail in the supplied {\\tt README.txt} files. 
The distribution of the SPEEDUP code also contains examples of compilation with the GNU, Intel and IBM XLC compilers, as well as matching outputs and results of the execution for each tested compiler, each model, and for a range of levels of the effective action $p$.\n\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[width=9.5cm]{fig2}\n\\caption{Convergence of SPEEDUP Monte-Carlo results for the transition amplitude $A_N^{(p)}(-0.5, 0.5; 1)$ of the 1D-MPT potential as a function of the number of time steps $N$, calculated with level $p=1, 2, 10$ effective actions, with the parameters of the potential $\\lambda=\\alpha=1$. The full lines give the fitted functions (\\ref{eq:fit}), where the constant term $A_p$ corresponds to the continuum-theory amplitude $A(-0.5, 0.5; 1)$. The number of Monte-Carlo samples was $N_{\\rm MC}=10^{6}$.}\n\\label{fig:conv}\n\\end{figure}\n\nOnce compiled, the SPEEDUP code can be used to calculate long-time amplitudes of a system in the specified potential $V$. If executed without any command-line arguments, the binary will print a help message, with details of the usage. The mandatory arguments are the time of propagation {\\tt T}, the initial and final positions {\\tt a} and {\\tt b}, the maximal bisection level {\\tt s}, the number of MC samples {\\tt Nmc}, and the {\\tt seed} for initialization of the SPRNG random number generator. All further arguments are converted to numbers of the {\\tt double} type and made available in the array {\\tt par} to the function {\\tt V0()}, or to the model-specific functions in the file {\\tt src\/models\/{\\it model}\/p.c}. The output of the execution contains the calculated value of the amplitude for each bisection level $n=0,\\ldots,s$ and the corresponding MC estimate of its error (standard deviation). At bisection level $n=0$, where no integrals are actually calculated and the discretized $N=1$ amplitude is simply given by an analytic expression, zero is printed as the error estimate.\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[width=7.2cm]{fig3a}\n\\includegraphics[width=7.2cm]{fig3b}\n\\caption{(left) The anharmonic potential 1D-AHO, its energy eigenvalues (horizontal lines) and eigenfunctions, obtained by direct diagonalization of the space-discretized matrix of the evolution operator with the level $p=21$ effective action and parameters $A=1$, $g=48$. The discretization cutoff was $L=8$, spacing $\\Delta=9.76\\cdot 10^{-4}$, and time of propagation $t=0.02$. (right) Results for the double-well potential, $A=-10$, $g=12$, $L=10$, $\\Delta=1.22\\cdot 10^{-3}$, $t=0.1$. On both graphs, the left $y$-axis corresponds to $V(x)$ and the energy eigenvalues, while the scale on the right $y$-axis corresponds to values of the eigenfunctions, each vertically shifted to level with the appropriate eigenvalue.}\n\\label{fig:phi4states}\n\\end{figure}\n\nFig.~\\ref{fig:conv} illustrates the typical results obtained from the SPEEDUP code on the example of the 1D-MPT theory. In this figure we can see the convergence of numerically calculated amplitudes with the number of time-steps $N$ to the exact continuum value, obtained in the limit $N\\to\\infty$. Such convergence is obtained for each level $p$ of the effective action used. However, the convergence is much faster when a higher-order effective action is used. Note that all results corresponding to one value of the level $p$ on the graph are obtained from a single run of the SPEEDUP code with the maximal bisection level $s=10$. 
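In practice, the continuum amplitude is estimated by extrapolating such a run to $N\\to\\infty$ with a least-squares fit; a minimal Python sketch of this step is given below (the data array is a synthetic stand-in generated from the fit model itself, to be replaced by the actual SPEEDUP output), and the form of the fitting function is discussed next.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\np = 2                                   # level of the effective action used\n\ndef model(N, A, B, C):\n    # leading discretization corrections for a level p action\n    return A + B \/ N**p + C \/ N**(p + 1)\n\nNs = 2.0**np.arange(0, 11)              # bisection hierarchy N = 1 ... 1024\nrng = np.random.default_rng(1)\n# synthetic stand-in for the discretized amplitudes A_N from a single run\nA_N = model(Ns, 0.2415, 0.03, -0.01) + 1.0e-7*rng.normal(size=Ns.size)\npopt, pcov = curve_fit(model, Ns, A_N, p0=(A_N[-1], 0.0, 0.0))\nprint('continuum estimate A =', popt[0], '+\/-', np.sqrt(pcov[0, 0]))\n\\end{verbatim}\n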
The simplest way to estimate the continuum value of the amplitude is thus to fit the numerical results from a single run of the code to the appropriate level $p$ fitting function \\cite{prl-speedup, prb-speedup, pla-euler},\n\\begin{equation}\nA_N^{(p)}=A^{(p)}+\\frac{B^{(p)}}{N^p}+\\frac{C^{(p+1)}}{N^{p+1}}+\\cdots\n\\label{eq:fit}\n\\end{equation}\nThe constant term obtained by fitting corresponds to the best estimate of the exact amplitude which can be obtained from the available numerical results.\n\nAs mentioned earlier, the effective action approach can be used for accurate calculation of a large number of energy eigenstates and eigenvalues by diagonalization of the space-discretized matrix of transition amplitudes \\cite{sethia1990, sethia1999, ivanapre1, ivanapre2, becpla}. Fig.~\\ref{fig:phi4states} illustrates this for the cases of anharmonic and double-well potentials. The graph on the left gives several eigenvalues and eigenstates for the 1D-AHO potential with $A=1$ and quartic anharmonicity $g=48$, while the graph on the right gives the low-lying spectrum and eigenfunctions of the double-well potential, obtained for $A=-10$, with the moderate anharmonicity $g=12$. More details on this approach, including a study of all errors associated with the discretization process, can be found in Refs.~\\cite{ivanapre1, ivanapre2}.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nIn this paper we have presented the SPEEDUP Mathematica and C codes, which implement the effective action approach for calculation of quantum mechanical transition amplitudes. The developed Mathematica codes provide an efficient tool for symbolic derivation of effective actions to high orders for specific models, for a general 1D, 2D and 3D single-particle theory, as well as for general many-body systems in an arbitrary number of spatial dimensions. The recursive implementation of the code allows symbolic calculation of extremely high levels of effective actions, required for high-precision calculation of transition amplitudes.\n\nFor the calculation of long-time amplitudes we have developed the SPEEDUP C Path Integral Monte Carlo code. The C implementation of a general 1D effective action to the maximal level $p=18$ and model-specific effective actions provide fast $1\/N^p$ convergence to the exact continuum amplitudes.\n\nFurther development of the SPEEDUP C codes will include parallelization using MPI, OPENMP and hybrid programming models, C implementation of the effective potential to higher levels $p$, as well as providing model-specific effective actions for relevant potentials, including many-body systems.\n\n\\section*{Acknowledgements}\nThe authors gratefully acknowledge useful discussions with Axel Pelster and Vladimir Slavni\\' c.\nThis work was supported in part by the Ministry of Education and Science of the Republic of Serbia, under project No. ON171017, and bilateral project NAD-BEC funded jointly with the German Academic Exchange Service (DAAD), and by the European Commission under EU FP7 projects PRACE-1IP, HP-SEE and EGI-InSPIRE.\n\n\\begin {thebibliography}{00}\n\n\\bibitem{feynman}\nR. P. Feynman,\nRev. Mod. Phys. {\\bf 20}, 367 (1948).\n\\bibitem{feynmanhibbs}\nR. P. Feynman and A. R. Hibbs,\n\\emph{Quantum Mechanics and Path Integrals} (McGraw-Hill, New York, 1965).\n\\bibitem{kleinertbook}\nH. Kleinert,\n\\emph{Path Integrals in Quantum Mechanics, Sta\\-tistics, Polymer Physics, and Financial Markets},\n5th ed. (World Scientific, Singapore, 2009).\n\\bibitem{danicapla}\nD. Stojiljkovi\\' c, A. Bogojevi\\' c, and A. 
Bala\\v z,\nPhys. Lett. A {\\bf 360}, 205 (2006).\n\\bibitem{ivanapla}\nA. Bogojevi\\' c, I. Vidanovi\\' c, A. Bala\\v z, and A. Beli\\' c,\nPhys. Lett. A {\\bf 372}, 3341 (2008).\n\\bibitem{sethia1990}\nA. Sethia, S. Sanyal, and Y. Singh,\nJ. Chem. Phys. {\\bf 93}, 7268 (1990).\n\\bibitem{sethia1999}\nA. Sethia, S. Sanyal, and F. Hirata,\nChem. Phys. Lett. {\\bf 315}, 299 (1999).\n\\bibitem{ivanapre1}\nI. Vidanovi\\' c, A. Bogojevi\\' c, and A. Beli\\' c, \nPhys. Rev. E {\\bf 80}, 066705 (2009).\n\\bibitem{ivanapre2}\nI. Vidanovi\\' c, A. Bogojevi\\' c, A. Bala\\v z, and A. Beli\\' c, \nPhys. Rev. E {\\bf 80}, 066706 (2009).\n\\bibitem{becpla}\nA. Bala\\v z, I. Vidanovi\\' c, A. Bogojevi\\' c, and A. Pelster,\nPhys. Lett. A {\\bf 374}, 1539 (2010).\n\\bibitem{feynmanstat}\nR. P. Feynman,\n\\emph{Statistical Mechanics}\n(W. A. Benjamin, New York, 1972).\n\\bibitem{parisi}\nG. Parisi,\n\\emph{Statistical Field Theory} (Addison Wesley, New York, 1988).\n\\bibitem{prl-speedup}\nA. Bogojevi\\' c, A. Bala\\v z, and A. Beli\\' c,\nPhys. Rev. Lett. {\\bf 94}, 180403 (2005).\n\\bibitem{prb-speedup}\nA. Bogojevi\\' c, A. Bala\\v z, and A. Beli\\' c,\nPhys. Rev. B {\\bf 72}, 064302 (2005).\n\\bibitem{pla-euler}\nA. Bogojevi\\' c, A. Bala\\v z, and A. Beli\\' c,\nPhys. Lett. A {\\bf 344}, 84 (2005).\n\\bibitem{pre-ideal}\nA. Bogojevi\\' c, A. Bala\\v z, and A. Beli\\' c,\nPhys. Rev. E {\\bf 72}, 036128 (2005).\n\\bibitem{pre-recursive}\nA. Bala\\v z, A. Bogojevi\\' c, I. Vidanovi\\' c, and A. Pelster,\nPhys. Rev. E {\\bf 79}, 036701 (2009).\n\\bibitem{jelapla}\nJ. Gruji\\' c, A. Bogojevi\\' c, and A. Bala\\v z,\nPhys. Lett. A {\\bf 360}, 217 (2006).\n\\bibitem{speedup}\nSPEEDUP code distribution,\n{\\tt http:\/\/www.scl.rs\/speedup\/}\n\n\\bibitem{ceperley}\nD. M. Ceperley,\nRev. Mod. Phys. {\\bf 67}, 279 (1995).\n\\bibitem{takahashiimada}\nM. Takahashi and M. Imada,\nJ. Phys. Soc. Jpn. {\\bf 53}, 3765 (1984).\n\\bibitem{libroughton}\nX. P. Li and J. Q. Broughton,\nJ. Chem. Phys. {\\bf 86}, 5094 (1987).\n\\bibitem{deraedt2}\nH. De~Raedt and B. De~Raedt,\nPhys. Rev. A {\\bf 28}, 3575 (1983).\n\\bibitem{jangetal}\nS. Jang, S. Jang, and G. Voth,\nJ. Chem. Phys. {\\bf 115}, 7832 (2001).\n\\bibitem{makrimiller}\nN. Makri and W. H. Miller,\nChem. Phys. Lett. {\\bf 151}, 1 (1988);\nN. Makri and W. H. Miller,\nJ. Chem. Phys. {\\bf 90}, 904 (1989).\n\\bibitem{makri}\nN. Makri,\nChem. Phys. Lett. {\\bf 193}, 435 (1992).\n\\bibitem{alfordetal}\nM. Alford, T. R. Klassen, and G. P. Lepage,\nPhys. Rev. D {\\bf 58}, 034503 (1998).\n\\bibitem{chinkrotscheck}\nS.~A. Chin and E.~Krotscheck, \nPhys. Rev. E {\\bf 72}, 036705 (2005).\n\\bibitem{hernandez}\nE. R. Hern\\' andez, S. Janecek, M. Kaczmarski, E. Krotscheck, \nPhys. Rev. B {\\bf 75}, 075108 (2007).\n\\bibitem{ciftja}\nO. Ciftja and S.~A. Chin, \nPhys. Rev. B {\\bf 68}, 134510 (2003).\n\\bibitem{sakkos}\nK. Sakkos, J. Casulleras, and J. Boronat, \nJ. Chem. Phys. {\\bf 130}, 204109 (2009).\n\\bibitem{janecek}\nS. Janecek and E. Krotscheck,\nComput. Phys. Comm. {\\bf 178}, 835 (2008).\n\\bibitem{bandrauk}\nA.~D. Bandrauk and H. Shen, \nJ. Chem. Phys. {\\bf 99}, 1185 (1993).\n\\bibitem{chinchen}\nS.~A. Chin and C.~R. Chen, \nJ. Chem. Phys. {\\bf 117}, 1409 (2002).\n\\bibitem{omelyan}\nI.~P. Omelyan, I.~M. Mryglod, and R. Folk, \nComput. Phys. Commun. {\\bf 151}, 272 (2003).\n\\bibitem{bayepre}\nG. Goldstein and D. Baye,\nPhys. Rev. E {\\bf 70}, 056703 (2004).\n\\bibitem{chinarxiv}\nS. A. Chin, arXiv:0809.0914.\n\\bibitem{krotscheck}\nS. A. Chin, S. Janecek, and E. 
Krotscheck,\nComput. Phys. Comm. {\\bf 180}, 1700 (2009).\n\\bibitem{chin}\nS. A. Chin, S. Janecek, and E. Krotscheck,\nChem. Phys. Lett. {\\bf 470}, 342 (2009).\n\\bibitem{mathematica}\nMathematica software package,\n{\\tt http:\/\/www.wolfram.com\/mathematica\/}\n\\bibitem{mathtensor}\nMathTensor package,\n{\\tt http:\/\/smc.vnet.net\/mathtensor.html}\n\\bibitem{sprng}\nScalable Parallel Random Number Generator library, {\\tt http:\/\/sprng.fsu.edu\/}\n\\end{thebibliography}\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nStar formation is expected to have a strong impact on the interstellar medium (ISM).\nIn particular, young massive stars have strong stellar winds and radiation pressure capable of expelling the surrounding gas.\nAt a later stage, the massive stars produce supernova explosions that heat the surrounding gas through thermal energy input.\nThese interactions play an important role in the evolution of galaxies.\nThe resulting lack of cold gas suppresses star formation activity, a process known as stellar feedback \\citep{Cole+00,Hopkins+12,Leroy+15}.\nStar formation also contributes to the metal enrichment of the intergalactic medium by ejecting gas from galaxies \\citep{Tremonti+04,Dayal+13,Andrews&Martini13}.\nIn particular, stellar feedback is effective in galaxies with low stellar masses ($M_*$), whose shallow gravitational potential wells may not be able to retain the gas.\nThe shallow potential may also be the cause of the increasing fraction of escaping gas from $M_*\\sim10^{12}~M_\\odot$ towards $M_*\\sim10^9~M_\\odot$ \\citep{Arribas+14,Bruno+19}.\nMoreover, stellar feedback may be able to explain two well-known observational results for low-mass galaxies that differ from the predictions of dark-matter-only numerical simulations: \nthe observed number of low-mass galaxies and the density profiles of DM halos \\citep{Brooks19}.\n\nGalactic outflows are among the most important and noticeable consequences of stellar feedback.\nThe strength of stellar feedback can be quantified by the mass-loading factor ($\\eta$), which is defined as the ratio of the mass loss rate ($\\dot{M}_\\mathrm{out}$) to the star formation rate (SFR).\nSemianalytic models have adopted prescriptions for the feedback process assuming either energy-driven outflows with $\\eta\\propto v_\\mathrm{cir}^{-2}$ or momentum-driven outflows with $\\eta\\propto v_\\mathrm{cir}^{-1}$ (see Equations 13 and 35 in \\citealt{Murray+05}), where $v_\\mathrm{cir}$ is the circular velocity of the DM halo.\nRecent simulations can resolve the star formation activity and the outflowing gas in individual galaxies.\nHowever, the ``sub-grid'' physics of outflows still relies on theoretical or empirical formulae \\citep[e.g.,][]{Muratov+15,Christensen+18,Hu19}.\nIn this paper we focus on low-mass galaxies below $M_*\\sim10^7~M_\\odot$, a mass range for which the ``sub-grid'' physics is particularly poorly understood and which has not been fully explored by previous observational studies.\n\nIt is known that outflows are composed of hot ($T\\sim10^6~\\mathrm{K}$), warm ($T\\sim10^4~\\mathrm{K}$), and cold ($T\\sim10^1-10^3~\\mathrm{K}$) phases \\citep[e.g.,][]{Veilleux05}.\nIn the present work, we focus on the warm-phase gas, which is found to be widely extended in young low-mass star-forming galaxies \\citep{Pardy+16}.\nFor warm-phase gas, various methods are used in the literature to measure gas motion with emission lines, including two-component fitting 
\\citep[e.g.,][]{Freeman+19,Bruno+19}, \nnon-parametric procedures \\citep[e.g.,][]{Cicone+16},\nand spatially resolved line mapping \\citep[e.g.,][]{McQuinn+19,Bik+15}.\nTwo-component fitting decomposes the emission profile into narrow and broad components.\nThe narrow components are broadened by the rotation or dispersion of the gas, while the broad components are thought to trace the motion of outflowing gas.\nTwo-component fitting of emission lines requires\nhigh spectral resolution to resolve the emission line profiles.\nSuch observations are particularly challenging for low-mass galaxies, due to their low brightness and intrinsically narrow emission lines.\n\nThis paper is the sixth paper of a program named ``Extremely Metal-Poor Representatives Explored by the Subaru Survey (EMPRESS)'' started by \\citeauthor{Kojima+20a} (\\citeyear{Kojima+20a}; hereafter K20). \nUsing machine-learning techniques, K20 select extremely metal-poor galaxies (EMPGs) from their source catalogs that are constructed with the data from the Subaru\/Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP, \\citealt{HSC}) and the 13th release of the Sloan Digital Sky Survey (SDSS DR13, \\citealt{SDSS13}).\nK20 identify 113 EMPG candidates in the local universe that have compact sizes and possibly very small stellar masses.\nThis paper describes a dataset with sufficient spectral resolution and signal-to-noise ratio (SNR) to study the properties of such objects.\nIn Section \\ref{observations}, we describe our MagE observations and the data reduction. \nIn Section \\ref{sample}, we present our galaxy sample, and in Section \\ref{analysis}, we fit the spectra and derive galaxy properties. In Section \\ref{results}, we present our results on outflow properties.\nIn Sections \\ref{discussion} and \\ref{summary}, we discuss and summarize our observational results, respectively.\nThroughout the paper we adopt a cosmological model with $H_0=70~\\mathrm{km~s^{-1}~Mpc^{-1}}$, $\\Omega_\\Lambda=0.7$, and $\\Omega_{m}=0.3$.\n\n\\begin{deluxetable}{ccccc}\n \\label{tab:obsobj}\n \\tablecaption{Summary of the MagE Observations}\n \\tablewidth{0pt}\n \\tablehead{\n \\colhead{ID} & \\colhead{R.A.} & \\colhead{Decl.} & \\colhead{Exposure} & \\colhead{Ref.} \\\\\\\\\n \\colhead{} & \\colhead{(hh:mm:ss)} & \\colhead{(dd:mm:ss)} & \\colhead{(sec)} & \\colhead{} \\\\\\\\\n \\colhead{(1)} & \\colhead{(2)} & \\colhead{(3)} & \\colhead{(4)} & \n \\colhead{(5)}\n }\n \\startdata\n \\multicolumn{5}{c}{Photometric Candidates}\\\\\\\\\n \\hline\n J0845$+$0131 & 08:45:30.81 & $+$01:31:51.19 & 1,800 & (a)\\\\\\\\\n J0912$-$0104 & 09:12:18.12 & $-$01:04:18.32 & 2,700 & (a)\\\\\\\\\n J0935$-$0115 & 09:35:39.20 & $-$01:15:41.41 & 1,350 & (a)\\\\\\\\\n J1011$+$0201 & 10:11:47.80 & $+$02:01:08.30 & 1,400 & (a)\\\\\\\\\n J1210$-$0103 & 12:10:33.54 & $-$01:03:11.69 & 2,700 & (a)\\\\\\\\\n J1237$-$0016 & 12:37:47.89 & $-$00:16:00.76 & 1,800 & (a)\\\\\\\\\n J1401$-$0040 & 14:01:07.61 & $-$00:40:50.10 & 1,800 & (a)\\\\\\\\\n J1411$-$0032 & 14:11:03.69 & $-$00:32:40.77 & 1,800 & (a)\\\\\\\\\n J1452$+$0241 & 14:52:55.28 & $+$02:41:01.31 & 1,800 & (b)\\\\\\\\\n J1407$-$0047 & 14:07:10.69 & $-$00:47:26.31 & 1,800 & (b)\\\\\\\\\n \\hline\\hline\n \\multicolumn{5}{c}{Spectroscopically-Confirmed Galaxies}\\\\\\\\\n \\hline\n J1044$+$0353 & 10:44:57.80 & $+$03:53:13.30 & 900 & (c)\\\\\\\\\n J1253$-$0312 & 12:53:05.97 & $-$03:12:58.49 & 600 & (d)\\\\\\\\\n J1323$-$0132 & 13:23:47.46 & $-$01:32:51.94 & 1,800 & (e)\\\\\\\\\n J1418$+$2102 & 14:18:51.13 & $+$21:02:40.02 & 1,200 & (f)\\\\\\\\\n \\enddata\n \\tablecomments{Columns: (1) ID. (2) R.A. in J2000. 
(3) Declination in J2000. (4) Total exposure time. (5) Reference. (a) \\cite{Kojima+20a}, (b) K. Nakajima et al. in preparation, (c) \\cite{Kniazev+03}, (d) \\cite{Kniazev+04}, (e) \\cite{Izotov+12}, and (f) \\cite{Sanchez-Almeida+16}.}\n\\end{deluxetable}\n\n\n\\section{Observations and Data Reduction}\n\\label{observations}\nWe carry out deep spectroscopy for 14 targets, of which 10 are EMPG candidates from K20 and K. Nakajima et al. (in preparation).\nThe other 4 targets are bright, spectroscopically-confirmed local dwarf galaxies taken from \\cite{Kniazev+03}, \\cite{Kniazev+04}, \\cite{Izotov+12}, and \\cite{Sanchez-Almeida+16}.\nOur targets are summarized in Table \\ref{tab:obsobj}.\n\n\\subsection{Observations}\nAll the targets were observed with the Magellan Echellette (MagE) spectrograph mounted on the Magellan Baade Telescope on 2021 February 9th (PI: M. Rauch). \nA $0''.70 \\times10''$ slit was used at the parallactic angle to avoid wavelength-dependent slit losses.\nAlthough some targets show a bright clump on a diffuse tail, a feature known as the tadpole morphology \\citep[e.g.,][]{Morales-Luis+11,SanchezAlmeida+13},\nthe observing program was aimed at exploring the bright clumps.\nTherefore, we placed the slit to cover the bright clumps.\nFor each target, we obtained 2-3 science frames with exposure times of $300-900$ seconds depending on the luminosity of the target.\nWe took Qh lamp and in-focus (out-of-focus) Xe flash frames as the flat-fielding data for the red and blue (very blue) orders of the spectra, respectively.\nThe standard stars, HD49798 and CD329927, were observed at the beginning and the end of the observing run, respectively.\nDuring the observations, the sky was clear with a typical seeing of $\\sim0''.6$.\n\n\\subsection{Data Reduction}\nWe reduce the MagE spectroscopic data with PypeIt \\citep{pypeit:joss_arXiv,pypeit:zenodo}, an open-source reduction package for spectroscopic data written in Python.\nOur data reduction basically follows standard procedures, including flat fielding, wavelength calibration, sky subtraction, cosmic-ray removal, and extraction of one-dimensional (1D) spectra. \nWe separately use the two types of flat-fielding frames, the Qh lamp and the Xe flash, and find no differences between the flat-fielded 1D spectra in the blue orders. \nHowever, the Xe flash frames present broad emission line features in the red orders. \nThus, we adopt the flat-fielding results given by the Qh lamp frames for all orders from blue to red.\n\nPypeIt does not fit a model of the sky background correctly around bright extended emission lines (e.g., H$\\alpha$ and [{\\sc Oiii}]$\\lambda 5007$). Given that most emission lines we investigate do not overlap with strong skylines, we adopt a flat sky background in pixels where strong emission lines exist and estimate its flux level by averaging the sky background in the nearby pixels.
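\n\nAs a concrete illustration of this step, the sketch below replaces the sky estimate under a bright line with the mean of the sky in adjacent line-free pixels; it is a minimal stand-alone example, not PypeIt's internal implementation, and all array names and values are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef flat_sky(sky, line_mask, pad=20):\n    # replace the sky model under a bright emission line with the\n    # mean of the sky in adjacent line-free pixels\n    sky = sky.copy()\n    idx = np.where(line_mask)[0]\n    lo, hi = idx.min(), idx.max()\n    neighbors = np.r_[sky[max(lo - pad, 0):lo],\n                      sky[hi + 1:hi + 1 + pad]]\n    sky[line_mask] = neighbors.mean()\n    return sky\n\n# mock example: a sky model biased upward under the line\nsky = np.full(200, 1.0)\nline_mask = np.zeros(200, dtype=bool)\nline_mask[95:105] = True\nsky[line_mask] = 3.0      # spurious bump from the failed sky fit\nsky = flat_sky(sky, line_mask)\n\\end{verbatim}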
\n\nWe perform boxcar extraction using the sky-subtracted 2D spectra.\nWe find no clear emission lines for J1011$+$0201.\nFor the other 13 targets, PypeIt successfully identifies the spatial position on each 2D spectrum that corresponds to the bright clump of the target, while the emission from the diffuse tail is not clearly seen.\nAround the position identified by PypeIt, a spatial extraction width is manually selected to extract a 1D spectrum that well represents the bright clump.\nThe 1D spectra extracted from the different exposures of each target are combined via PypeIt using \\texttt{pypeit\\_coadd\\_1dspec}.\nThe coadded 1D spectra have a spectral resolution of $\\sim27.8~\\mathrm{km~s^{-1}}$, which is estimated from the unresolved emission lines of the lamp data (see Section \\ref{fitting1d}).\nFlux errors composed of read-out and photon noise are estimated and propagated to the 1D spectra by PypeIt.\nOf the two standard stars, HD49798 and CD329927, we choose whichever was closer to each target in declination for flux calibration.\nWe also apply telluric corrections via PypeIt using \\texttt{pypeit\\_tellfit}.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\linewidth]{SFR_M.pdf}\n \\caption{Stellar masses and SFRs.\n The red and black circles are our sample.\n For the galaxies represented by red (black) circles, the emission lines have double-Gaussian (single-Gaussian) profiles.\n The blue, cyan, green, and magenta symbols indicate galaxies at $z\\sim0, 1, 2$, and $5-6$ that we take from previous outflow studies.\n For studies from which we cannot obtain data for individual galaxies, parameter ranges are shown as rectangles.\n The blue rectangle, diamonds, and crosses represent galaxies investigated by \\cite{Bruno+19}, \\cite{Perrotta+21}, and \\cite{McQuinn+19}, respectively, who use emission lines to study warm-phase outflows.\n Note that \\cite{Bruno+19} and \\cite{Perrotta+21} conduct double-Gaussian profile fitting to the emission lines, while \\cite{McQuinn+19} uses single-Gaussian fitting.\n The cyan and green rectangles represent galaxies studied by \\cite{Swinbank+19} and \\cite{Freeman+19}, respectively, who use stacked H$\\alpha$ emission.\n The blue open triangles are $z\\sim0$ galaxies investigated by \\cite{Heckman+15} and \\cite{Heckman+16} using absorption-line methods. \n The blue, cyan, green, and magenta inverse triangles are taken from \\cite{Sugahara+17,Sugahara+19}, who investigate galaxies at different redshifts with absorption-line methods.\n The grey lines indicate specific star formation rates of $\\log(\\mathrm{sSFR\/Gyr^{-1}})=(0, 1, 2, 3, 4)$.}\n \\label{fig:sfr_m}\n\\end{figure}\n\n\\input{tab_sample.tex}\n\n\\section{Sample and Spectroscopic Data}\n\\label{sample}\nOur sample in this paper is composed of two sets of MagE spectra: those obtained in this study (Section \\ref{observations}) and those from K20.\nExploiting the high spectral resolution of MagE, we aim to resolve the spectral profiles of the emission lines in our sample.\n\n\\subsection{Data from this Study}\nIn this study, ten EMPG candidates from K20 and K. Nakajima et al. (in preparation) are observed with MagE (Section \\ref{observations}). 
\nWe find that nine of the ten EMPG photometric candidates are galaxies with confirmed emission lines, while one object is a Galactic star. \nWe also investigate the MagE spectra of the 4 bright spectroscopically-confirmed EMPGs (Section \\ref{observations}), and find that the SNRs of their strong emission lines are sufficiently high.\nWe thus obtain 13 (=9+4) galaxies useful for our analysis.\n\n\\subsection{Data from K20}\nK20 conduct follow-up spectroscopy for 8 EMPG candidates with MagE.\nAll 8 sources are confirmed as star-forming galaxies, and 2 out of the 8 sources are EMPGs with $12+\\log(\\mathrm{O\/H})<7.69$.\nThese 8 galaxies are useful for outflow studies in the low-mass regime because of their low stellar masses ($\\log [M_*\/M_\\odot] =5-7$) and active star formation. \nThe physical properties of these galaxies are taken from K20 and listed in Table \\ref{tab:sample}.\n\nFinally, we combine the K20 spectra (8 galaxies) with the newly observed data (13 galaxies) and obtain a total of 21 (=8+13) galaxies for our sample, as listed in Table \\ref{tab:sample}.\nIn Figure \\ref{fig:sfr_m}, we show the SFRs and stellar masses of our galaxies, which are derived in Section \\ref{galaxy_property}.\nWe also include the galaxies investigated by several previous outflow studies.\nOur galaxies have high specific star formation rates ($\\mathrm{sSFR=SFR}\/M_*$), suggestive of active star formation that may lead to observable outflow signatures.\n\n\n\\begin{figure*}[ht!]\n\\includegraphics[width=\\textwidth]{profile_fitting1.pdf}\n\\caption{Spectra of H$\\alpha$ and [{\\sc Oiii}] lines with the best-fit profiles.\nWe only show the spectra for the galaxies that have double-Gaussian profiles in at least one of the H$\\alpha$ and [{\\sc Oiii}] lines.\nEach panel has two sub-panels presenting the emission lines (top) and the fitting residuals (bottom).\nIn the top panels, the black histograms indicate the observed spectra, while the blue and red solid lines are the best-fit single and double Gaussian profiles, respectively.\nThe red dashed lines denote the individual narrow and broad components of the double-Gaussian profiles. \nThe flux densities are given in units of $10^{-17}\\mathrm{erg~s^{-1}~cm^{-2}~\\AA^{-1}}$.\nThe wavelengths are in the rest frame.\nFor the panels with H$\\alpha$ lines, the nearby [{\\sc Nii}] lines are outside the wavelength range and have no effect on the fitting of the H$\\alpha$ lines.\nThe inset panels show zoomed-in views of the spectra, the best-fit single Gaussian profiles, and the broad components.\nIn the bottom panels, we show the fitting residuals normalized by the 1$\\sigma$ flux errors, with the blue and red histograms for the single and double Gaussian profiles, respectively.\nThe shaded regions indicate the $3\\sigma$ errors.}\n\\label{fig:fitting1}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\includegraphics[width=\\textwidth]{profile_fitting2.pdf}\n\\caption{Figure \\ref{fig:fitting1}, continued.}\n\\label{fig:fitting2}\n\\end{figure*}\n\n\\begin{figure}[ht!]\n \\centering\n \\gridline{\n \\fig{J2115-1734_Inst_Ha.pdf}{0.24\\textwidth}{}\n \\fig{J2115-1734_Voigt_Ha.pdf}{0.24\\textwidth}{}\n }\n \\caption{\\textit{left}: Best-fit single-Gaussian profile. For the blue line, no instrumental broadening is taken into account, while for the red line, the instrumental profile is convolved with the model before fitting. 
\\textit{right}: Same as the left panel, but now the red line includes the effect of natural broadening instead of instrumental broadening.}\n \\label{fig:fitting_justification}\n\\end{figure}\n\n\\section{Analysis}\n\\label{analysis}\n\\subsection{Spectrum Fitting}\n\\label{fitting1d}\nWe investigate the line profiles of the H$\\alpha$ and [{\\sc Oiii}]$\\lambda 5007$ emission lines, both of which have been used to probe outflows driven by star formation in previous studies \\citep[e.g.,][]{Cicone+16,Bruno+19}.\nThe K20 galaxies include 4 galaxies with saturated lines of [{\\sc Oiii}]$\\lambda 5007$ and\/or H$\\alpha$.\nIn such cases, assuming that the H$\\beta$ ([{\\sc Oiii}]$\\lambda 4959$) line has a profile similar to that of the H$\\alpha$ ([{\\sc Oiii}]$\\lambda 5007$) line, we investigate the profiles of the H$\\beta$ ([{\\sc Oiii}]$\\lambda 4959$) lines instead.\nWe investigate the profiles of the H$\\alpha$ lines without considering the nearby [{\\sc Nii}]$\\lambda\\lambda6548,6583$ lines because the H$\\alpha$ lines are relatively narrow and clearly separated from the [{\\sc Nii}] lines.\nWe fit the emission lines with Gaussian profiles, using a customized non-linear least-squares fitting routine with the package \\texttt{minpack.lm} in the R language.\nFor each Gaussian profile, there are four free parameters: the amplitude, the line width ($\\sigma_\\mathrm{obs}$), and the line central wavelength, which define the Gaussian shape, plus one parameter for the flat continuum.\nWe do not fix the line central wavelengths to the systemic redshifts determined by multiple emission lines (Section \\ref{galaxy_property}), because some of the emission lines may be shifted from the systemic redshifts. \nBy looking at the fitting residuals, we find that the emission line profiles show broad wings and statistically significant residuals over the best-fit Gaussian profiles for 14 out of the 21 galaxies, which are presented in Figures \\ref{fig:fitting1} and \\ref{fig:fitting2}.\nFor the remaining 7 galaxies, we do not find significant broad wings.\n\nThe SNRs calculated within 3$\\sigma_\\mathrm{obs}$ around the central wavelengths are as high as $\\sim 1000$ (see Figure \\ref{fig:BN_SNR}).\nBecause the high SNRs may reveal complex structures, we need to consider the possibility that the broad wings present in the emission lines originate from the instrumental broadening of MagE.\nWe identify several strong skylines and stack them to approximate the instrumental profile, assuming that the instrumental profile does not depend on wavelength.\nWe follow the same procedure as the single-Gaussian fitting, except that we convolve the Gaussian profile with the instrumental profile before fitting the parameters.\nIn the left panel of Figure \\ref{fig:fitting_justification}, we show one example spectrum that is fitted by the single-Gaussian profile with and without instrumental broadening.\nWhether or not we take into account the instrumental broadening, the best-fit single-Gaussian profile shows broad wings and large residuals.\nWe thus conclude that the broad wings cannot be dominated by the instrumental profile.\n\nAnother possible origin of the broad wings is the intrinsic natural broadening of emission lines, which has a Lorentzian profile.\nWe similarly fit the emission lines with the convolution of Gaussian and Lorentzian profiles, which is known as the Voigt profile.\nWe fix the line width of the Lorentzian profile to the typical value produced by the natural broadening of H$\\alpha$, $\\mathrm{FWHM}\\sim4.6\\times10^{-3}~\\mathrm{\\AA}$.\nAs shown in the right panel of Figure \\ref{fig:fitting_justification}, the best-fit single Gaussian profile with natural broadening cannot explain the broad wings.\nAlthough pressure broadening from collisions of atoms also has a Lorentzian profile, it is negligible in nebular gas.
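\n\nThe sketch below illustrates the convolution test in Python (the fits in this paper use an R routine); the mock kernel stands in for the stacked sky-line profile, and all names and values are illustrative.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\n# instrumental kernel: in practice, stacked and normalized sky lines;\n# here a mock Gaussian kernel on the pixel grid\npix = np.arange(-10, 11)\ninst_kernel = np.exp(-0.5 * (pix \/ 2.0)**2)\ninst_kernel \/= inst_kernel.sum()\n\ndef convolved_gauss(wave, amp, mu, sigma, cont):\n    # single Gaussian convolved with the instrumental profile\n    gauss = amp * np.exp(-0.5 * ((wave - mu) \/ sigma)**2)\n    return cont + np.convolve(gauss, inst_kernel, mode='same')\n\n# mock spectrum around [O III] 5007 (Angstrom)\nwave = np.linspace(5000.0, 5014.0, 300)\nflux = convolved_gauss(wave, 5.0, 5007.0, 1.0, 0.2)\nflux += np.random.default_rng(3).normal(0.0, 0.02, wave.size)\nerr = np.full(wave.size, 0.02)\n\npopt, pcov = curve_fit(convolved_gauss, wave, flux, sigma=err,\n                       p0=[flux.max(), 5007.0, 1.0, 0.0])\n\\end{verbatim}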
\n\nHaving ruled out instrumental broadening and natural broadening, we consider it likely that the emission lines consist of multiple Gaussian components \\citep[e.g.,][]{Newman+12,Freeman+19}.\nWe conduct double-Gaussian profile fitting using a total of seven free parameters: the flat continuum and two sets of three Gaussian parameters, namely the amplitude, central wavelength, and line width.\nFrom the double-Gaussian profile fitting, we obtain two Gaussian profiles with different line widths. \nWe refer to the narrower (broader) Gaussian profile as a narrow (broad) component.\nWe allow for a velocity difference ($\\Delta v$) between the central wavelengths of the narrow and broad components.\nIn the spectra shown in the top panels of Figures \\ref{fig:fitting1} and \\ref{fig:fitting2}, the narrow\/broad components and the overall shapes of the double-Gaussian profiles are overplotted with dashed and solid red lines, respectively.\nThe double-Gaussian profiles explain the line shapes well, leaving very small residuals. \nWe thus do not increase the number of components to three or beyond.\nFinally, we conclude that the line shapes of the 14 galaxies are well explained by the double-Gaussian profile.\nFor the following analysis, we assume that the broad components trace outflowing gas.\nIn Section \\ref{profile_discuss} we discuss alternative interpretations of the double-Gaussian profile and show that the broad components most likely originate from outflows.\n\nTable \\ref{tab:line} presents the best-fit parameters of the double-Gaussian profiles. \nThe line widths listed in Table \\ref{tab:line} are not corrected for instrumental broadening.\nIn the following analysis, we calculate the intrinsic line widths by quadratically subtracting $\\sigma_\\mathrm{inst}$ from the $\\sigma_\\mathrm{n}$ and $\\sigma_\\mathrm{b}$ values in Table \\ref{tab:line}, i.e., $\\sigma=(\\sigma_\\mathrm{obs}^2-\\sigma_\\mathrm{inst}^2)^{1\/2}$.\nFor our MagE spectra taken with the slit width of $0.''7$, we evaluate $\\sigma_\\mathrm{inst}$ to be $27.8~\\mathrm{km~s^{-1}}$ in the same manner as K20, who use the unresolved emission lines of the lamp data. \nFor the K20 spectra, we apply the $\\sigma_\\mathrm{inst}$ values obtained by K20, which are $26.4$ and $33.3~\\mathrm{km~s^{-1}}$ for the slit widths of $0.''85$ and $1.''20$, respectively.
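\n\nAs an illustration of the seven-parameter fit and the quadrature correction, the Python sketch below fits a mock H$\\alpha$ line; the actual fits in this paper use \\texttt{minpack.lm} in R, and all names and values here are illustrative.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef double_gauss(wave, cont, a_n, mu_n, sig_n, a_b, mu_b, sig_b):\n    # flat continuum + narrow and broad Gaussian components\n    narrow = a_n * np.exp(-0.5 * ((wave - mu_n) \/ sig_n)**2)\n    broad = a_b * np.exp(-0.5 * ((wave - mu_b) \/ sig_b)**2)\n    return cont + narrow + broad\n\n# mock spectrum around H-alpha (Angstrom); in practice, use the\n# observed wavelength, flux, and error arrays\nwave = np.linspace(6540.0, 6585.0, 400)\nflux = double_gauss(wave, 0.1, 10.0, 6562.8, 0.6, 1.5, 6562.9, 2.0)\nflux += np.random.default_rng(2).normal(0.0, 0.05, wave.size)\nerr = np.full(wave.size, 0.05)\n\np0 = [0.0, flux.max(), 6562.8, 0.5, 0.1 * flux.max(), 6562.8, 2.0]\npopt, pcov = curve_fit(double_gauss, wave, flux, sigma=err, p0=p0)\n\n# quadrature correction for instrumental broadening, with both\n# widths expressed in km\/s\nsigma_inst = 27.8\nsig_b_obs = 90.0          # fitted broad width converted to km\/s\nsig_b_int = np.sqrt(sig_b_obs**2 - sigma_inst**2)\n\\end{verbatim}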
\n\n\\subsection{Galaxy Properties}\n\\label{galaxy_property}\n\n\\input{tab_fitting}\n\n\\input{tab_photometry}\n\n\\input{tab_outflow}\n\nWe estimate the redshifts, dust extinction, stellar masses, stellar ages, and star-formation rates (SFRs) of the thirteen galaxies observed in our MagE run (Table \\ref{tab:obsobj}).\nFor the galaxies taken from K20, we adopt the values estimated in K20.\nWe also estimate the circular velocities of all our galaxies for further analysis.\nDiscussions of the galaxy properties will be presented in the forthcoming paper (K. Nakajima et al. in preparation).\n\nWe estimate redshifts, $z$, and color excesses, $E(B-V)$, in the same manner as K20.\nWe compare the observed central wavelengths of 4 strong emission lines (H$\\alpha$, H$\\beta$, [{\\sc Oiii}]$\\lambda 5007$, and [{\\sc Oiii}]$\\lambda 4959$) with the respective rest-frame wavelengths in air.\nColor excesses are calculated from the Balmer decrements of H$\\alpha$, H$\\beta$, H$\\gamma$, H$\\delta$, and H$\\varepsilon$ under the assumptions of case B recombination and the dust attenuation curve of the Small Magellanic Cloud \\citep{SMC1,SMC2}.\n\nIn order to obtain the stellar masses and ages, we use the BayEsian Analysis of GaLaxy sEds (BEAGLE, v0.23.0; \\citeauthor{Chevallard+16} \\citeyear{Chevallard+16}) to fit photometry data acquired from the Galaxy Evolution Explorer (GALEX; \\citealt{GALEX}), SDSS, and HSC (Table \\ref{tab:photometry}).\nFor the HSC photometry of $griz$, we use the HSC-SSP internal data of the latest S20A data release, and adopt the ``cmodel\" photometry.\nFor the SDSS data ($ugriz$), we use the ``ModelMag\" photometry from the DR16 catalog \\citep{SDSS16}.\nThe $u$-band photometry of SDSS is also used for the HSC sources if available, where we assume the ($u-g$) color of SDSS and the $g$-band magnitude of HSC.\nFor the GALEX photometry of the FUV and NUV bands, we choose the deepest images available for each of the sources from the MAST archive \\footnote{\\url{https:\/\/archive.stsci.edu\/}}, and obtain the total magnitudes using ``MAG\\_AUTO\" of SExtractor. \nWe do not use the GALEX photometry for the three galaxies in our sample that are highly blended with nearby sources in the GALEX images.\nAll magnitudes are corrected for Galactic extinction based on the map of \\cite{milkyway_extinction} as well as the extinction curve of \\cite{Cardelli+89}.\nThe results are listed in Table \\ref{tab:photometry}.\nBecause the $g$- and $r$-band magnitudes include contributions from strong emission lines, we choose the spectral energy distribution (SED) templates of \\cite{Gutkin+16} that include nebular emission lines.\nWe adopt the \\cite{Chabrier+03} stellar initial mass function (IMF) with an upper mass cutoff of $100~M_\\odot$.\nFor the fitting process, we assume a constant star-formation history and the dust extinction curve of \\cite{Charlot+00}.\nWe fit 5 free parameters with uniform prior distributions: the age of the star-formation period ($4 < \\log[t\/\\mathrm{yr}] < 10$), stellar mass ($4 < \\log[M\/M_\\odot] < 9$), metallicity ($-2 < \\log[\\mathrm{Z\/Z_\\odot}] < 0$), nebular ionization parameter ($-3.0 < \\log\\mathrm{U} < -0.5$), and $V$-band attenuation optical depth ($0 < \\hat{\\tau}_V < 3$).\nThe redshifts are fixed to the values derived from the spectra, which are used to derive the distances of the galaxies during the fitting process.\nAfter we run the SED fitting, we take into account the peculiar motions of the galaxies, which affect the estimation of distances and stellar masses.\nAs our galaxies have low redshifts, we consider that the luminosity distances are proportional to $cz-v_\\mathrm{pec}$, where $c$ is the speed of light and $v_\\mathrm{pec}$ is the line-of-sight velocity of the peculiar motion.\nFor each galaxy, we perform 1000 Monte Carlo simulations of $v_\\mathrm{pec}$, in which the $1\\sigma$ scatter is $\\sim300~\\mathrm{km~s^{-1}}$ \\citep[e.g.,][]{Kessler+09}.\nWe then apply the correction factors of $(cz-v_\\mathrm{pec})^2\/(cz)^2$ to the posterior distribution of the stellar mass given by the SED fitting.\nWe combine the distributions obtained in the 1000 simulations and take the 16th, 50th, and 84th percentiles.\nFinally, we obtain estimates of the stellar masses and stellar ages in the ranges of $\\sim10^{4.2}-10^{7.4}~M_\\odot$ and $\\sim10^{6}-10^{8.4}~\\mathrm{yr}$, respectively.
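\n\nA minimal sketch of this Monte Carlo correction is given below, assuming a Gaussian $v_\\mathrm{pec}$ distribution; the posterior array and all values are mock placeholders.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nc = 2.998e5                    # speed of light [km\/s]\nz = 0.01                       # spectroscopic redshift (illustrative)\n\n# mock stellar-mass posterior from the SED fitting [Msun]\nmass_posterior = rng.lognormal(np.log(1e6), 0.2, 5000)\n\n# luminosity distance ~ (cz - v_pec), so masses scale as the square\nsamples = []\nfor v_pec in rng.normal(0.0, 300.0, 1000):   # 1-sigma = 300 km\/s\n    samples.append(mass_posterior * (c * z - v_pec)**2 \/ (c * z)**2)\nlo, med, hi = np.percentile(np.concatenate(samples), [16, 50, 84])\n\\end{verbatim}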
\n\nWe estimate SFRs with the relation determined by \\cite{Kennicutt+98}, which assumes a Salpeter IMF:\n\\begin{equation}\n \\mathrm{SFR}~[M_\\mathrm{\\odot}~\\mathrm{yr}^{-1}] = 7.9\\times10^{-42} L(H\\alpha)~ [\\mathrm{erg~s^{-1}}],\n \\label{eq:sfr}\n\\end{equation}\nwhere $L(H\\alpha)$ is the intrinsic luminosity of the H$\\alpha$ emission. \nTo calculate the intrinsic luminosities, we use the line fluxes that are estimated from the best-fit Gaussian profiles in Section \\ref{fitting1d}.\nThe distances are estimated from the redshifts assuming a flat $\\Lambda$CDM cosmology.\nWe apply aperture and dust extinction corrections to the line fluxes.\nFor the aperture correction, we first extract $r$-band photometry from the 1D spectra using the Python package \\texttt{speclite}.\nThe differences between the extracted magnitudes and those listed in Table \\ref{tab:photometry} are then used as the aperture-correction factors.\nFor the uncertainties of the SFRs, we combine the uncertainties of the flux measurements and those originating from the galaxy peculiar motions, in a similar manner to the case of $M_*$.\nFinally, we divide the SFRs by 1.8 to convert them to the Chabrier IMF \\citep[][]{Hsyu+18}.\nWe obtain $\\log(\\mathrm{SFR}\/M_\\odot~\\mathrm{yr}^{-1})\\sim-3.79$ to $0.31$ for our sample.\nThe SFRs estimated from the SED fitting tend to be larger than those estimated from $L(H\\alpha)$ by $\\sim0.1-0.3$ dex.\nIn particular, the SED fitting overestimates the SFR of J1411$-$0032 by $\\sim3$ dex.\nBecause SFRs calculated with BEAGLE can be affected by various fitting parameters, including the star-formation timescale, we adopt the SFRs estimated from the dust-corrected H$\\alpha$ fluxes.\n\nTo estimate $v_\\mathrm{cir}$, we first convert the stellar mass to the mass of the dark matter (DM) halo ($M_\\mathrm{h}$) with the $M_*-M_\\mathrm{h}$ relation for low-mass galaxies proposed by \\cite{Brook+14}:\n\\begin{equation}\n M_*=\\left(\\frac{M_\\mathrm{h}}{79.6\\times10^6~\\mathrm{M}_\\odot}\\right)^{3.1},\n \\label{eq:Mh}\n\\end{equation}\nwhere $M_\\mathrm{h}$ is the DM halo mass defined by an overdensity of $\\Delta_c=200$.\nThen we adopt the equations in \\cite{Mo&White+02} for $v_\\mathrm{cir}$:\n\\begin{equation}\n v_\\mathrm{cir}=\\left(\\frac{GM_\\mathrm{h}}{r_\\mathrm{h}}\\right)^{1\/2},\n\\end{equation}\n\\begin{equation}\n r_\\mathrm{h}=\\left(\\frac{GM_\\mathrm{h}}{100\\Omega_\\mathrm{m}H_0^2}\\right)^{1\/3}(1+z)^{-1},\n\\end{equation}\nwhere $r_\\mathrm{h}$ is the halo radius.\n\\cite{Prole+19} derive $M_\\mathrm{h}$ for galaxies with $M_*\\sim10^5-10^8~M_\\odot$ and find good agreement with the $M_*-M_\\mathrm{h}$ relation of Equation (\\ref{eq:Mh}).\nTo derive the uncertainties, we adopt the typical scatter of $\\Delta\\log(M_\\mathrm{h}\/M_\\odot)\\sim0.24$ calculated by \\cite{Prole+19} for galaxies with $M_*\\sim10^6~M_\\odot$.\nThe uncertainties of $M_*$ and this typical scatter are propagated into the $M_\\mathrm{h}$ and $v_\\mathrm{cir}$ estimations.
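\n\nFor concreteness, the sketch below chains Equation (\\ref{eq:Mh}) and the two relations above to go from stellar mass to circular velocity; constants are in SI units, and the example value is illustrative.\n\\begin{verbatim}\nimport numpy as np\n\nG = 6.674e-11                  # m^3 kg^-1 s^-2\nH0 = 70.0 * 1e3 \/ 3.086e22     # 70 km\/s\/Mpc in s^-1\nOmega_m = 0.3\nM_sun = 1.989e30               # kg\n\ndef v_cir_from_mstar(m_star, z=0.0):\n    # invert the Brook et al. (2014) relation (masses in Msun)\n    m_halo = 79.6e6 * m_star**(1.0 \/ 3.1) * M_sun\n    # halo radius for an overdensity of 200 (Mo & White 2002)\n    r_halo = (G * m_halo \/ (100.0 * Omega_m * H0**2))**(1.0 \/ 3.0)\n    r_halo \/= (1.0 + z)\n    # circular velocity, returned in km\/s\n    return np.sqrt(G * m_halo \/ r_halo) \/ 1e3\n\nprint(v_cir_from_mstar(1e6))   # ~22 km\/s for M* = 1e6 Msun at z=0\n\\end{verbatim}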
\n\nThe galaxy properties are summarized in Table \\ref{tab:sample}.\n\n\\begin{figure*}[!htb]\n \\centering\n \\includegraphics[width=\\textwidth]{BN_grid.pdf}\n \\caption{Broad-to-narrow flux ratio ($BNR$). \\textit{Bottom left}: $BNR$ as a function of $M_*$. The red filled and open circles are the $BNR$ measured with the H$\\alpha$ and [{\\sc Oiii}] lines, respectively. \n The green filled and open squares are taken from \\cite{Freeman+19}, who evaluate the $BNR$ with the H$\\alpha$ and [{\\sc Oiii}] lines.\n The blue diamonds are calculated from the H$\\alpha$ lines by \\cite{Perrotta+21}.\n The cyan squares are calculated by \\cite{Swinbank+19} with stacked H$\\alpha$ spectra.\n \\textit{Bottom right}: Same as the bottom left panel, but for SFR. \\textit{Top left}: Ratio between the $BNR$ from [{\\sc Oiii}] and H$\\alpha$ in log scale.}\n \\label{fig:BN_mass_sfr}\n\\end{figure*}\n\n\\begin{figure}[!htb]\n \\centering\n \\includegraphics[width=\\linewidth]{BN_SNR.pdf}\n \\caption{$BNR$ as a function of the SNR of the emission lines. The symbols are the same as in Figure \\ref{fig:BN_mass_sfr}.}\n \\label{fig:BN_SNR}\n\\end{figure}\n\n\\begin{figure*}[!htb]\n \\centering\n \\includegraphics[width=\\textwidth]{vmax_grid.pdf}\n \\caption{\n Same as Figure \\ref{fig:BN_mass_sfr}, but for the maximum outflow velocity ($v_\\mathrm{max}$).\n The blue circles are taken from \\cite{Bruno+19}.\n The other symbols are the same as in Figure \\ref{fig:sfr_m}.}\n \\label{fig:vmax_mass_sfr}\n\\end{figure*}\n\n\\section{Results}\n\\label{results}\nIn this study, we investigate ionized outflows with the emission lines of [{\\sc Oiii}] and H$\\alpha$, following the procedures taken by previous studies \\citep[e.g.,][]{Concas+17,Freeman+19,Bruno+19}.\nHereafter we focus on the 14 out of 21 galaxies that have emission lines with double-Gaussian profiles.\nWe characterize outflows with our double-Gaussian profile fitting results, focusing on three properties:\n1) the flux ratio between the broad and narrow components ($BNR$), \n2) the velocity shift of the broad component relative to the narrow component ($\\Delta v$), \nand 3) the maximum outflow velocity ($v_\\mathrm{max}$).\nWe adopt the line-of-sight velocity in the direction of $\\Delta v$ as the maximum outflow velocity \\citep[][]{Veilleux05}:\n\\begin{equation}\n v_\\mathrm{max}=|\\Delta v| + \\mathrm{FWHM_b}\/2,\n \\label{eq:vmax}\n\\end{equation}\nwhere $\\mathrm{FWHM_b}$ is the intrinsic FWHM of the broad component. \nThe values of $\\mathrm{FWHM_b}$ are calculated from the observed line widths as discussed in Section \\ref{fitting1d}.\nThe outflow properties are listed in Table \\ref{tab:outflow}.\n\nWe find that 3 (5) out of the 14 galaxies have $\\Delta v>0$ ($\\Delta v<0$) values at the $3\\sigma$ level, indicating that the broad components are redshifted (blueshifted) from the narrow components. \nThe other 6 galaxies have $\\Delta v$ consistent with zero within the $3\\sigma$ errors.\nIf one assumes outflows that are symmetric along the line of sight, the broad component is usually blueshifted, because the receding (redshifted) side of the outflow suffers stronger dust attenuation.\nThe redshifted broad components with $\\Delta v>0$ would therefore suggest that the respective galaxies may have an excess of outflowing gas receding from the observer.\n\nWe obtain $BNR$ values from the H$\\alpha$ and [{\\sc Oiii}] lines, which are referred to as $BNR_\\mathrm{H}$ and $BNR_\\mathrm{O}$, respectively.
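\n\nThe short sketch below shows how these three properties follow from the fitted double-Gaussian parameters, using $\\mathrm{FWHM}=2\\sqrt{2\\ln2}\\,\\sigma$ for a Gaussian; all input numbers are illustrative.\n\\begin{verbatim}\nimport numpy as np\n\nc = 2.998e5                     # speed of light [km\/s]\nmu_n, mu_b = 6562.8, 6563.1     # fitted centers [Angstrom] (mock)\nsig_b_int = 60.0                # intrinsic broad width [km\/s] (mock)\nflux_n, flux_b = 1.0, 0.3       # component fluxes (arbitrary units)\n\nbnr = flux_b \/ flux_n                             # flux ratio\ndv = c * (mu_b - mu_n) \/ mu_n                     # velocity shift\nfwhm_b = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sig_b_int\nv_max = abs(dv) + fwhm_b \/ 2.0                    # maximum velocity\n\\end{verbatim}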
\n\nThe top panel of Figure \\ref{fig:BN_mass_sfr} shows the ratio of $BNR_\\mathrm{H}\/BNR_\\mathrm{O}$.\nWe find no significant differences between $BNR_\\mathrm{O}$ and $BNR_\\mathrm{H}$, given the moderately large errors of $BNR$.\nThe results suggest that the broad components in the forbidden [{\\sc Oiii}] lines are as prominent as those in the H$\\alpha$ lines.\nBecause the [{\\sc Oiii}] emission cannot take place in dense environments, the broad components are unlikely to originate from rotating disks around massive objects, e.g., supermassive black holes (BHs; see Section \\ref{profile_discuss} for further discussion).\nThe bottom panels of Figure \\ref{fig:BN_mass_sfr} present $BNR_\\mathrm{H}$ with filled circles and $BNR_\\mathrm{O}$ with open circles.\nFor both the H$\\alpha$ and [{\\sc Oiii}] emission, we find a large scatter of $BNR$, from $\\sim0.01$ to $\\sim1$.\nSimilar to the results of previous studies,\nwe find no clear correlation between $BNR$ and stellar mass (SFR) for our galaxies,\nas shown in the bottom left (right) panel of Figure \\ref{fig:BN_mass_sfr}.\nHowever, the mean values of $BNR_\\mathrm{H}$ and $BNR_\\mathrm{O}$ (0.31 and 0.21, respectively) are smaller than the values calculated by \\cite{Freeman+19}, 0.76 and 0.74.\nThe difference in redshift may not be the reason for the different $BNR$ values, because we find no difference in $BNR$ between the $z\\sim0$ galaxies (\\citealt{Perrotta+21}, blue diamonds) and the $z\\sim2$ galaxies (\\citealt{Freeman+19}, green squares).\nIt is possible that the weak broad components in our galaxies originate from weak outflows with small mass outflow rates.\nWe will come back to this point in Section \\ref{eta}.\nAnother factor we need to consider is the close relation between $BNR$ and the SNR of the emission lines.\nWe show $BNR$ as a function of the emission line SNRs in Figure \\ref{fig:BN_SNR}.\nThe SNRs of our data are significantly higher than those of \\cite{Freeman+19}, which allows us to detect low $BNR$ values.\n\nWe also investigate the outflow velocities by obtaining $v_\\mathrm{max}$ values from the H$\\alpha$ and [{\\sc Oiii}] lines, which are referred to as $v_\\mathrm{max,H}$ and $v_\\mathrm{max,O}$, respectively. \nIn the bottom panels of Figure \\ref{fig:vmax_mass_sfr}, we present the results and find a positive dependence of $v_\\mathrm{max}$ on the stellar masses and SFRs. \nThe majority of our galaxies have $v_\\mathrm{max,H}\\sim60-120~\\mathrm{km~s^{-1}}$ and $v_\\mathrm{max,O}\\sim60-140~\\mathrm{km~s^{-1}}$.\nJ2253$+$1116 has relatively fast outflows with $v_\\mathrm{max}\\sim200~\\mathrm{km~s^{-1}}$ in both the H$\\alpha$ and [{\\sc Oiii}] lines. 
\nAnother interesting galaxy is J2310$-$0211, which has $v_\\mathrm{max}\\sim200~\\mathrm{km~s^{-1}}$ only in the [{\\sc Oiii}] line.\n\nSimilar to the $BNR$, we investigate the difference between the H$\\alpha$ and [{\\sc Oiii}] lines in tracing the outflow velocity, as shown in the top left panel of Figure \\ref{fig:vmax_mass_sfr}.\nFour galaxies show $v_\\mathrm{max}$ values of the [{\\sc Oiii}] lines larger than those of the H$\\alpha$ lines.\nIn particular, J2253$+$1116 has $v_\\mathrm{max,O}\\sim2v_\\mathrm{max,H}$.\nPrevious studies also report differences between $v_\\mathrm{max,O}$ and $v_\\mathrm{max,H}$.\n\\cite{Cicone+16} find outflow velocities inferred from\nthe [{\\sc Oiii}] lines to be higher than those from the H$\\alpha$ lines for local star-forming galaxies.\n\n\\section{Discussion}\n\\label{discussion}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\linewidth]{vmax_vcir.pdf}\n \\caption{Maximum outflow velocity ($v_\\mathrm{max}$) as a function of circular velocity ($v_\\mathrm{cir}$). The symbols are the same as in Figure \\ref{fig:vmax_mass_sfr}. The black line is the $v_\\mathrm{max}-v_\\mathrm{cir}$ relation given by \\cite{Muratov+15}. We shrink the red circles to avoid a busy plot.}\n \\label{fig:outflow_vcir}\n\\end{figure}\n\n\\subsection{Scaling Relations of Outflow Velocity and Comparison with Previous Studies}\n\\label{discussion_vmax}\n\nOutflow velocities are known to scale with host galaxy properties, including stellar masses and SFRs \\citep[e.g.,][]{Martin+05,Chisholm+15}.\nThis study explores the scaling relations down to $M_*\\sim10^5~M_\\odot$.\nIn the bottom panels of Figure \\ref{fig:vmax_mass_sfr}, \nwe show the correlations of $v_\\mathrm{max}$ with $M_*$ and SFR for our galaxies and the galaxies investigated by previous studies.\nThe same definition of $v_\\mathrm{max}$ is adopted by \\cite{Bruno+19} and this study.\nFor the galaxies taken from \\cite{Perrotta+21}, we calculate $v_\\mathrm{max}$ with Equation (\\ref{eq:vmax}).\n\\cite{Heckman+16} and \\cite{Sugahara+17,Sugahara+19} estimate $v_\\mathrm{max}$ with interstellar UV absorption lines.\nWe note that outflow velocities measured from emission lines and absorption lines are hard to compare without knowing the precise velocity distribution of the outflowing gas.\nWe therefore mainly focus on the comparison with previous results that are derived from emission lines.\nMost galaxies investigated by previous studies and this study have high SFRs, as suggested by the SFR-$M_*$ distribution in Figure \\ref{fig:sfr_m}.\n\nIn the bottom left panel of Figure \\ref{fig:vmax_mass_sfr}, \na positive correlation between the outflow velocity and the stellar mass is identified for our galaxies, with Spearman rank correlation coefficients of $\\rho\\sim0.60$ ($\\sim0.60$) and $p\\sim0.028$ ($\\sim0.026$) for the H$\\alpha$ ([{\\sc Oiii}]) lines.\nDown to $M_*\\sim10^5~M_\\odot$, we obtain outflow velocities as small as $\\sim40~\\mathrm{km~s^{-1}}$.\nIn comparison, massive galaxies of $M_*\\sim10^{11}~M_\\odot$ can host outflows as fast as $\\sim1000~\\mathrm{km~s^{-1}}$.\nThe positive $v_\\mathrm{max}-M_*$ correlation for our sample is generally consistent with the findings of previous studies that use emission lines.\nOur galaxies also show a positive $v_\\mathrm{max}-$SFR correlation, as presented in the bottom right panel of Figure \\ref{fig:vmax_mass_sfr}.\nThe Spearman rank correlation coefficients are $\\rho\\sim0.71$ ($\\sim0.78$) and $p\\sim0.006$ ($\\sim0.002$) for the H$\\alpha$ ([{\\sc Oiii}]) lines.
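\n\nFor reference, such rank correlations can be computed as in the sketch below; the arrays are placeholders for the measured quantities, not our data.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.stats import spearmanr\n\n# placeholders for the measured log(SFR) and v_max values\nlog_sfr = np.array([-3.5, -2.8, -2.1, -1.5, -0.9, -0.3])\nv_max = np.array([45.0, 60.0, 75.0, 90.0, 110.0, 150.0])\n\nrho, p_value = spearmanr(log_sfr, v_max)\n\\end{verbatim}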
\n\n\\cite{Bruno+19} and \\cite{Perrotta+21} report no clear $v_\\mathrm{max}-$SFR correlation.\nCombining all the previous data, it is likely that a positive $v_\\mathrm{max}-$SFR correlation exists over a large SFR range.\nInterestingly, our galaxies are offset from this correlation.\nOur galaxies have $v_\\mathrm{max}$ smaller than those obtained by \\cite{Bruno+19} by at least a factor of three, despite having the same SFRs.\nWe thus conclude that the scaling relation between $v_\\mathrm{max}$ and SFR has a large scatter due to different galaxy properties, including stellar masses.\nThis result implies that SFR cannot be treated as the fundamental parameter determining the outflow velocity.\n\nMany studies have investigated the dependence of $v_\\mathrm{max}$ on the circular velocity of the DM halo.\n\\cite{Sugahara+19} discuss the key role of $v_\\mathrm{cir}$ in determining both the gravitational potential and the star-forming activity over different redshifts.\nThey conclude that $v_\\mathrm{cir}$ is probably the fundamental parameter that determines the outflow velocity.\nTheir results also agree well with the simulation results obtained by \\cite{Muratov+15}. \nIn Figure \\ref{fig:outflow_vcir}, we show the dependence of $v_\\mathrm{max}$ on $v_\\mathrm{cir}$ and find a positive correlation for our sample and the galaxies taken from \\cite{Bruno+19}, \\cite{Sugahara+17,Sugahara+19}, and \\cite{Heckman+16}.\nFor the data taken from \\cite{Bruno+19} and \\cite{Heckman+16}, we convert the stellar masses to $v_\\mathrm{cir}$ following the procedures taken by \\cite{Sugahara+19}.\nThe solid line represents the simulation results of \\cite{Muratov+15}.\nAlthough \\cite{Muratov+15} use the 95th percentile outflow velocity at 25\\% of the virial radius, the difference arising from the definitions of the outflow velocity is within the scatter of the observational results. \nIn this study, we extend the $v_\\mathrm{max}-v_\\mathrm{cir}$ relation in \\cite{Sugahara+19} and \\cite{Muratov+15} down to $v_\\mathrm{cir}\\sim10~\\mathrm{km~s^{-1}}$.\n\n\\subsection{Gas Escaping}\nOutflows are considered an important mechanism to eject gas into the IGM, especially for low-mass galaxies that have shallow gravitational potentials.\nWe estimate the escape velocity ($v_\\mathrm{esc}$) for our galaxies to examine whether the outflows are fast enough to escape from the gravitational potentials.\nAssuming that the DM halo is an isothermal sphere truncated at a radius $r_\\mathrm{max}$,\n$v_\\mathrm{esc}$ at the radius $r$ can be estimated from the circular velocity \\citep{Heckman+00}:\n\\begin{equation}\n v_\\mathrm{esc}=v_\\mathrm{cir}\\{2[1+\\ln(r_\\mathrm{max}\/r)]\\}^{\\frac{1}{2}}.\n\\end{equation}\nThe value of $r_\\mathrm{max}$ is approximated by the halo radius calculated in Section \\ref{galaxy_property}.\nNote that the drag force is ignored in this estimation.\nThe escape velocity is not sensitive to the value of $r_\\mathrm{max}\/r$ in the range of $10-100$ \\citep{Veilleux05}.\nWe thus adopt $v_\\mathrm{esc}=3v_\\mathrm{cir}$, and obtain $v_\\mathrm{esc}\\sim60-130~\\mathrm{km~s^{-1}}$ for our galaxies, as listed in Table \\ref{tab:outflow}.
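\n\nAs a quick check of this choice, evaluating the bracketed factor at the two ends of that range gives $\\{2[1+\\ln(10)]\\}^{1\/2}\\simeq2.6$ and $\\{2[1+\\ln(100)]\\}^{1\/2}\\simeq3.3$, so $v_\\mathrm{esc}=3v_\\mathrm{cir}$ is a representative intermediate value.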
\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\linewidth]{vesc_mass.pdf}\n \\caption{$v_\\mathrm{max}\/v_\\mathrm{esc}$ as a function of stellar mass. The symbols are the same as in Figure \\ref{fig:vmax_mass_sfr}.} \n \\label{fig:vesc_mass}\n\\end{figure}\n\nIn Figure \\ref{fig:vesc_mass}, we show $v_\\mathrm{max}\/v_\\mathrm{esc}$ for our galaxies, where the 1$\\sigma$ errors are mainly from the scatter of the $M_*-M_\\mathrm{h}$ relation (Section \\ref{galaxy_property}).\nMost of our galaxies have $v_\\mathrm{max}\/v_\\mathrm{esc}$ slightly larger than unity.\nHowever, we can neither confirm nor rule out the possibility that the outflowing gas escapes, due to the large uncertainties.\nIn particular, J2253$+$1116 has $v_\\mathrm{max}\/v_\\mathrm{esc}\\sim3$, which is significantly larger than unity.\nGiven that the H$\\alpha$ line in J2253$+$1116 has relatively weak broad components, deeper observations are needed to reveal the fate of the outflowing gas in this galaxy.\n\n\\cite{Bruno+19} and \\cite{Arribas+14} also show $v_\\mathrm{max}\/v_\\mathrm{esc}$ as a function of stellar mass and find that galaxies with smaller stellar masses are more likely to have $v_\\mathrm{max}\/v_\\mathrm{esc}>1$.\nComparing our results with previous emission-line results in Figure \\ref{fig:vesc_mass},\nwe do not find an increasing trend of $v_\\mathrm{max}\/v_\\mathrm{esc}$ below $\\sim 10^8~M_\\odot$.\nOn the other hand, \\cite{Heckman+16} find galaxies with extreme outflows that are fast enough to escape over a large range of stellar masses.\nIn Figure \\ref{fig:vesc_mass}, most galaxies taken from \\cite{Heckman+16} have $v_\\mathrm{max}\/v_\\mathrm{esc}>1$.\nOne galaxy with $M_*\\sim 10^7~M_\\odot$, Haro 3, shows $v_\\mathrm{max}\/v_\\mathrm{esc}\\sim3$, similar to J2253$+$1116.\nDespite the differences in the $v_\\mathrm{max}$ definitions, there likely exist outflows with velocities fast enough to escape in the low-mass regime.\n\nWe note that ten out of the sixteen galaxies show diffuse tails on the HSC\/SDSS images.\nThis tadpole geometry is commonly seen in local low-mass star-forming galaxies \\citep[][]{Morales-Luis+11}.\nThe diffuse tails can be ten times as massive as the respective galaxies, as suggested by \\cite{Isobe+21}, who estimate the stellar masses of the diffuse tails of J1142$-$0038 and J2314$+$0154.\nIf our galaxies reside in the same DM halos as the diffuse tails, the halo mass and escape velocity may be underestimated.\nRe-calculating the escape velocities with stellar masses ten times the original values, we find that the escape velocities would be $\\approx30$ percent larger than the values shown in Table \\ref{tab:outflow}.\nWith these escape velocities, the majority of our galaxies would have $v_\\mathrm{max}\/v_\\mathrm{esc}\\lesssim1$.\nThe conclusion is unchanged given the large uncertainties of $v_\\mathrm{esc}$.\nFurther investigations of the dynamics of low-mass galaxies will be helpful to better constrain the escape of gas.
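\n\nThe $\\approx30$ percent increase quoted above follows directly from the adopted scaling relations: $v_\\mathrm{esc}\\propto v_\\mathrm{cir}\\propto M_\\mathrm{h}^{1\/3}$ (Section \\ref{galaxy_property}) and $M_\\mathrm{h}\\propto M_*^{1\/3.1}$ (Equation \\ref{eq:Mh}), so a tenfold increase in stellar mass raises the escape velocity by a factor of $10^{1\/(3\\times3.1)}\\simeq1.28$.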
\n\n\\begin{figure*}[t]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{eta_vcir.pdf}\n\\caption{Mass-loading factor ($\\eta$) and its dependence on $v_\\mathrm{cir}$. \nWe estimate $\\eta$ with fiducial parameters and extreme parameters, shown as the open and filled circles, respectively (see text). The solid line in the bottom-left corner indicates the typical error of $v_\\mathrm{cir}$ for our galaxies. We include the predictions of two `zoom' simulations \\citep[][]{Muratov+15,Christensen+18} and the prescription used at injection by IllustrisTNG \\citep[e.g.,][]{Pillepich+18,Nelson+19}. The other symbols are the same as those in Figures \\ref{fig:BN_mass_sfr} and \\ref{fig:vmax_mass_sfr}.}\n\\label{fig:eta_vcir}\n\\end{figure*}\n\n\\subsection{Mass Outflow and Implications for Feedback}\n\\label{eta}\nThe ratio of the outflow mass-loss rate ($\\dot{M}_\\mathrm{out}$) to the star-formation rate\nis referred to as the mass-loading factor ($\\eta$):\n\\begin{equation}\n\\label{eq:eta}\n \\eta = \\frac{\\dot{M}_\\mathrm{out}}{\\mathrm{SFR}}.\n\\end{equation}\nThis quantity is widely used in observational \\citep[e.g.,][]{Heckman+15,Freeman+19,Sugahara+17} and simulation \\citep[e.g.,][]{Muratov+15} studies.\nFollowing the procedures taken by \\cite{Newman+12} and \\cite{Freeman+19}, we calculate the mass of the outflowing gas using the equation:\n\\begin{equation}\n\\label{eq:Mout}\n M_\\mathrm{out}=\\frac{1.36m_\\mathrm{p}\n L_\\mathrm{H\\alpha,b}}{\\gamma_\\mathrm{H\\alpha}n_\\mathrm{e}},\n\\end{equation}\nwhere $L_\\mathrm{H\\alpha,b}$ is the extinction-corrected H$\\alpha$ luminosity of the broad component.\nThe quantity $m_\\mathrm{p}$ is the atomic mass of hydrogen and $n_\\mathrm{e}$ is the electron density in the outflowing gas.\nFor the volume emissivity of H$\\alpha$ ($\\gamma_\\mathrm{H\\alpha}$), we use $\\gamma_\\mathrm{H\\alpha}=3.56\\times10^{-25}~\\mathrm{erg~cm^{-3}~s^{-1}}$, assuming case B recombination and an electron temperature of $T_\\mathrm{e}=10^4~\\mathrm{K}$.\n\nWe assume that the outflowing gas moves in a solid angle $\\Omega$ with a radially constant velocity $v_\\mathrm{max}$ and calculate the mass outflow rate with:\n\\begin{equation}\n\\label{eq:dotMout}\n \\dot{M}_\\mathrm{out}=M_\\mathrm{out}\\frac{v_\\mathrm{max}}{r_\\mathrm{out}},\n\\end{equation}\nwhere $r_\\mathrm{out}$ is the radial extent of the outflow.\nCombining Equations (\\ref{eq:eta})-(\\ref{eq:dotMout}) and adopting Equation (\\ref{eq:sfr}) for the SFR, \nwe obtain the following expression for $\\eta$:\n\\begin{align}\n \\eta\n &\\approx0.75\\left(\\frac{v_\\mathrm{max}}{n_\\mathrm{e}r_\\mathrm{out}}\\right)\n \\left(\\frac{L_\\mathrm{H\\alpha,b}}{L_\\mathrm{H\\alpha,b}+L_\\mathrm{H\\alpha,n}}\\right) \\\\\n &\\approx0.75\\left(\\frac{v_\\mathrm{max}}{n_\\mathrm{e}r_\\mathrm{out}}\\right)\n \\left(\\frac{BNR}{1+BNR}\\right),\n\\label{eq:eta_final}\n\\end{align}\nwhere $L_\\mathrm{H\\alpha,n}$ is the extinction-corrected H$\\alpha$ luminosity of the narrow component.\nThe units of $v_\\mathrm{max}$, $n_\\mathrm{e}$, and $r_\\mathrm{out}$ are $\\mathrm{km~s^{-1}}$, $\\mathrm{cm^{-3}}$, and $\\mathrm{kpc}$, respectively.\nIn Equation (\\ref{eq:eta_final}), assuming that the narrow and broad components suffer the same dust attenuation, we approximate the ratio of the extinction-corrected luminosities using the flux ratio ($BNR$) of the H$\\alpha$ line.\nWe note that we do not multiply Equation (\\ref{eq:eta_final}) by a factor of two as in \\cite{Newman+12}, because we do not find that the red wings of our broad components are significantly obscured.\nSimilar to previous studies, our measurements of $\\eta$ are subject to the large uncertainties given by the estimation of $n_\\mathrm{e}$ and $r_\\mathrm{out}$.\nTherefore we present two estimations of $\\eta$: fiducial values calculated with the $r_\\mathrm{out}$ and $n_\\mathrm{e}$ that our low-mass galaxies likely have,\nand maximum values ($\\eta_\\mathrm{max}$) calculated with small $r_\\mathrm{out}$ and $n_\\mathrm{e}$ that produce relatively large $\\eta$ in extreme cases.
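\n\nNumerically, Equation (\\ref{eq:eta_final}) reduces to a one-line function, as in the sketch below; the input values are illustrative, not measurements.\n\\begin{verbatim}\ndef mass_loading(v_max, n_e, r_out, bnr):\n    # v_max [km\/s], n_e [cm^-3], r_out [kpc],\n    # bnr = broad-to-narrow flux ratio of H-alpha\n    return 0.75 * (v_max \/ (n_e * r_out)) * bnr \/ (1.0 + bnr)\n\n# e.g., v_max = 100 km\/s, n_e = 100 cm^-3,\n# r_out = 0.15 kpc, BNR = 0.3:\nprint(mass_loading(100.0, 100.0, 0.15, 0.3))   # ~1.2\n\\end{verbatim}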
\n\nThe electron density of the outflowing gas can be estimated if emission line doublets like [{\\sc Oii}]$\\lambda\\lambda3727,3729$ and [{\\sc Sii}]$\\lambda\\lambda6716,6731$ are decomposed into narrow and broad components, where the broad components originate from outflows.\n\\cite{Arribas+14} and \\cite{Bruno+19} obtain median values of\n$n_\\mathrm{e}\\sim315~\\mathrm{cm}^{-3}$ and $302~\\mathrm{cm}^{-3}$, respectively, for the broad components of the [{\\sc Sii}] doublets.\nBoth \\cite{Arribas+14} and \\cite{Bruno+19} find that the electron densities measured from the broad components are larger than those from the narrow components.\nPrevious studies propose two models of outflowing gas \\citep[e.g.,][]{Genzel+11,Newman+12}.\nOne model, in which the outflowing gas forms compact clouds and retains a relatively high local electron density, can explain the observational results of \\cite{Arribas+14} and \\cite{Bruno+19}.\nIn the other model, which assumes that the outflowing gas fills the entire volume of the outflow cone, the electron density decreases with radius.\nIn this study, our spectra do not have sufficient SNRs to measure the line ratios of the broad components in the [{\\sc Oii}] and [{\\sc Sii}] doublets, \nwhile the systemic values of the electron density can be measured from the [{\\sc Oii}] doublets with single-Gaussian fitting.\nWe therefore adopt the systemic values as the fiducial values of $n_\\mathrm{e}$ in Equation (\\ref{eq:eta_final}), given that we do not know which model better explains the outflowing gas in our galaxies.\nFor galaxies without measurements of the electron density, we assume a typical value of $100~\\mathrm{cm^{-3}}$.\nWe use $n_\\mathrm{e}\\sim10~\\mathrm{cm^{-3}}$ to evaluate $\\eta_\\mathrm{max}$.\n\nThe value of $r_\\mathrm{out}$ could be the main source of uncertainty in the estimation of $\\eta$ using the emission line method.\nThe observational measurement of $r_\\mathrm{out}$ is difficult and sensitive to the assumption of the outflow geometry.\n\\cite{McQuinn+19} measure the extent of the H$\\alpha$ emission in local dwarf galaxies with narrow-band images.\nThe results of \\cite{McQuinn+19} show that the diffuse ionized gas reaches projected distances of $\\sim1-5~\\mathrm{kpc}$, which are $\\sim2-3$ times the radius of the H{\\sc i} disk.\nThey estimate $\\eta$ based on the amount of gas that passes through a thin shell whose thickness is equivalent to our definition of $r_\\mathrm{out}$. 
\n\\cite{Newman+12} utilize the integral field spectra of a $z\\sim2$ clumpy star-forming galaxy and approximate $r_\\mathrm{out}$ with the radius of the star-forming clumps.\nIn their case, the mass of the outflowing gas is calculated from the flux of the broad emission line components.\nMotivated by the method of \\cite{Newman+12}, we adopt the effective radius ($r_\\mathrm{e}$) as our estimate of $r_\\mathrm{out}$.\nFor 15 galaxies selected by the EMPRESS project, \\cite{Isobe+21} obtain $r_\\mathrm{e}\\sim40-2500~\\mathrm{pc}$ and find a size-mass relation similar to that of the $z\\sim0$ star-forming galaxies in \\cite{Shibuya+15}.\nWe adopt the median (minimum) value of $r_\\mathrm{e}\\sim150~\\mathrm{pc}$ ($40~\\mathrm{pc}$) as the fiducial (extreme) \n$r_\\mathrm{out}$ value for the 14 galaxies in this paper.\nThe values of $r_\\mathrm{e}$ measured by \\cite{Isobe+21} have a standard deviation of $\\sim600~\\mathrm{pc}$, which is larger than the range of $r_\\mathrm{e}$ given by the size-mass relation between $M_*\\sim10^5-10^7~M_\\odot$.\nTherefore, we cannot improve the $r_\\mathrm{e}$ estimations by applying the size-mass relation.\nBecause we discuss the uncertainties originating from $r_\\mathrm{out}$ by calculating $\\eta$ and $\\eta_\\mathrm{max}$, we apply the same $r_\\mathrm{out}$ value for all galaxies.\n\nEquation (\\ref{eq:dotMout}) is applicable to expanding shells or spheres, as pointed out by \\cite{Olmo-Garcia+17}, if an appropriate extent of the outflows is chosen.\nIt is also used in studies that assume biconical outflows.\nHere we test different geometries of outflows by evaluating $r_\\mathrm{out}$ assuming a sphere filled with ionized gas that has a density of $1.36m_\\mathrm{p}n_\\mathrm{e}$:\n\\begin{equation}\n r_\\mathrm{out,sphere}=\\left(\\frac{3}{4\\pi}\\frac{M_\\mathrm{out}}{1.36m_\\mathrm{p}n_\\mathrm{e}}\\right)^{1\/3}.\n \\label{eq:rout_sphere}\n\\end{equation}\nWe obtain $r_\\mathrm{out,sphere}\\sim10-80~\\mathrm{pc}$ by combining Equations (\\ref{eq:Mout}) and (\\ref{eq:rout_sphere}).\nOur choices of the fiducial and extreme values of $r_\\mathrm{out}$ are of the same order as $r_\\mathrm{out,sphere}$.\nIf the outflowing gas moves as an expanding shell, the relevant extent would be smaller as the radius of the shell increases ($r_\\mathrm{out} < r_\\mathrm{out,sphere}$).\nHowever, if the outflowing gas has a clumpy structure or a non-spherical geometry (e.g., biconical outflows), an extent larger than $r_\\mathrm{out,sphere}$ needs to be assumed ($r_\\mathrm{out} > r_\\mathrm{out,sphere}$).\nWe show in Section \\ref{profile_discuss} that the broad components likely originate from a spherical or conical structure filled by ionized gas instead of a thin shell.\nTherefore, the outflows in our galaxies are probably well characterized by our choice of $r_\\mathrm{out}$.\n\nFinally, we obtain $\\eta$ ($\\eta_\\mathrm{max}$) values of $0.21-2.18$ ($4.78-131.39$) with a median value of $0.43$ ($17.09$).\nWe present the results in Table \\ref{tab:outflow}, and show $\\eta$ and $\\eta_\\mathrm{max}$ as a function of $v_\\mathrm{cir}$ in Figure \\ref{fig:eta_vcir}.\nGalaxies taken from \\cite{Heckman+15}, \\cite{Freeman+19}, \\cite{McQuinn+19}, \\cite{Sugahara+17}, and \\cite{Swinbank+19} are compared in Figure \\ref{fig:eta_vcir}.\nWe estimate the value of $v_\\mathrm{cir}$ from the halo mass as explained in Section \\ref{discussion_vmax}.\n\\cite{McQuinn+19} investigate $\\eta$ as a function of a $v_\\mathrm{cir}$ that is defined for the H{\\sc i} region. 
\nTo make a consistent comparison, here we use the stellar masses of their galaxies to evaluate $v_\\mathrm{cir}$ values that correspond to the circular velocity at the virial radius of the dark matter halo.\nOur estimates of $\\eta$ are comparable with those of high-mass galaxies with $v_\\mathrm{cir}\\sim50-300~\\mathrm{km~s^{-1}}$ within one order of magnitude.\n\nThe black dashed lines in Figure \\ref{fig:eta_vcir} are the predictions of the Feedback In Realistic Environments (FIRE) simulations conducted by \\cite{Muratov+15}, who find a boundary of $v_\\mathrm{cir}=60~\\mathrm{km~s^{-1}}$ below which $\\eta$ increases steeply towards small $v_\\mathrm{cir}$ ($\\eta\\propto v_\\mathrm{cir}^{-3.2}$).\nThe steep slope below $v_\\mathrm{cir}=60~\\mathrm{km~s^{-1}}$ originates from the simulation results of low-mass galaxies.\nAmong the $z\\sim0$ simulations in \\cite{Muratov+15}, the least massive halo, with $v_\\mathrm{cir}\\sim33~\\mathrm{km~s^{-1}}$, has $\\eta\\sim45$.\n\\cite{Christensen+18} also explore low-mass galaxies with `zoom' simulations and predict an $\\eta\\propto v_\\mathrm{cir}^{-2.2}$ relation, as shown with the black solid line in Figure \\ref{fig:eta_vcir}.\nThe slope of $-2.2$ given by \\cite{Christensen+18} is close to the slope of $-2$ for energy-driven outflows but flatter than the one predicted by \\cite{Muratov+15}.\nAlthough our galaxies reside in the regime of energy-driven outflows, \nour estimates of $\\eta$ are smaller than the predictions of \\cite{Muratov+15} and \\cite{Christensen+18}.\nFor $\\eta_\\mathrm{max}$, which is calculated with extreme parameters, our results are consistent with the predictions of \\cite{Christensen+18} but still smaller than those of \\cite{Muratov+15}.\nOne exception is J1253$-$0312, which has $\\eta_\\mathrm{max}\\sim131.39$, suggesting strong feedback.\nHowever, we emphasize that the choice of $r_\\mathrm{out}\\sim40~\\mathrm{pc}$ and $n_\\mathrm{e}\\sim10~\\mathrm{cm^{-3}}$ is extreme, especially for J1253$-$0312, which has the largest stellar mass among our galaxies.\nIn other words, even if the outflows have $v_\\mathrm{max}\\sim150~\\mathrm{km~s^{-1}}$ and $BNR\\sim2$, we still need to assume a concentrated geometry and a low electron density to produce a large $\\eta$.\nOn the other hand, the fiducial $\\eta$ values are significantly below the simulation predictions, possibly suggestive of the feedback being weak in our galaxies.\nAs discussed in Section \\ref{results}, the detection of weak outflows is possible because our spectra have sufficiently high SNRs.\nWe also compare our results with the prescription used by the Next Generation Illustris simulations \\citep[IllustrisTNG,][]{Pillepich+18}.\nIn the low-mass regime, IllustrisTNG assumes a minimum wind velocity at injection and adopts a maximum mass-loading factor that is larger than our estimates.\nHowever, the outflows in the low-mass regime are still unexplored by large-volume simulations like TNG50 \\citep[][]{Nelson+19}.\nTherefore, adopting a prescription with weak feedback ($\\eta\\sim1$) is potentially interesting for future simulations that resolve low-mass galaxies.\n\nWe note that simulations and our method trace different phases of the outflowing gas in the calculation of $\\eta$.\nThe H$\\alpha$ and [{\\sc Oiii}] emission originates from the warm gas.\nTherefore the $\\eta$ values derived from emission line methods are usually treated as lower limits \\citep{Newman+12}, because a significant fraction of the outflowing mass can be in the cold atomic or molecular phase, which is not detected 
with the emission line methods.\nSimulations including \\cite{Muratov+15}, on the other hand, usually include gas in all phases.\nFor our galaxies, active star formation may produce strong ionizing radiation that ionizes most of the diffuse gas.\nWith MUSE observations, \\cite{Bik+18} find a largely extended H$\\alpha$ halo around a low-mass galaxy, ESO 338, which has properties similar to our galaxies,\nsuggesting that most of the gas is in the warm ionized phase due to the strong ionizing radiation from the center.\nFor another similar galaxy, Haro 11, observed with MUSE, \n\\cite{Menacho+19} find that the neutral atomic gas mass is one-third of the ionized gas mass, while the molecular gas mass could be comparable to that of the ionized gas.\nIt is likely that the mass contribution from cold-phase outflows is relatively small compared to that from warm-phase outflows in local compact star-forming galaxies.\nIf we adopt the mass ratio in \\cite{Menacho+19},\nincluding the cold-phase outflows may only increase our $\\eta$ estimates by a factor of two, which does not impact our conclusions.\n\n\\subsection{Other Interpretations of the Broad Components}\n\\label{profile_discuss}\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=\\linewidth]{BPT.pdf}\n\\caption{BPT diagnostic diagram. The small red dots are derived from the narrow components of the emission lines, while the red circles are derived from the broad components. In particular, the [{\\sc Nii}]$\\lambda6583$ lines are not fitted with the double-Gaussian profile; therefore, the total fluxes are used (see text). The solid and dashed lines are taken from \\cite{Kewley+01} and \\cite{Kauffmann+03}, respectively.}\n\\label{fig:BPT}\n\\end{figure}\n\nProfile fitting with multiple components is commonly used to probe outflows.\nHowever, alternative interpretations of the secondary components can be proposed \\citep[e.g.,][]{Olmo-Garcia+17, Freeman+19}. 
\nHere we discuss alternative possibilities: rotation of galaxies, accretion disks around massive objects (e.g., BHs), gas inflows, and expanding bubbles.\n\nIn this study, we successfully decompose the emission lines with double-Gaussian profiles, showing not only the existence of two components, but also a clear difference between the line widths of the two components, with $\\sigma_\\mathrm{b}-\\sigma_\\mathrm{n} \\sim20-100~\\mathrm{km~s^{-1}}$.\nAccording to the Tully-Fisher relation \\citep[][]{TF}, galaxies with $M_*\\sim10^6-10^7~M_\\odot$ are characterized by rotation velocities of $\\lesssim30~\\mathrm{km~s^{-1}}$, while we obtain $\\mathrm{FWHM_b}\\gtrsim100~\\mathrm{km~s^{-1}}$.\nEven if we assume that gas contributes 99\\% of the baryonic mass and refer to the baryonic Tully-Fisher relation \\citep[][]{McGaugh+00}, the typical rotation velocity for a galaxy with a baryonic disk mass of $\\sim10^9~M_\\odot$ is $\\lesssim100~\\mathrm{km~s^{-1}}$.\nTherefore, the broad components most likely represent gas components with relatively high velocities (e.g., outflows) that are distinct from the rotation of the galaxies.\n\nAn accretion disk around a massive object can produce broad emission lines.\nIn our galaxies, the fluxes from the broad components are of the same order of magnitude as those from the narrow components ($BNR\\sim0.3$).\nThe broad-line region (BLR) around a supermassive BH can emit strong emission lines that are broadened by the rotational motion of the accretion disk, with $\\mathrm{FWHM}\\gtrsim1000~\\mathrm{km~s^{-1}}$.\nHowever, the FWHMs of the broad components in this study are significantly smaller than those found in BLRs.\nBLRs and accretion disks are also known to have high densities, which results in the absence of forbidden lines.\nAs we pointed out in Section \\ref{results}, the prominent broad components in the [{\\sc Oiii}] lines can only be produced in ionized gas with low density.\nTherefore, the broad components found in this study do not characterize BLRs.\n\nWe further check the excitation states of the gas traced by the different components using the well-known BPT diagnostic diagram.\nWe conduct double-Gaussian fitting of the H$\\beta$ lines following the procedures in Section \\ref{fitting1d}.\nFor 13 out of the 14 galaxies, we successfully obtain best-fit double-Gaussian profiles and derive the line fluxes for the narrow and broad components.\nFor J1418$+$2102, whose H$\\beta$ line is too faint, we use single-Gaussian fitting and derive the line fluxes of the narrow and broad components adopting the $BNR$ value of the H$\\alpha$ line.\nThe [{\\sc Nii}]$\\lambda6583$ lines are too faint to be reliably fitted with the double-Gaussian profiles; therefore, we only use the line fluxes derived from the best-fit single-Gaussian profiles.\nIn other words, we derive upper limits on [{\\sc Nii}]\/H$\\alpha$, assuming that a single component contributes all the [{\\sc Nii}] emission.\nIn Figure \\ref{fig:BPT}, we show the BPT diagram for the narrow and broad components.\nExcept for J1253$-$0312, the narrow components of 13 out of the 14 galaxies fall in the star-forming galaxy region.\nFor the broad components, 12 out of the 14 galaxies lie in the star-forming galaxy region or close to the classification line \\citep[][]{Kewley+01}.\nThe other two objects, J2253$+$1116\nand J1401$-$0040, \nare offset from the star-forming region, indicating possible excitation from active galactic nuclei (AGNs) or fast radiative shocks.\nWe note again that 
the [{\\sc Nii}]\/H$\\alpha$ of the broad components are uppler limits, that can be lowered by a factor of $\\sim3$ if we adopt a typical value of $BNR=0.3$.\nTherefore, there is no clear evidence showing the connection between the broad components and AGNs.\nSimilarly, the possibility of inflows can be ruled out given that the gas is highly ionized ([{\\sc Oiii}]\/H$\\beta > 1$), while the inflow is expected to be composed of cold gas.\n\nAn expanding bubble is produced when sufficient energy or momentum is injected into the ISM. \nWhen the expanding bubble reaches the disk scale, the bubble breaks up and possibly create outflows with a conical geometry.\nBy observations, most ionized outflowing gas moves as a thin shell or along the walls of conical structures \\citep[see][and references therein]{Veilleux05}.\nIn the case of the expanding bubble, an optically thin, symmetric thin shell, would produce emission lines with a top-hat profile \\citep[e.g.,][]{Cid+94}.\n\\cite{Olmo-Garcia+17} observe ``double-horn'' structures in the line profiles that are pairs of secondary components on both sides of the emission lines.\nThey discuss different origins of emission line profiles and conclude that the ``double-horn'' structure likely emerges from a shell with dust extinction.\nIn our case, the broad wings of an emission line cannot be explained by the ``double-horn'' structure but one broad component with a Gaussian shape.\nThe inner part of a Gaussian shape represents the gas with small line-of-sight velocities.\nTherefore, the Gaussian shape indicates that the gas has a range of velocities, which favors the explanation that outflowing gas is moving at different radii instead of on one thin shell.\nTherefore, a thin shell structure is unlikely for the broad components we detect, while either a spherical or conical morphology can be adopted in our analysis.\n\n\\section{Summary}\n\\label{summary}\nWe analyze the profiles of the H$\\alpha$ and [{\\sc Oiii}] emission lines in 21 nearby low-mass galaxies\nwith masses of $M_*\\sim10^4-10^7~M_\\odot$.\nSpectra of thirteen galaxies were newly taken with Magellan\/MagE.\nWe find evidence for warm ionized gas outflows in 14 out of the 21 galaxies, and study the outflow properties in the low-mass regime.\nOur findings are summarized below.\n\n\\begin{enumerate}\n \\item For our galaxies, we do not find a clear correlation between $BNR$ and $M_*$ (SFR). However, we obtain mean values of $BNR\\sim0.31$ and 0.21 for the H$\\alpha$ and [{\\sc Oiii}] lines, respectively, that are generally smaller than those of massive galaxies.\n The relatively high SNR of our spectra may allow us to detect the smaller $BNR$ values characteristic of weaker outflows.\n \\item We find strong evidence of smaller $v_\\mathrm{max}$ towards lower $M_*$ and SFR.\n Combing our sample with existing data from previous studies, we confirm a positive correlation between $v_\\mathrm{max}$ and SFR, but find a large scatter of $v_\\mathrm{max}$ for a given SFR.\n We also explore the $v_\\mathrm{max}-v_\\mathrm{cir}$ relation below $v_\\mathrm{cir}\\sim30~\\mathrm{km~s^{-1}}$, showing consistent results with previous observations for massive galaxies \\citep[e.g.,][]{Sugahara+19} and the predictions from the simulations by \\cite{Muratov+15}.\n \\item We investigate whether the outflowing gas is fast enough to escape from the galaxies by estimating the escape velocities. 
An expanding bubble is produced when sufficient energy or momentum is injected into the ISM.
When the expanding bubble reaches the disk scale, it breaks up and possibly creates outflows with a conical geometry.
Observationally, most ionized outflowing gas moves as a thin shell or along the walls of conical structures \citep[see][and references therein]{Veilleux05}.
In the case of an expanding bubble, an optically thin, symmetric thin shell would produce emission lines with a top-hat profile \citep[e.g.,][]{Cid+94}: for a shell expanding at a single speed $v_\mathrm{s}$, the line-of-sight velocity $v_\mathrm{s}\cos\theta$ is uniformly distributed over the shell surface, yielding a flat profile between $\pm v_\mathrm{s}$.
\cite{Olmo-Garcia+17} observe ``double-horn'' structures in the line profiles, i.e., pairs of secondary components on both sides of the line centers.
They discuss different origins of the emission line profiles and conclude that the ``double-horn'' structure likely emerges from a shell with dust extinction.
In our case, the broad wings of the emission lines are not explained by the ``double-horn'' structure but by a single broad component with a Gaussian shape.
The inner part of the Gaussian profile traces gas with small line-of-sight velocities, so the Gaussian shape indicates that the gas spans a range of velocities, favoring the interpretation that the outflowing gas moves at different radii rather than in one thin shell.
Therefore, a thin-shell structure is unlikely for the broad components we detect, while either a spherical or conical morphology can be adopted in our analysis.

\section{Summary}
\label{summary}
We analyze the profiles of the H$\alpha$ and [{\sc Oiii}] emission lines in 21 nearby low-mass galaxies with masses of $M_*\sim10^4-10^7~M_\odot$.
Spectra of thirteen galaxies were newly taken with Magellan/MagE.
We find evidence for warm ionized gas outflows in 14 out of the 21 galaxies, and study the outflow properties in the low-mass regime.
Our findings are summarized below.

\begin{enumerate}
 \item For our galaxies, we do not find a clear correlation between $BNR$ and $M_*$ (SFR). However, we obtain mean values of $BNR\sim0.31$ and 0.21 for the H$\alpha$ and [{\sc Oiii}] lines, respectively, which are generally smaller than those of massive galaxies.
 The relatively high SNR of our spectra may allow us to detect the smaller $BNR$ values characteristic of weaker outflows.
 \item We find strong evidence of smaller $v_\mathrm{max}$ towards lower $M_*$ and SFR.
 Combining our sample with existing data from previous studies, we confirm a positive correlation between $v_\mathrm{max}$ and SFR, but find a large scatter of $v_\mathrm{max}$ at a given SFR.
 We also explore the $v_\mathrm{max}-v_\mathrm{cir}$ relation below $v_\mathrm{cir}\sim30~\mathrm{km~s^{-1}}$, finding results consistent with previous observations of massive galaxies \citep[e.g.,][]{Sugahara+19} and with the predictions from the simulations of \cite{Muratov+15}.
 \item We investigate whether the outflowing gas is fast enough to escape from the galaxies by estimating the escape velocities.
 However, for most of our galaxies we cannot conclude whether the outflowing gas escapes, owing to the large uncertainties in the $v_\mathrm{esc}$ estimates.
 \item We evaluate fiducial values of the mass-loading factor $\eta$ with the estimated $n_\mathrm{e}$ and $r_\mathrm{out}$.
 We also provide maximum $\eta$ values derived with extreme parameters.
 Our results point to relatively weak stellar feedback in our galaxies.
 Even with extreme parameters, the $\eta$ values are generally smaller than those predicted by simulations of low-mass galaxies.
\end{enumerate}

\begin{acknowledgements}
We thank the anonymous referee for constructive comments and suggestions.
We are grateful to Kazuhiro Shimasaku, Ricardo O. Amor\'{i}n, Xinfeng Xu, Alejandro Lumbreras-Calle, and Dylan Nelson for their useful comments and discussions.

This work is supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, as well as by KAKENHI Grant-in-Aid for Scientific Research (A) (20H00180 and 21H04467) through the Japan Society for the Promotion of Science (JSPS).
This work is also supported by the joint research program of the Institute for Cosmic Ray Research (ICRR), the University of Tokyo.

This paper includes data gathered with the 6.5\,m Magellan Telescopes located at Las Campanas Observatory, Chile. We are grateful to the observatory personnel for help with the observations.
\end{acknowledgements}