diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzeei" "b/data_all_eng_slimpj/shuffled/split2/finalzeei" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzeei" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe rise of deep neural networks for learning general-purpose representations in an end-to-end manner has led to numerous breakthroughs in different areas of artificial intelligence, including object recognition~\\cite{ren2015faster}, complex gameplay~\\cite{silver2017mastering}, and language modeling~\\cite{devlin2018bert}. These advancements have brought their widespread adoption to other domains, particularly for problems involving time-series or sensory inputs, which, crucially, depended on ad-hoc feature extraction with shallow learning techniques. The efficiency of deep learning algorithms substantially improved the state-of-the-art in these fields~\\cite{supratak2017deepsleepnet, martinez2013learning, hannun2019cardiologist, saeed2018model, radu2018multimodal}; while largely dismissing manual feature design strategies. However, this success is due to supervised learning models, which require a huge amount of well-curated data to solve the desired task.\nCompared to computer vision or other realms, semantically-labeled sensory data (such as electrooculography, heart rate variability, and inertial signals) is much more difficult to acquire, owing to: privacy issues, complicated experimental set-ups and the prerequisite of expert-level knowledge for data labeling.\n\nDue to these limitations, unsupervised learning holds an enormous potential to leverage a vast amount of unlabeled data produced via omnipresent sensing systems. For instance, an average smartphone or smartwatch is equipped with a multitude of sensors, such as IMUs, microphone, proximity, ambient light and heart rate monitors producing a wealth of data that can be utilized for solving challenging problems and can enable novel use cases through harnessing the power of machine learning. Past efforts to learn from sensory (or time-series) data were mainly limited to the use of autoencoding based approaches~\\cite{li2014unsupervised, bhattacharya2014using, martinez2013learning, plotz2011feature} that can learn to compress the data, but fail to learn semantically useful features~\\cite{oord2018representation}. More recently, generative adversarial networks (GANs) have been explored to some extent for unsupervised learning from sensory inputs~\\cite{yao2018sensegan}, but GANs are infamous for being notoriously unstable during training and suffer from mode collapse, making it a great challenge to use them in practice, for now~\\cite{thanh2019improving}. It might also be excessive to use GANs as a pre-training strategy when synthesizing data is not a core focus, as the number of parameters in the network that need to be learned increases extensively. Moreover, transfer learning has been utilized to a limited extent for tackling the issue of unavailability of massive well-annotated sensory datasets for training deep models. It has been explored to improve the performance in a supervised setting through joint-training on labeled source and target datasets~\\cite{chen2019cross, gjoreski2019cross}. In these cases, the features transferred from supervised models may not be general and are mostly tied to a specific task; therefore, they might not generalize well to other tasks of interest, compared to methods that learn task-agnostic features, in an unsupervised manner. 
Likewise, existing methods did not focus on learning in low-data regimes nor from unlabeled input which is available in much larger quantities (see section~\\ref{sec:rw} for related work). In this paper, we show that the emerging paradigm of self-supervised learning offers an efficient way for learning semantically-meaningful representations from sensory data that can be used for solving a diverse set of downstream tasks~\\footnote{downstream or end tasks referred to the tasks of interest e.g., sleep stage scoring.}. The self-supervised approaches exploit the inherent structure of the input to derive a supervisory signal. The idea is to define a pretext task, for which annotations can be acquired without human involvement (directly from the raw data) and can be solved using some form of unsupervised learning techniques. This intriguing property essentially renders a deep sensing model, that is developed based on the earlier described principle of \"self-learning\" in nature: a system that can be trained continuously on massive, readily-accessible data in an unsupervised manner~\\cite{de1994learning, schmidhuber1990making}. However, in this case, the challenge lies in designing complex auxiliary tasks that can force the deep neural network to capture meaningful features of the input, while avoiding shortcuts~\\cite{geirhos2020shortcut} (i.e., simple unintended ways to trivially solve the auxiliary task without learning anything useful that generalizes beyond the auxiliary task). \n\nOver the last few years, given the large potential of self-supervised learning in exploiting unlabeled data, multiple surrogate or auxiliary tasks have been proposed for feature learning to ultimately solve complex problems in different domains~\\cite{oord2018representation, gidaris2018unsupervised, devlin2018bert}. Particularly in the vision community, a surge has been seen in developing self-supervised methods, owing to the availability of a wide variety of large scale datasets and well-established deep network architectures. In this realm, the most straightforward strategy is the reconstruction of contextual information based on partially observable input~\\cite{doersch2015unsupervised}. The prediction of color values for grayscale images~\\cite{zhang2016colorful} and the detection of the angle of rotation~\\cite{gidaris2018unsupervised} are recent attempts found to be useful in learning visual representations. Similarly, the temporal synchronization of multimodal data is exploited to learn audio-visual representations~\\cite{korbar2018cooperative}. Likewise, contrastive learning is another highly promising technique that aim to capture shared information among multiple views of the data~\\cite{tian2019contrastive, oord2018representation}, including successes in robotic imitation learning~\\cite{Sermanet2017TCN}. Thus, we conjecture that self-supervision is fruitful for automatically extracting generic latent embeddings from sensory data that can improve much-needed label efficiency, as acquiring well-labeled sensory data is extremely challenging in the real world. Furthermore, due to its annotation-free nature, this learning strategy is not only effective and scalable, but can also be directly leveraged in a federated learning environment~\\cite{bonawitz2019towards}, to learn from widely distributed and decentralized data without aggregating it in a centralized repository, which can preserve users' privacy~\\cite{mcmahan2017communication}. 
\n\nIn this paper, we present a principled framework for self-supervised learning of multisensor representations from unlabeled data. Our objective is to have numerous tasks, with each perhaps imposing a distinct prior on to the learning process, resulting in varying quality features that may differ across sensing datasets. Specifically, as proxy tasks and modalities could be of more or less relevance to the downstream task's performance, it is essential to explore and compare several pretext tasks so as to discover the ones with better generalization properties. The broad aim is to have many auxiliary tasks in a user's toolbox such that, either experimentally or based on prior knowledge, a relevant task can be selected for training deep models. Particularly, the objective is to have proxy tasks that enable learning of representations invariant to several input deformations that commonly arise in the temporal data, such as sensor noise and sampling-rate disparities, or that can be used jointly in a multi-task learning setting. To this end, we develop eight novel auxiliary tasks that intrinsically obtain supervision from the unlabeled input signals to learn general-purpose features with a temporal convolutional network, such that the pre-trained model generalizes well to the end tasks.\n\nOur approach comprises of pre-training a network through self-supervision with unlabeled data so that it captures high-level semantics and can be used either as a feature extractor\\footnote{i.e. leveraging representations from intermediate layers of the deep neural network} or utilized as initialization for making successive tasks of interest easier to solve with few labeled data. To develop the auxiliary tasks, we take advantage of the synchronized multisensor (or multimodal) data as it belongs to the same underlying phenomena and we exploit it to create proxy tasks that can capture broadly useful features. Specifically, it can substantially help in learning powerful representations of each modality, and ultimately learn more abstract concepts in a joint-embedding space. Thus, we use a multi-stream neural network architecture to solve proxy tasks so that it can learn modality-specific features with a distinct encoder per modality and subsequently learn a shared embedding space with a modality-agnostic encoder. The fundamental structure of our framework is illustrated in Figure~\\ref{fig:overview}. We adopt a small model architecture in this work to highlight a) effectiveness of self-supervised tasks (i.e. improvement is not due to complex architecture) and b) potential of deployment on resource-constrained devices for training and inference.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=10.5cm]{Figures\/overview.pdf}\n\\caption{Illustration of our \\textit{Sense and Learn} representation learning framework. A deep neural network is pre-trained with self-supervision using input modalities from large unlabeled sensory data, such as inertial measurements (or electroencephalogram, heart rate, and channel state information). The learned network can then be utilized as a feature-extractor or initialization for rapidly solving downstream tasks of interest with few labeled data.}\n\\label{fig:overview}\n\\end{figure}\n\nWe demonstrate that a relatively straightforward suite of auxiliary tasks results in meaningful features for diverse problems, including: activity recognition, stress detection, sleep stage scoring, and WiFi sensing. 
First, we show that the self-supervised representations are highly competitive with those learned with a fully-supervised model, by training a linear classifier on top of the frozen network, as it is a standard evaluation protocol for assessing the quality of self-supervised tasks~\\cite{tagliasacchi2019self, oord2018representation}. Second, we explore fine-tuning the last layer of the encoder to gain further improvements over training from scratch. Third, we investigate the effectiveness of the learned representations in low-data regime\\footnote{or in a semi-supervised setting}. Using our pre-trained network as initialization, we achieve a significant performance boost with as little as $5$ to $10$ labeled instances per class, which clearly highlights the value of self-supervised learning. Lastly, we evaluate the transferability of the features across related datasets\/tasks to show the generality of our method in an unsupervised transfer learning setting.\n\nIn summary, our main contributions are as follows:\n\n\\begin{itemize}\n\\item We propose \\textit{Sense and Learn}, a generalized self-supervised learning framework comprising several surrogate tasks to extract semantic structural concepts inherent to diverse types of sensory or time-series data. \n\n\\item We extensively evaluate our self-supervised tasks on various problems (e.g. sleep stage scoring, activity recognition, stress detection, and WiFi sensing) and learning settings (i.e. transfer and semi-supervised) to significantly improve the data efficiency or lower the requirement of collecting large-scale labeled datasets.\n\n\\item Our results demonstrate that self-supervision provides an effective initialization of the network (and powerful embeddings) that improves performance significantly with minimal fine-tuning, and works well in a low-data regime, which is of high importance for real-world use cases.\n\n\\item The developed auxiliary tasks require an equivalent computational cost as standard supervised learning and has fewer parameters than autoencoding methods, but provide better generalization with greatly improved sample efficiency. \n\n\\item We utilize a small network architecture to show the capability of self-supervision and its prospective usage on resource-constrained devices. In particular, the majority of our proposed tasks are designed around the principle that self-supervised data generation should not be computationally expensive; thus, it can be readily used for on-device learning. \n\n\\item We briefly discuss how to use our framework in practice, as well as its limitations.\n\n\\end{itemize}\n\n\\noindent In the following sections, we present the relevant literature to our work in Section~\\ref{sec:rw}. Our self-supervised methodology is described in Section~\\ref{methodology}. The experimental results are discussed in Section~\\ref{experiments}, real-world impact and limitations in Section~\\ref{sec:impact}, and conclusions and directions for future work are presented in Section~\\ref{sec:conclusion}.\n\n\\section{Related Work}\n\\label{sec:rw}\n\\subsection{Unsupervised and Self-Supervised Learning}\nDeep learning has revolutionized several areas of research with an intuitive property of learning discriminative features directly from the raw data and eliminating the need of manual feature extraction\\cite{radu2018multimodal, hammerla2016deep, martinez2013learning, hannun2019cardiologist}. 
The success of deep learning is largely attributed to the massive labeled datasets apart from other factors, such as availability of computational power and better neural architectures. Obtaining semantically labeled data required for training supervised models is an expensive and time-consuming process. Therefore, unsupervised learning has seen growing interest in the last couple of years as unlabeled data is available in huge quantities, especially on decentralized edge devices. A classical illustration of unsupervised feature learning is the autoencoder, which learns to map an input onto a lower-dimensional embedding so that reconstructing the original input from such a space incurs a lower error. However, the decoding-based strategies deplete the network capacity through attending to low-level details instead of capturing semantically meaningful features. Therefore, the focus of recent studies is on providing an alternative form of supervision, where annotations can be intrinsically extracted from the data itself. \n\nThe field of self-supervised learning exploits the natural supervision available within the input signal to define a surrogate task that can force the network to learn broadly-usable representations. To that end, numerous pretext tasks are proposed in different domains.~\\cite{noroozi2016unsupervised} established the task of predicting the relative position of randomly cropped image patches.~\\cite{larsson2016learning, zhang2016colorful} inferred color values for grayscale pictures,~\\cite{Sermanet2017TCN} utilize time-contrastive loss as a way to minimize the embedding distances of the same scene recorded from multiple viewpoints, while maximizing the distances for those captured at different timesteps. A similar technique is proposed in~\\cite{tian2019contrastive} to learn from multiple views of the data.~\\cite{tagliasacchi2019self} defined self-supervised tasks for audio, inspired by word$2$vec~\\cite{mikolov2013distributed}.~\\cite{korbar2018cooperative} showed that video representations could be learned by exploiting audio-visual temporal synchronization. Time-contrastive learning is suggested in~\\cite{hyvarinen2016unsupervised} for extracting features from time-series, in an unsupervised manner, through predicting segment IDs. Likewise, autoregressive modeling has been combined with predictive coding to learn compact latent embeddings for various domains~\\cite{oord2018representation}. For natural language modeling, self-supervised objectives, such as predicting masked tokens from surrounding ones and predicting the next sentence, turn out to be powerful methods for learning generic representations of text~\\cite{devlin2018bert}. Similarly, for learning inertial sensory features, ~\\cite{saeed2019multi} presented a signal transformation recognition task. Lately, self-supervised learning has been shown to be beneficial for semi-supervised learning, through jointly optimizing supervised and self-supervised losses~\\cite{zhai2019s}. In this work, we develop several self-supervised tasks for learning representations from a wide range of sensory data such as electroencephalography, electrodermal activity and inertial signals. We show that pre-training with self-supervision using unlabeled data helps in learning highly generalizable features that improve data efficiency and transfer well to a related set of tasks. 
\n\n\\subsection{Learning Sensing Models with Machine Learning}\nAn understanding of human contexts, activities and states is an important area of research in ambient computing and pervasive sensing due to the fact that it can play a central role in several application domains including: health, wellness, assistance, monitoring, and human computer interaction. To achieve the earlier described objective, the data is collected from users through wearables or other sensors, under varied environments, for learning a task-specific model. For instance, prior work on activity recognition explored various methodologies with inertial sensors embedded in smartphones or smartwatches~\\cite{himberg2001time, stisen2015smart, hammerla2016deep}. Emotional state recognition is widely achieved with physiological signals, such as skin conductance and heart rate variability~\\cite{saeed2018model, martinez2013learning, picard2001toward}. Similarly in sleep analysis, the electrical brain activity is captured with an electroencephalogram to classify sleep into different stages~\\cite{supratak2017deepsleepnet, lajnef2015learning, gunecs2010efficient}. Importantly, for device-free sensing systems, channel state information from WiFi is utilized to infer participants' activities in a non-intrusive manner~\\cite{yousefi2017survey}. Earlier developed methods for these problems heavily relied on manual feature extraction from sensory data to infer a user's activity, emotional state or sleep score and these methods were limited depending on the domain knowledge available to extract discriminative features. With the tremendous progress in end-to-end supervised learning via deep networks, it has been shown that the features can be learned directly from data instead of hand-crafting them based on domain knowledge \\cite{radu2018multimodal, hammerla2016deep, martinez2013learning, hannun2019cardiologist}.\n\nConsequently, 1D convolutional and recurrent neural networks have become standard techniques for achieving state-of-the-art performance on problems involving temporal data~\\cite{hannun2019cardiologist, saeed2018model, supratak2017deepsleepnet, hammerla2016deep}. Nevertheless, these approaches have heavily relied on the availability of large-annotated datasets, which are notoriously difficult to acquire in the real-world. Due to this, in recent years, few work explored unsupervised feature learning to exploit the availability of vast amounts of unlabeled data, while mainly focusing on input reconstruction via autoencoders and related variants, such as restricted Boltzmann machines and sparse coding ~\\cite{li2014unsupervised, bhattacharya2014using, martinez2013learning, plotz2011feature}. There has also been work on utilizing generative adversarial networks for modeling data distributions without supervision~\\cite{luo2018multivariate, esteban2017real} and in semi-supervised learning for sensing models~\\cite{yao2018sensegan}. Furthermore, transfer learning has also been leveraged to improve neural network generalization in domains where large labeled data is difficult to obtain, but focused on transfer from supervised models~\\cite{chen2019cross, gjoreski2019cross}. More recently,~\\cite{saeed2019multi} proposed a self-supervised task of signal transformation recognition for feature learning that achieved significant improvement in activity recognition over autoencoding, though focusing only on unimodal input and the activity recognition problem. 
As opposed to earlier works, we present a general framework for learning multimodal representations from a diverse set of sensors in a self-supervised way and, compared to~\\cite{saeed2019multi}, we simplify the problem formulation of transformation recognition (see section~\\ref{sec:sslt}); our novel proxy tasks work on par and can be used when transforming the input is not desirable or when it may lead to unintended outcomes (e.g. ECG signals). Furthermore, pre-training models with our auxiliary tasks significantly lowers the amount of labeled data required to achieve better generalization and opens up the possibility of on-device learning from decentralized unlabeled data.\n\n\\section{Methodology}\n\\label{methodology}\nIn this section, we begin with a motivation and an overview of our self-supervised framework for learning sensory representations. Next, we provide a formalization of the auxiliary tasks and discuss an end-to-end approach for multi-modal learning. Subsequently, we describe the network architecture design, its implementation, and the optimization procedure. \n\n\\subsection{Motivation and Overview}\nThe key insight behind our technique is that self-supervised pre-training acts as a prior that can give rise to representations of varying quality, which encode underlying signal semantics at different levels that may or may not be useful for a downstream task of interest. Therefore, it is vital to employ multiple auxiliary tasks to discover the suitable inductive bias necessary to obtain optimal performance on the desired end-task. This intuition is important considering that time-series (or sensory) data shows peculiar characteristics (e.g. signal-to-noise ratio, amplitude variances, and sampling rates) depending on the nature of the phenomena being recorded. Likewise, there should be an array of tasks to choose from depending on the learning problem and device type (e.g. available resources, sensor types, etc.). Importantly, we want the self-supervised model to learn generic features rather than focusing on low-level input details, as a pre-trained network has to provide a strong initialization for learning with limited labeled data and generalize to other related tasks. Thus, instead of relying on a single auxiliary task, we learn latent representations with a broad set of tasks based on different objective functions. \n\nWe propose a generalized framework comprising eight pretext tasks that can be used to learn features from heterogeneous multisensor data. To achieve this, we utilize a temporal convolutional network (TCN) $F_\\theta$ with a distinct encoder $e_m$ for each input modality $I_m$ and a shared encoder $e_s$ for multi-modal representation extraction. We choose to use a TCN as an embedding network for sequence modeling due to its effectiveness in capturing long-term dependencies and its parallelizability at a significantly lower cost than recurrent networks~\\cite{bai2018empirical}. For every learning problem, we consider unlabeled multisensor (or multimodal) data $\\mathcal{D} = \\{(\\textbf{u}_1, \\textbf{v}_1), (\\textbf{u}_2, \\textbf{v}_2), \\ldots, (\\textbf{u}_N, \\textbf{v}_N)\\}$ consisting of $N$ examples. Here, $\\textbf{u}_n$ and $\\textbf{v}_n$ denote the samples of different modalities (e.g. accelerometer and gyroscope) of the $n^{th}$ example. The defined pretext tasks exploit the inherent properties of the data to obtain supervision from the input pairs, without requiring any manual annotation, in order to optimize a task-specific loss function. 
Specifically, each surrogate task employs its own loss function $L_t$ for learning $F_\\theta$ differently. For instance, an input reconstruction task employs mean-square error loss, while another task, concerning the detection of odd segments within a signal, uses negative log-likelihood; we discuss these in detail in the subsequent section. At a high-level, we utilize these objectives as necessary proxies for sensory representation learning without focusing on how well the model performs on them but on an end-task. After pre-training, $F_\\theta$ captures a joint embedding space of the inputs, and thus it can be utilized either as a feature extractor or as initialization for rapidly learning to solve other problems. Finally, it is important to note that proxy tasks cannot be applied arbitrarily to any type of input and tasks like blend detection can only be used when modalities are related to each other, e.g. as accelerometer and gyroscope.\n\n\\subsection{Self-Supervised Tasks}\n\\label{sec:sslt}\nIn order to achieve self-supervised learning of disentangled semantic representations from unannotated sensory data, we develop eight surrogate tasks for the network. To solve these tasks, we assume $\\textbf{u} = \\{u_1, u_2, \\ldots, u_l\\}$ and $\\textbf{v} = \\{v_1, v_2, \\ldots, v_l\\}$ denote multi-channel signals of length $l$ from different modalities (e.g. accelerometer and gyroscope). Let $z_u = e_u(\\textbf{u})$ and $z_v = e_v(\\textbf{v})$ be the low-dimensional embeddings computed from the corresponding input signals with respective encoders. Likewise, $z_s = e_s(e_u(\\textbf{u}), e_v(\\textbf{v}))$ provides a shared embedding of the inputs through fusion that may capture more abstract features. A high-level illustration of the self-supervised learning procedure is shown in Figure~\\ref{fig:overview}. A self-supervised data generation module produces annotated input from unlabeled multisensor data for learning $F_\\theta$. We utilize this formulation to define the self-supervised objectives in the following subsections.\n\n\n\\subsection*{Blend Detection}\nTo take advantage of the multisensor signals, we define an auxiliary task of detecting input blending as a multi-class classification problem. Given an unlabeled input batch $B = \\cup_{i=1}^{|B|} \\{(\\textbf{u}, \\textbf{v})\\}_i$, we generate three types of instances. First, we keep the original samples as belonging to a class $c_a$. Second, we perform a weighted blending of an instance from one modality with another randomly selected example from a different modality as class $c_b$. Third and last, the instances of the same modalities are blended to have instances for a class $c_c$. The blending weight $\\mu$ is sampled from a uniform distribution, i.e. $\\mu \\sim \\mathcal{U}(0, 1)$. The network is trained with a negative log-likelihood loss $\\mathcal{L}_{NL}$ for learning to differentiate between examples of blended and clean classes ($y_k$) on the entire training set $\\mathcal{D}_{train}$:\n\n\n\\begin{align*} \n\\mathcal{L}_{NL} = - \\frac{1}{K} \\sum_{k=1}^{K} y_{k} \\times \\log(F_\\theta(\\textbf{u},\\textbf{v}))\n\\end{align*}\n\n\n\\subsection*{Fusion Magnitude Prediction}\nWe create a variant of the earlier defined task that uses a similar data generation strategy but differs fundamentally in terms of the objective it optimizes. Here, we task the network with predicting the magnitude $\\mu$, which defines the blending (or weighting) factor of the signals. 
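Both this task and the preceding blend-detection task rely on the same blending-based data generation. A minimal sketch of this generation step is given below; it assumes NumPy arrays of shape (batch, time, channels) per modality, and the helper name and sampling details are our own simplification rather than the exact implementation:

\\begin{verbatim}
import numpy as np

def generate_blend_examples(u, v, rng=None):
    """Create self-labeled examples for blend detection (class targets) and
    fusion magnitude prediction (mu targets) from an unlabeled two-modality
    batch (u, v), each of shape (batch, time, channels)."""
    rng = rng if rng is not None else np.random.default_rng()
    b = u.shape[0]
    perm = rng.permutation(b)                    # random partner examples
    mu = rng.uniform(0.0, 1.0, size=(b, 1, 1))   # blending weights

    # Class c_a: original (clean) samples, magnitude target 0.
    # Class c_b: blend with a random example from the other modality.
    u_cross = u * (1.0 - mu) + v[perm] * mu
    v_cross = v * (1.0 - mu) + u[perm] * mu
    # Class c_c: blend with a random example of the same modality.
    u_same = u * (1.0 - mu) + u[perm] * mu
    v_same = v * (1.0 - mu) + v[perm] * mu

    x_u = np.concatenate([u, u_cross, u_same])
    x_v = np.concatenate([v, v_cross, v_same])
    y_class = np.repeat([0, 1, 2], b)            # blend-detection labels
    y_mu = np.concatenate([np.zeros(b), mu.ravel(), mu.ravel()])
    return x_u, x_v, y_class, y_mu
\\end{verbatim}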
We assign $\\mu = 0$ to the clean examples, while assigning weight $\\mu \\sim \\mathcal{U}(0, 1)$ to the blended examples, as earlier. In this case, a natural choice is to adopt mean-square loss as learning objective. However, we experimentally discovered that utilizing binary cross-entropy with a logistic function in the network's output layer results in better generalization; thus the network is trained to minimize the following loss $\\mathcal{L}_{BCE}$ for each input modality: \n\n\\begin{align*} \n\\mathcal{L}_{BCE} = -(y \\times \\log(F_\\theta(\\textbf{u}, \\textbf{v})) + (1 - y) \\times \\log(1 - F_\\theta(\\textbf{u}, \\textbf{v})))\n\\end{align*}\n\n\n\\subsection*{Feature Prediction from Masked Window}\nIt is observed that networks which try to reconstruct every bit of the input waste capacity on modeling low-level details~\\cite{oord2018representation}. Instead, in this auxiliary task we ask the network to approximate summary statistics of a masked temporal segment within a signal. To generate the data, we randomly sample the segment length $s_l \\sim \\mathcal{U}(n_{low}, n_{high})$ and starting point $s_p \\sim \\mathcal{U}(0, l - s_l)$. From the selected subsequence, we extract $8$ basic features: \\texttt{mean}, \\texttt{standard deviation}, \\texttt{maximum}, \\texttt{minimum}, \\texttt{median}, \\texttt{kurtosis}, \\texttt{skewness}, \\texttt{number of peaks}; and then mask the segment with zeros. The multi-head network is trained with Huber loss $\\mathcal{L}_{HL}$ to predict statistics of a missing sequence as:\n\n\n\\begin{align*} \n\\mathcal{L}_{HL}=\\begin{cases}\n\t\\frac{1}{2} \\times o^2, & \\text{if $|o| \\leq \\delta$}\\\\\n \\delta \\times (|o| - \\frac{\\delta}{2}), & \\text{otherwise $|o| > \\delta$} \n \\end{cases}, \\text{where $o = F_\\theta(\\textbf{u}, \\textbf{v}) - y$} \n\\end{align*}\n\n\n\\subsection*{Transformation Recognition}\nThe signal transformation recognition is presented in~\\cite{saeed2019multi} as an auxiliary task, where it is posed as a set of binary classification problems solved with a multi-task network to determine whether a signal is a transformed version or not. Here, we simplify the problem formulation and treat the task as multi-class classification, to learn a network that can directly recognize the applied transformation on an input from one out of $K$ classes. The benefits of our formulation are that it does not require specifying weights for task-specific losses and the network can be efficiently optimized with categorical cross-entropy objective $\\mathcal{L}_{NL}$. Another key difference is that we address the problem of learning from multimodal data as opposed to a unimodal signal. To produce task-specific data, we generate transformed versions of each instance utilizing eight transformation functions: \\texttt{permutation} , \\texttt{channel shuffle}, \\texttt{timewarp}, \\texttt{scale}, \\texttt{noise}, \\texttt{rotation}, \\texttt{flip}, \\texttt{negation}), and an identity operation while assigning the function type as the corresponding class. During network training, we feed a batch of data consisting of examples for all the classes (inclusive of originals) and optimize a separate loss function for each input signal. \n\n\\subsection*{Temporal Shift Prediction}\nThis conceptually straight-forward task consists of estimating the number of steps by which the samples are circularly-shifted in their temporal dimension. We pose this problem such that it can be treated either as a classification or as a regression task. 
We define a range of shift intervals, depending on the input resolution. For instance, in the activity recognition task, the considered ranges are: $[(0, 5), (6, 10), (11, 20), (21, 50), (51, 100), (101, 200), (201, 300)]$. For producing shifted inputs, we first select a pair at random from the defined ranges, and second we sample a shifting factor within the defined boundary of the selected range. Last, we temporally shift the values of an input segment with the sampled factor. The network can be trained to predict either the range index (treating each entry as a class, with $7$ classes in total) or regress the factor. In our experiments, we notice that solving it as a regression problem results in better generalization on the end-task. Thus, the network is trained by minimizing mean-square error loss $\\mathcal{L}_{MSE}$ for each sensing modality: \n\n\n\\begin{align*} \n\\mathcal{L}_{MSE} = \\| F_\\theta(\\textbf{u},\\textbf{v}) - y \\|\n\\end{align*}\n\n\n\\subsection*{Modality Denoising}\nThis task's objective is to decompose a signal for obtaining a clean target through input reconstruction, i.e. isolating the mixed noise. It is similar in spirit to source separation in audio~\\cite{luo2019convtasnet, zeghidour2020wavesplit} and a denoising autoencoder~\\cite{vincent2008extracting}. The fundamental intuition here is that if the network is tasked to reconstruct the original input from corrupted or mixed modality signals, then it forces the network to identify core signal characteristics while learning usable representations in the process. In our case, instead of mixing arbitrary noise, we exploit the availability of multisensor data to generate instances that might be of sufficient difficulty for the network to denoise. Specifically, we utilize a \\texttt{weighted blending} operation $\\mathbf{u} \\times (1 - \\mu) + \\mathbf{v} \\times \\mu$ to mix instances of different modalities, i.e. we produce samples through combining the clean instances of accelerometer with gyroscope and vice versa while keeping the original samples as additional data points. The encoder-decoder network is trained end-to-end to minimize the mean-square error loss $\\mathcal{L}_{MSE}$ between ground truth and corrupted input pairs. \n\n\\subsection*{Odd Segment Recognition}\nThe goal of odd segment recognition is to identify the unrelated subsegment that does not belong to the input under consideration, where the rest of the sequences are in the correct order. The high-level idea behind the task is that if the network can spot artifacts in the signal, it should then also learn about useful input features. Similar ideas have been employed in video representation learning~\\cite{fernando2017self} to spot invalid frame detection in video. There are multiple ways to generate examples with odd subsegments; we approach it as an input consisting of an irregular segment of fixed length $s_o$ that is selected randomly from a different input modality. To generate proxy task examples, we begin with splitting an instance into equal-length sequences (e.g. of length $100$). Then, $2$ sequences from different modalities are randomly selected, that are either directly swapped or blended before applying a substitution operation. The index of the interchanged slices is used as the class, where valid inputs are assigned a distinct class. The network is asked to predict an index $id$ of the odd sequence in each input modality. 
For this task, we minimize a categorical cross-entropy loss $\\mathcal{L}_{NL}$ to train a multi-head network.\n\n\n\n\\subsection*{Metric Learning with Triplet Loss}\nAs we are interested in learning from multisensor data, we take advantage of multiple input modalities to formulate a metric learning objective. For this purpose, we utilize a symmetric triplet loss~\\cite{zhang2016tracking}, which encourages the representations of similar inputs but different modalities to be closer, while the representations of dissimilar inputs to be further apart. To optimize the specified loss, we need to generate input triplets consisting of an anchor, which can be an original instance, a positive sample that should be related (i.e. provides a complementary view of the input) to the anchor, and a negative sample which must be entirely different from the former pair. The loss then minimizes the distance between the anchor and the positive samples, while maximizing the distance of the negative samples from the anchor and the positive samples. For metric learning under this formulation, we generate the examples as follows: the actual instances are treated as anchors, and positive instances are generated by applying selected transformations at random~\\cite{saeed2019multi} on each anchor; whereas the negative instances are sampled from a different modality (i.e. for accelerometer, we treat samples from gyroscope as negatives). We then optimize $F_\\theta$ with triplet loss $\\mathcal{L}_{TL}$ to produce a smaller distance on associated samples and a more considerable distance on unrelated ones: \n\n\n\\begin{align*} \n\\mathcal{L}_{TL} = \\max [0, \\ D(z_{a}, z_{p}) - \\frac{1}{2} \\times (D(z_{a}, z_{n}) + D(z_{p}, z_{n})) + \\alpha],\n\\end{align*}\n\n\\noindent where $z_{a}$, $z_{p}$, $z_{n}$ are the embeddings of anchor, positive and negative samples respectively, $\\alpha$ represents the distance margin, and $D$ denotes squared-euclidean distance. 
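A compact sketch of this triplet construction and of the symmetric triplet loss is given below; the specific transformation used to create positives (additive noise) and the margin value are illustrative placeholders rather than the exact configuration:

\\begin{verbatim}
import numpy as np

def make_triplets(u, v, sigma=0.05, rng=None):
    """Anchor: original window u; positive: randomly transformed view of u
    (here, additive noise); negative: the time-aligned window v from the
    other modality."""
    rng = rng if rng is not None else np.random.default_rng()
    anchor = u
    positive = u + rng.normal(0.0, sigma, size=u.shape)
    negative = v
    return anchor, positive, negative

def symmetric_triplet_loss(z_a, z_p, z_n, alpha=1.0):
    """Symmetric triplet loss on embeddings of shape (batch, dim)."""
    def d(a, b):                                 # squared Euclidean distance
        return np.sum((a - b) ** 2, axis=1)
    loss = d(z_a, z_p) - 0.5 * (d(z_a, z_n) + d(z_p, z_n)) + alpha
    return np.mean(np.maximum(loss, 0.0))
\\end{verbatim}

\noindent During pre-training, the embeddings $z_{a}$, $z_{p}$, and $z_{n}$ of the anchor, positive, and negative windows are obtained from the corresponding encoders before evaluating the loss. 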
\n\n\\begin{algorithm}[htbp]\n\\caption{Sense and Learn}\n\\label{alg:sal}\n\\KwIn{Multisensory unlabeled data $\\mathcal{D}_{U}$ and labeled data $\\mathcal{D}_{L}$, auxiliary task $A_{t}$, number of iterations $I$, batch size $B$, L$2$ regularization rate $\\beta$}\n\\KwOut{Self-supervised pre-trained network $F$}\n\ninitialize a representation learning network $F$ with parameters $\\theta_{F}$\\\\\ninitialize a linear classifier $C$ with parameter $\\theta_{C}$ for a down-stream task\\\\\ninitialize self-labeling data generation procedure $G_{T}$ based on task $A_{t}$\\\\\ninitialize proxy-task and end-task loss functions $\\mathcal{L}_{T}$ and $\\mathcal{L}_{E}$, respectively\\\\\n\n\n\\For{iteration $i$ $\\in$ $\\{$ $1$, \\ $\\ldots$, \\ $I$ $\\}$ }\n{\n Randomly sample a mini-batch of $B$ instances from $\\mathcal{D}_{U}$ as $\\{x_1, x_2, \\ldots, x_b\\}$ \\\\\n Generate labeled (self-supervised) samples $\\{$$(x$, $y)_1$, $(x$, $y)_2$, $\\ldots$, and $(x$, $y)_b$$\\}$ with $G_{T}$\\\\ \n Update $\\theta_F$ by descending along its gradient \\\\\n $\\nabla_{\\theta_{F}} \\Big[\\frac{1}{b} \\sum_{i=1}^{B} \\mathcal{L}_{T}(F_{\\theta}(x_i), y_i) + \\beta \\left\\lVert \\theta\\right\\rVert^2 \\Big]$\n}\n\n\\For{iteration $i$ $\\in$ $\\{$ $1$, \\ $\\ldots$, \\ $I$ $\\}$ }\n{\n Randomly sample a mini-batch of $B$ labeled instances from $\\mathcal{D}_{L}$ as $\\{$$(x$, $y)_1$, $(x$, $y)_2$, $\\ldots$, and $(x$, $y)_b$$\\}$ \\\\\n Extract latent embeddings $\\textbf{z}$ from encoder $e$ within $F_{\\theta}(\\textbf{x})$\\\\\n Update $\\theta_C$ by descending along its gradient \\\\\n $\\nabla_{\\theta_{c}} \\Big[\\frac{1}{b} \\sum_{i=1}^{B} \\mathcal{L}_{E}(C_{\\theta}(z_i), y_i) \\Big]$\n}\n\nWe use Adam optimizer~\\cite{kingma2014adam} for computing gradient-based parameter updates in all the experiments.\n\\end{algorithm}\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=11cm]{Figures\/architecture.pdf}\n\\caption{A multistream neural network architecture for learning representations from multiple sensory inputs. A distinct stream (with an identical architecture) is used for each modality, as depicted on the right.}\n\\label{fig:architecture}\n\\end{figure}\n\n\\subsection{Network Architecture Design}\n\nWe implement the learning network $F_{\\theta}$ as a multi-stream temporal convolutional model (TCN). The part of the motivation to use TCN came from~\\cite{bai2018empirical} where it has been shown that convolutional networks perform remarkably well on sequence modeling tasks. Likewise, they have a low footprint for training and inference as compared to other methods and can be pruned easily to further compress the network~\\cite{molchanov2016pruning}. Our model consists of a distinct learning stream for each input to extract modality-specific features. The subnetworks share the same architecture, which is followed by a modality-agnostic network that fuses and learns a shared representation from the multimodal input. Jointly, we refer to these modules as encoder $e$, which is embedded within $F_{\\theta}$. Importantly, we add an extra block connected to $e$, which is discarded after self-supervised pre-training. The intuition behind this strategy is that the model's last layers capture features that are primarily task-specific and do not generalize well on the end-task of interest. Therefore, the additional layers allow the base encoder to capture more generic features, while solving the auxiliary tasks. 
Figure~\\ref{fig:architecture} illustrates the architecture design by precisely highlighting these main building blocks. The modality-specific encoder consists of three $1$D convolutional layers with $32$, $64$, and $96$ feature maps and a kernel size of $24$, $16$, and $8$, respectively. The max-pooling layer, with a pooling size of $4$ and a stride of $2$, is added after the initial convolutional layers. A dropout is used with a rate of $0.1$ at the end of the block. The shared encoder consists of a single convolutional layer with $128$ feature maps and a kernel size of $4$, which takes concatenated features as input. The supplementary layers in the pre-training block consist of a convolutional layer with $64$ feature maps and a kernel size of $4$ and a dense layer having $512$ hidden units. Importantly, a separate output layer is used for each input modality for all the surrogate tasks except `sensor blend,' which, based on its formulation, does not require this. Likewise, we use global pooling as the last layer in the representation learning network that aggregates discriminative features. L$2$ regularization with a rate of $0.0001$ is applied to the weights of all the layers to avoid overfitting. Moreover, we employ SELU as non-linearity except on the output layer; the network is trained with a learning rate of $0.0001$ for a maximum of $30$ epochs unless stated otherwise. \n\n\nWe utilize a fixed network architecture for all the considered tasks (both auxiliary and down-stream), the intuition behind this choice being threefold. Firstly, we want to minimize the architectural differences to discover the true potential of self-supervision, i.e. it can be used with minimal effort on architecture tuning to extract semantic representations across diverse datasets. Secondly, our aim is to show that self-supervision has a huge prospect to be utilized for on-device learning. Having a smaller architecture and given the annotation-free nature of the proposed approach opens several exciting avenues in learning and inference with devices having limited processing capabilities. Lastly, our multi-modal architectural specification provides the flexibility to incorporate other modalities effortlessly. Furthermore, we highlight that in this work our focus is on individual task proposal and evaluation, but the framework can be used for jointly solving proxy tasks (i.e. in multi-task learning setting) as they share the same architecture, but differ fundamentally in terms of the loss function being optimized.\n\nThe high-level description of the learning procedure is summarized in Algorithm~\\ref{alg:sal}. Given an unlabeled data $\\mathcal{D}_{U}$ and a specified auxiliary task $A_{t}$, we optimize $F_{\\theta}$ with task-specific data that is generated on-the-fly, as described in the preceding section. Once pre-training converges, the layers specific to self-supervised learning are discarded, and the encoder $e$ is saved. Then, the second round of training on a down-stream task of interest begins with labeled data $\\mathcal{D}_{L}$. Depending on the evaluation criteria, the following can be done: a) the network is either kept frozen and used as a generic feature extractor for learning a linear classifier\\footnote{logistic regression}, b) the modality-agnostic encoder $e_{s}$ is fine-tuned during learning an end-task, or c) the self-supervised network is used as initialization for rapidly solving the final-task, e.g., fine-tuning a model with little labeled data. 
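For concreteness, a minimal PyTorch sketch of the encoder configuration described above and of the usage modes just listed is given below; the choice of framework, the exact placement of pooling and dropout, padding, and module names are our own assumptions, and the per-task, per-modality output heads are omitted:

\\begin{verbatim}
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Modality-specific stream: three 1D convolutions with 32/64/96
    feature maps and kernel sizes 24/16/8, max-pooling and dropout."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=24), nn.SELU(),
            nn.MaxPool1d(kernel_size=4, stride=2),
            nn.Conv1d(32, 64, kernel_size=16), nn.SELU(),
            nn.Conv1d(64, 96, kernel_size=8), nn.SELU(),
            nn.Dropout(0.1))

    def forward(self, x):                  # x: (batch, channels, time)
        return self.net(x)

class SharedEncoder(nn.Module):
    """Modality-agnostic layer fusing concatenated per-modality features;
    global pooling aggregates the discriminative features."""
    def __init__(self, in_channels=2 * 96):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(in_channels, 128, kernel_size=4),
                                  nn.SELU())
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, z_u, z_v, pooled=True):
        z = self.conv(torch.cat([z_u, z_v], dim=1))
        return self.pool(z).squeeze(-1) if pooled else z

class PretrainBlock(nn.Module):
    """Supplementary layers used only while solving an auxiliary task and
    discarded afterwards (conv with 64 maps, kernel 4; dense with 512 units);
    task-specific output layers are attached on top of this block."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(128, 64, kernel_size=4), nn.SELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 512), nn.SELU())

    def forward(self, z):                  # z: un-pooled shared features
        return self.net(z)

# After pre-training, the PretrainBlock is dropped and the encoders are either
# (a) frozen, with a linear classifier trained on the pooled features,
# (b) partially fine-tuned (the shared encoder only), or
# (c) used as initialization and fine-tuned with the few available labels.
# The L2 regularization (rate 0.0001) can be approximated with weight decay,
# e.g. torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4).
\\end{verbatim}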
The \\textit{encoder} network shown in Figure~\\ref{fig:architecture} represents the module that is kept frozen, while depending on the learning setting the \\textit{shared} layers are further fine-tuned.\n\n\n\\section{Experiments}\n\\label{experiments}\nWe perform a comprehensive evaluation of our framework on four different application domains: a) activity recognition, b) sleep-stage scoring, c) stress detection, and d) WiFi sensing. For every area, we train the self-supervised networks with each proposed task and determine the quality of the learned representation with either a linear classifier or by fine-tuning with few labeled instances. Furthermore, we also examine the knowledge transferability between related datasets. In the following, we describe the utilized datasets, pre-processing steps, and assessment strategy, including the baselines. \n\n\\subsection{Datasets}\n\\label{sec:datasets}\nWe assess the performance of \\textit{Sense and Learn} on $8$ publicly available multisensor datasets from diverse domains. The brief description of each utilized data source is provided below, with Table~\\ref{tab:dataset} summarizing their major characteristics. \n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Key characteristics of the datasets used in the experiements. The relative class distribution of each dataset is given in Figure~\\ref{fig:cd} of appendix~\\ref{appendix:class_distribution}.}\n\\label{tab:dataset}\n\\small\n\\begin{tabular}{ccccc}\n\\hline\n\\textbf{Dataset} & \\textbf{\\#Subjects} & \\textbf{\\#Classes} & \\textbf{Task} & \\textbf{Inputs} \\\\ \\hline\nHHAR & 9 & 6 & \\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}}Activity\/Context\\\\ Recognition\\end{tabular}} & \\multirow{5}{*}{\\begin{tabular}[c]{@{}c@{}}Accelerometer \\\\ \\&\\\\ Gyroscope\\end{tabular}} \\\\\nMobiAct & 66 & 11 & & \\\\\nMotionSense & 24 & 6 & & \\\\\nUCI HAR & 30 & 6 & & \\\\\nHAPT & 30 & 12 & & \\\\ \\hline\nSleep-EDF & 20 & 5 & Sleep Stage Scoring & EEG \\& EOG \\\\ \\hline\nMIT DriverDb & 17 & 2 & Stress Detection & \\begin{tabular}[c]{@{}c@{}}Heart Rate \\& \\\\ Skin Conductance\\end{tabular} \\\\ \\hline\nWiFi CSI & 6 & 7 & \\begin{tabular}[c]{@{}c@{}}Activity (Behavior)\\\\ Recognition\\end{tabular} & CSI Amplitude \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsubsection*{Activity Recognition}\nFor smartphone-based human activity recognition, we select $5$ datasets containing accelerometer and gyroscope signals, namely: HHAR, MobiAct, UCI HAR, MotionSense, and HAPT. The Heterogeneity Human Activity Recognition (HHAR) dataset~\\cite{stisen2015smart} is collected from $9$ participants, each performing $6$ basic activities (i.e. sitting, standing, walking, stairs-up, stairs-down and biking) for $5$ minutes. A broad range of devices is used for the systematic analysis of sensor, device, and workload-specific heterogeneities across manufacturers. Specifically, each user carried $8$ smartphones on different body locations that were selected from a pool of $36$ devices of different models and brands. Likewise, the sampling rate differs considerably across phones with values ranging between 50Hz-200Hz. The MotionSense dataset~\\cite{malekzadeh2018protecting} is recorded with the aim of inferring personal attributes, such as physical and demographics, in addition to the activities. The iPhone$6$s is placed in the users' front pocket during the collection phase, while they performed $15$ trials of $6$ activities in the same experimental setting. 
In total, $24$ subjects of varying height, weight, age, and gender performed the following $6$ activities: walking, jogging, sitting, standing, downstairs and upstairs. We use this data only for the detection of activities, without concerning ourselves with the identification of other attributes. UCI HAR~\\cite{anguita2013public} comprises data obtained from $30$ subjects with waist-mounted Samsung Galaxy S$2$ devices sampling at $50$Hz. Each participant completed $6$ activities of daily living (i.e. standing, sitting, lying down, walking, downstairs and upstairs) during $2$ trials with a $5$-second resting condition in between. The MobiAct dataset~\\cite{Vavoulas2014TheMD} contains inertial sensor data collected from $66$ participants with Samsung Galaxy S$3$ phones through more than $3200$ trials. The subjects freely placed the device in their trouser pocket to mimic real-life phone usage and placement. We utilize the data of $61$ subjects for whom data of any of the following $11$ activity classes is available: walking, jogging, jumping, upstairs, downstairs, sitting, stand to sit, sit to stand, sitting on a chair, car step-in and car step-out. The Human Activities and Postural Transitions (HAPT) dataset~\\cite{reyes2016transition} is collected from a group of $30$ volunteers with Samsung Galaxy S$2$ devices sampling at $50$Hz. The phone was mounted on the waist of each subject, who completed $3$ dynamic activities (walking, upstairs, downstairs), $3$ static posture activities (lying, sitting, standing), and $6$ postural transitions (sit-to-lie, lie-to-sit, stand-to-sit, sit-to-stand, stand-to-lie, and lie-to-stand), resulting in $12$ classes. \n\n\\subsubsection*{Sleep Stage Scoring}\nWe use the PhysioNet Sleep-EDF\\footnote{version 1} dataset~\\cite{kemp2000analysis, goldberger2000physiobank} consisting of $61$ polysomnograms (PSGs) from $20$ subjects. It comprises participants from $2$ different studies: a) the effect of age on sleep, and b) the effect of Temazepam on sleep. We use the $2$ whole-night PSG sleep recordings sampled at $100$Hz from the former study. Each record contains $2$ electroencephalogram (EEG) signals from the Fpz-Cz and Pz-Oz electrode locations, electrooculography (EOG), electromyography (EMG), and event markers. Some instances also have oro-nasal respiration and body temperature. The hypnograms ($30$-second epochs) were manually annotated by sleep experts with one of $8$ sleep classes (Wake, N$1$, N$2$, N$3$, N$4$, Rapid Eye Movement, Movement, Unknown), based on the R\\&K standard. We utilize the EEG (Fpz-Cz) and EOG signals in our evaluation. Following previous work~\\cite{supratak2017deepsleepnet}, we merged N$3$ and N$4$ into a single class N$3$ and discarded Movement and unscored samples, to have $5$ sleep stages.\n\n\\subsubsection*{Stress Detection}\nFor physiological stress recognition, we utilize the MIT DriverDb dataset~\\cite{healey2005detecting, goldberger2000physiobank}, which is collected during a real-world driving experiment in a city, on a highway, and in a resting condition. The publicly available version on PhysioNet consists of $17$ drives out of $24$, each lasting between $1$ and $1.5$ hours. The following physiological signals are recorded: EMG, electrocardiography (ECG), galvanic skin response (GSR) from hand and foot, heart rate (HR; derived from ECG), and breathing rate. The signals were originally sampled at different rates but downsampled to $15.5$Hz. 
The `marker' signal provided in the dataset is used to derive the binary ground truth, indicating a change of drive (i.e. resting, city, or highway driving), which is found to be correlated with distress level through post-driving video analysis by experts~\\cite{healey2005detecting}. We use the following $10$ drives in our experiments: $04$, $05$, $06$, $07$, $08$, $09$, $10$, $11$, $12$, and $16$, which have HR and GSR (from hand), since the collection of the other signals in real life is quite problematic.\n\n\\subsubsection*{WiFi Sensing}\nDevice-free context recognition with WiFi is an emerging area of research. To show the robustness of our self-supervised methods on this task, particularly on a unimodal signal, we utilize the WiFi channel state information (CSI) dataset~\\cite{yousefi2017survey} for activity recognition. This dataset is collected in a controlled office environment, where the transmitting (router) and receiving (Intel 5300 NIC) devices were $3$m apart, and the CSI was recorded at $1$kHz. The $6$ subjects performed $20$ trials for each of the following $7$ activities: lying down, falling, walking, running, sitting down, standing up, and picking something up. The ground truth was obtained from videos recorded during the data collection process, and the CSI amplitude is used for learning a model.\n\n\\subsection{Pre-processing and Assessment Strategy}\nTo prepare the data for sequence modeling with a temporal convolutional network, we utilize a sliding window approach to segment the signals into fixed-sized inputs. In the case of the activity recognition task, we choose a window size of $400$ samples with a $50\\%$ overlap, except for the HAPT dataset, where a segment size of $200$ samples is used due to the short duration of posture-transition activities. We found these window sizes to be optimal based on earlier experiments, as each activity dataset has a different sampling rate. We did not perform resampling, as the sampling rates among phones do not vary significantly and $1$D convolutional layers with wide kernel sizes learn to adapt to the specific characteristics of the input signal. However, if the sampling rate varies considerably, resampling might be essential. For Sleep-EDF, we applied minimal pre-processing based on existing work~\\cite{supratak2017deepsleepnet} to formulate the problem as a $5$-stage sleep classification and used the $30$-second epochs as model input. In the WiFi sensing task, we process the input in the same way as the original work that open-sourced the data~\\cite{yousefi2017survey} and utilize a CSI signal downsampled to $500$Hz, which corresponds to an input window of $1$ second. The heart rate and skin conductance signals from MIT DriverDb are processed to remove artifacts, and these signals are mean-normalized for each subject using the mean and standard deviation calculated from the baseline (or resting phase) of the data collection, following~\\cite{saeed2017personalized}. We use a window size of $30$ seconds with $50\\%$ overlap to generate input segments for the model. We randomly split the datasets based on subjects into train and test sets, withholding $70\\%$ of the users for training and the remaining $30\\%$ for testing. We further divide the training set to obtain a validation set of size $20\\%$, which is used for hyper-parameter tuning and early stopping. Most importantly, we also perform $5$-fold cross-validation for thorough performance analysis whenever it is applicable. 
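To make the segmentation and the subject-based splitting concrete, a simplified sketch is given below; the array layout, the label-assignment rule, and the function names are our own, and the window parameters correspond to the activity recognition setting described above:

\\begin{verbatim}
import numpy as np

def sliding_windows(signal, labels, window=400, overlap=0.5):
    """Segment a (time, channels) signal into fixed-size windows with the
    given overlap; here each window inherits the label of its last sample,
    which is a simplification."""
    step = int(window * (1.0 - overlap))
    starts = range(0, len(signal) - window + 1, step)
    x = np.stack([signal[s:s + window] for s in starts])
    y = np.asarray([labels[s + window - 1] for s in starts])
    return x, y

def subject_split(subject_ids, train_frac=0.7, seed=42):
    """Split subjects (not windows) into train and test groups so that no
    subject appears in both sets."""
    rng = np.random.default_rng(seed)
    subjects = rng.permutation(np.unique(subject_ids))
    train_subjects = set(subjects[:int(train_frac * len(subjects))].tolist())
    train_mask = np.asarray([s in train_subjects for s in subject_ids])
    return train_mask, ~train_mask
\\end{verbatim}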
Furthermore, we z-normalize the samples with mean and standard deviation calculated from the training set. For self-supervision, we pre-train the models using only the training set, including for the transfer learning experiments. The self-labeled examples are generated for each task on-the-fly during the learning phase, as defined earlier in Section~\\ref{sec:sslt}. \n\nFor each recognition problem, we treat a fully-supervised model directly trained (in an end-to-end manner) with the annotated data of an end-task as a `baseline.' Likewise, we compare self-supervised tasks against pre-training with a standard autoencoder. As explained earlier, we assess the quality of the self-supervised representation (including in the transfer-learning setting) through training a linear classifier or fine-tuning the last convolutional layer of the encoder on the downstream tasks. For learning in the low-data regime, we use a self-supervised network as initialization to quickly learn a model with few labeled examples. In all the cases, we assess the network performance with a weighted version of F-score and Cohen's kappa (see appendix~\\ref{appendix:kappa_results}); as these metrics are robust to unbalanced class distributions while being sensitive to misclassifications. \n\n\\subsection{Results and Discussion}\n\n\n\\begin{table}[t]\n \\caption{Performance evaluation (weighted F-score) of self-supervised representations with a linear classifier. The unsupervised pre-trained networks achieve competitive performance with the fully-supervised networks. In WiFi-CSI sub-table, the entries with hyphen indicate auxiliary tasks that cannot be applied to unimodal signals. See Table~\\ref{tab:kappa_linear} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:linear}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.794$\\pm$0.014 & 0.934$\\pm$0.005 & 0.952$\\pm$0.007 & 0.962$\\pm$0.006 \\\\\n Random Init. & 0.218$\\pm$0.062 & 0.383$\\pm$0.109 & 0.246$\\pm$0.090 & 0.221$\\pm$0.079 \\\\\n Autoencoder & 0.777$\\pm$0.003 & 0.726$\\pm$0.001 & 0.675$\\pm$0.019 & 0.782$\\pm$0.042 \\\\ \\hline\n Sensor Blend & 0.823$\\pm$0.006 & \\cellcolor[gray]{0.93}0.912$\\pm$0.001 & \\cellcolor[gray]{0.93}0.911$\\pm$0.009 & 0.902$\\pm$0.010 \\\\\n Fusion Magnitude & \\cellcolor[gray]{0.93}0.848$\\pm$0.005 & 0.905$\\pm$0.001 & \\cellcolor[gray]{0.93} 0.925$\\pm$0.011 & 0.895$\\pm$0.010 \\\\\n Feature Prediction & 0.817$\\pm$0.005 & 0.902$\\pm$0.001 & 0.849$\\pm$0.010 & 0.899$\\pm$0.010 \\\\\n Transformations & \\cellcolor[gray]{0.93} 0.854$\\pm$0.005 & \\cellcolor[gray]{0.93}0.911$\\pm$0.002 & 0.869$\\pm$0.013 & \\cellcolor[gray]{0.93}0.906$\\pm$0.011 \\\\\n Temporal Shift & 0.834$\\pm$0.008 & 0.909$\\pm$0.003 & 0.851$\\pm$0.016 & 0.747$\\pm$0.027 \\\\\n Modality Denoise. 
& 0.807$\\pm$0.006 & 0.817$\\pm$0.004 & 0.675$\\pm$0.019 & 0.798$\\pm$0.035 \\\\\n Odd Segment & 0.835$\\pm$0.006 & 0.901$\\pm$0.001 & 0.869$\\pm$0.012 & 0.888$\\pm$0.010 \\\\\n Tripet Loss & 0.773$\\pm$0.005 & 0.841$\\pm$0.002 & 0.910$\\pm$0.008 & \\cellcolor[gray]{0.93}0.905$\\pm$0.011 \\\\ \\hline\n \\end{tabular}\n } \\hspace{0.01cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.899$\\pm$0.009 & 0.825$\\pm$0.005 & 0.824$\\pm$0.029 & 0.964$\\pm$0.007 \\\\\n Random Init. & 0.119$\\pm$0.041 & 0.149$\\pm$0.127 & 0.321$\\pm$0.198 & 0.153$\\pm$0.04 \\\\ \n Autoencoder & 0.669$\\pm$0.003 & 0.679$\\pm$0.012 & 0.876$\\pm$0.002 & 0.767$\\pm$0.005 \\\\ \\hline\n Sensor Blend & 0.818$\\pm$0.006 & 0.779$\\pm$0.004 & 0.890$\\pm$0.002 & - \\\\\n Fusion Magnitude & 0.815$\\pm$0.004 & \\cellcolor[gray]{0.93}0.782$\\pm$0.006 & 0.892$\\pm$0.004 & - \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.822$\\pm$0.002 & 0.671$\\pm$0.022 & 0.866$\\pm$0.000 & \\cellcolor[gray]{0.93}0.837$\\pm$0.005 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.841$\\pm$0.003 & 0.778$\\pm$0.006 & \\cellcolor[gray]{0.93}0.908$\\pm$0.001 & 0.768$\\pm$0.007 \\\\\n Temporal Shift & 0.782$\\pm$0.004 & 0.707$\\pm$0.012 & 0.883$\\pm$0.005 & 0.731$\\pm$0.011 \\\\\n Modality Denoise. & 0.738$\\pm$0.002 & \\cellcolor[gray]{0.93}0.784$\\pm$0.002 & 0.902$\\pm$0.001 & - \\\\\n Odd Segment & 0.790$\\pm$0.003 & 0.772$\\pm$0.003 & \\cellcolor[gray]{0.93}0.885$\\pm$0.002 & 0.774$\\pm$0.008 \\\\\n Tripet Loss & 0.815$\\pm$0.002 & 0.775$\\pm$0.003 & 0.891$\\pm$0.001 & 0.749$\\pm$0.009 \\\\ \\hline\n \\end{tabular}\n }\n\\end{table}\n\n\\subsubsection*{Linear separability and effects of fine-tuning the shared encoder}\n\\label{subsec:linear}\nFor assessing the quality of the self-supervised embeddings, we conduct experiments with a linear classifier on the end-tasks. Linear separability is a standard way of measuring the power of self-supervised-learned features in the literature~\\cite{oord2018representation, tagliasacchi2019self, gidaris2018unsupervised}, i.e. if the representations disentangle factors of variations in the input, then it becomes easier to solve subsequent tasks. Here, we train a linear classifier (i.e. logistic regression) $10$-times on top of a frozen network (pre-trained with self-supervision) using annotated data of the downstream task. Table~\\ref{tab:linear} summarizes the results on eight benchmark datasets from four application domains. We compare the performance against a fully-supervised network that is trained in an end-to-end manner (directly with annotated data). We also consider unsupervised pre-training with a standard autoencoder to analyze the improvements of self-supervision. Likewise, a linear model is also trained with random features (i.e. from a randomly initialized frozen network) to estimate its learning capacity. On the activity recognition problem, the self-supervised features achieve very close results on multiple benchmarks to training an entire network with annotated instances. On the HHAR dataset, the transformation and fusion magnitude prediction tasks improve the F-score by $7$ points. On other datasets with a large number of classes, such as HAPT and MobiAct, our simple proxy tasks learn features that are generalizable to end-tasks. 
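\n\nAs a concrete illustration of the linear evaluation protocol used throughout this subsection, the sketch below fits a logistic-regression classifier on embeddings from a frozen encoder and averages the weighted F-score over ten runs; the \\texttt{encoder} callable and the helper name are assumptions made for illustration and not our exact implementation.\n\\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def linear_evaluation(encoder, x_train, y_train, x_test, y_test, runs=10):
    # encoder: frozen, pre-trained network mapping raw windows to embeddings
    z_train, z_test = encoder(x_train), encoder(x_test)
    scores = []
    for seed in range(runs):
        clf = LogisticRegression(max_iter=1000, random_state=seed)
        clf.fit(z_train, y_train)
        scores.append(f1_score(y_test, clf.predict(z_test),
                               average="weighted"))
    return np.mean(scores), np.std(scores)
\\end{verbatim}\n\n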
In the case of sleep stage scoring, linear layers trained with features from the modality denoising and the fusion magnitude tasks achieve a kappa of $0.70$, which is impressive given that the representations are learned from completely unlabeled data. Similarly, in a stress classification problem, the self-supervised networks outperform a fully-supervised model with a large margin. The transformations and modality denoising tasks achieve kappa scores of $0.80$ and $0.79$, respectively. We believe it is because pre-training results in generic features, whereas a model trained directly on the end-task suffers from overfitting. Lastly, we evaluate on the device-free sensing problem using the amplitude of WiFi CSI. Although we designed the auxiliary tasks for multisensorinput, we find a subset of these to be applicable for self-supervision with a unimodal input. We achieve good results with self-supervised features even though the dataset size is relatively small, and input is noisy, complex and high-dimensional. The linear layer trained on top of the feature-prediction task representations achieves an F-score of $83\\%$ compared to the end-to-end training F-score of $96\\%$.\n\n\n\\begin{table}[htbp]\n \\caption{Improvement in recognition rate (weighted F-score) by fine-tuning the shared layers of the encoder while training on the end-task. We observe a significant increase in performance across datasets with self-supervised networks, either surpassing or achieving results on-par with the baseline. See Table~\\ref{tab:kappa_ft} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:ft}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.794$\\pm$0.014 & 0.934$\\pm$0.005 & 0.952$\\pm$0.007 & 0.961$\\pm$0.008 \\\\\n Random Init. & 0.218$\\pm$0.062 & 0.383$\\pm$0.109 & 0.246$\\pm$0.090 & 0.221$\\pm$0.079 \\\\\n Autoencoder & 0.835$\\pm$0.003 & 0.927$\\pm$0.003 & 0.938$\\pm$0.002 & 0.943$\\pm$0.004 \\\\ \\hline\n Sensor Blend & \\cellcolor[gray]{0.93}0.841$\\pm$0.009 & \\cellcolor[gray]{0.93}0.943$\\pm$0.004 & 0.937$\\pm$0.004 & \\cellcolor[gray]{0.93}0.956$\\pm$0.003 \\\\\n Fusion Magnitude & 0.831$\\pm$0.006 & 0.938$\\pm$0.005 & 0.945$\\pm$0.002 & 0.946$\\pm$0.002 \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.840$\\pm$0.007 & 0.937$\\pm$0.002 & 0.951$\\pm$0.003 & 0.943$\\pm$0.003 \\\\\n Transformations & 0.828$\\pm$0.006 & \\cellcolor[gray]{0.93}0.946$\\pm$0.004 & \\cellcolor[gray]{0.93}0.951$\\pm$0.005 & \\cellcolor[gray]{0.93}0.954$\\pm$0.006 \\\\\n Temporal Shift & 0.831$\\pm$0.008 & 0.939$\\pm$0.002 & 0.934$\\pm$0.006 & 0.909$\\pm$0.008 \\\\\n Modality Denoise. & 0.840$\\pm$0.003 & 0.938$\\pm$0.002 & 0.928$\\pm$0.006 & 0.941$\\pm$0.001 \\\\\n Odd Segment & 0.826$\\pm$0.003 & 0.938$\\pm$0.005 & 0.935$\\pm$0.006 & 0.953$\\pm$0.003 \\\\\n Tripet Loss & 0.835$\\pm$0.013 & 0.912$\\pm$0.006 & \\cellcolor[gray]{0.93}0.955$\\pm$0.003 & 0.950$\\pm$0.002 \\\\ \\hline\n \\end{tabular} \n } \\hspace{0.02cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.899$\\pm$0.009 & 0.825$\\pm$0.005 & 0.824$\\pm$0.029 & 0.964$\\pm$0.007 \\\\\n Random Init. 
& 0.119$\\pm$0.041 & 0.149$\\pm$0.127 & 0.321$\\pm$0.198 & 0.153$\\pm$0.048 \\\\\n Autoencoder & 0.881$\\pm$0.002 & 0.805$\\pm$0.008 & 0.877$\\pm$0.002 & \\cellcolor[gray]{0.93}0.898$\\pm$0.025 \\\\ \\hline\n Sensor Blend & 0.895$\\pm$0.003 & 0.809$\\pm$0.003 & 0.881$\\pm$0.014 & - \\\\\n Fusion Magnitude & 0.898$\\pm$0.002 & 0.813$\\pm$0.003 & 0.882$\\pm$0.011 & - \\\\\n Feature Prediction & 0.893$\\pm$0.003 & 0.748$\\pm$0.006 & 0.859$\\pm$0.003 & 0.832$\\pm$0.037 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.898$\\pm$0.002 & \\cellcolor[gray]{0.93}0.822$\\pm$0.005 & \\cellcolor[gray]{0.93}0.890$\\pm$0.005 & 0.823$\\pm$0.028 \\\\\n Temporal Shift & 0.876$\\pm$0.007 & 0.779$\\pm$0.005 & 0.883$\\pm$0.005 & 0.736$\\pm$0.063 \\\\\n Modality Denoise. & 0.885$\\pm$0.003 & \\cellcolor[gray]{0.93}0.819$\\pm$0.002 & \\cellcolor[gray]{0.93}0.889$\\pm$0.001 & - \\\\\n Odd Segment & \\cellcolor[gray]{0.93}0.899$\\pm$0.003 & 0.804$\\pm$0.003 & 0.853$\\pm$0.023 & \\cellcolor[gray]{0.93}0.860$\\pm$0.030 \\\\\n Tripet Loss & 0.887$\\pm$0.005 & 0.805$\\pm$0.003 & 0.884$\\pm$0.002 & 0.755$\\pm$0.022 \\\\ \\hline\n \\end{tabular}\n } \n\\end{table}\n\nIn Table~\\ref{tab:ft}, we notice a substantial improvement on the downstream tasks if the last convolutional layer of the encoder (see Figure~\\ref{fig:architecture}) is fine-tuned while training the linear classifier. Comparing with the results given in Table~\\ref{tab:linear}, it can be seen that the recognition rate of the models improved significantly, achieving similar results as the fully-supervised baselines; while features learned by input reconstruction with an autoencoder scored low compared to our proposed surrogate tasks even after fine-tuning, except for the WiFi sensing task. On the MobiAct dataset, transformations and sensor blend tasks gain ~$2$ points improvement in kappa. Likewise, for MotionSense, HAPT and UCI HAR, we bridge the gap between fully-supervised and self-supervised models. Interestingly, fine-tuning did not help much with MIT DriverDb compared to training a linear classifier. These results agree with our intuition that training on an end-task directly in this case results in overfitting. \n\nIn summary, the evaluation with a linear classifier trained on top of a pre-trained (self-supervised) feature extractor highlights that the representations learned with auxiliary tasks are broadly useful and better than autoencoding-based approaches. It also confirms our hypothesis that general-purpose representations can be learned directly from raw input without any strongly (task-specific) labeled data. It is important to note we did not aim to surpass fully-supervised approaches in this setting. Supervised methods will be better because they have direct access to task-specific labels, while self-supervised objectives train a network without any foresight of the end-task. It can also be seen from the results of fine-tuning the encoder, as presented in Table~\\ref{tab:ft}, that the network performance matches the supervised methods or improves upon, when shared layers are further trained on the downstream tasks. Likewise, it might be possible to improve generalization of self-supervised models through pre-training on larger unlabeled datasets in a real-world setting.\n\n\\subsubsection*{Impact on learning in low-data regime}\nWe next investigate the performance of our approach in a semi-supervised (or low-data) setting. 
For this purpose, we pre-train an encoder using unlabeled instances for each self-supervised task and utilize it as initialization for efficiently learning with few labeled instances on the end-task; for the end-task, we add a randomly-initialized dense layer with $1024$ hidden units before a linear output layer. The non-linear classifier is then learned and the encoder is fine-tuned with the specified number of instances per class. Specifically, for the defined auxiliary tasks and datasets, we use $5$ and $10$ examples for each category. We want to highlight that in an on-device learning setting, a few labeled instances can be pooled from multiple users quite easily (e.g. $2$-$3$ examples per user), as compared to accumulating several hundred for learning fully-supervised models. Likewise, personalization can also be achieved by asking for a few labels for targeted classes. In Figure~\\ref{fig:ld}, we provide the average weighted F-score over $10$ independent experiment runs, comparing training from scratch (FS) with using self-supervised pre-training as initialization for learning a robust classifier. We show that, in contrast to the purely supervised approach, leveraging unlabeled data for learning network parameters improves the performance on the end-task. Specifically, our self-supervised models greatly improve the F-score in the low-data setting, in some cases achieving F-scores nearly as good as networks trained with the entire labeled data. Similarly, the self-supervised models perform better than the autoencoder, which shows that, despite their simplicity, our proposed auxiliary tasks force the network to learn highly-generalizable features. For each experiment run, we randomly sample the stated number of annotated instances and use these to train all the networks, including the fully-supervised baselines. \n\nOn activity recognition, our methodology significantly improves the performance in the low-data regime; for example, on the HHAR dataset with $5$ and $10$ instances, the temporal shift and transformations tasks gain $4$ and $7$ points over the fully-supervised models' F-scores of $0.60$ and $0.68$, respectively. Similarly, for MobiAct, pre-training with the temporal shift task helps achieve an F-score of $0.75$ ($5$ instances) and $0.82$ ($10$ instances), compared to $0.61$ and $0.73$, respectively, for networks learned from scratch. Furthermore, we observe consistent improvements on UCI HAR, HAPT, and MotionSense with $5$ instances per class. The attained F-scores are $0.91$, $0.77$ and $0.83$, in contrast to $0.90$, $0.59$, and $0.77$ for the fully-supervised models, respectively. Our method represents a $26$-point increase in F-score on the challenging problem of sleep stage scoring. Likewise, on the physiological stress detection and device-free sensing problems, the benefit of pre-training with auxiliary tasks is further apparent, where the presented methods achieve a $12$-point improvement in F-score over the baseline. These results suggest that self-supervision can greatly help with learning general-purpose representations that work well in the low-data regime. We also want to highlight that although the selection of an equal number of instances results in a balanced training set, we use the full test sets (as in earlier experiments) for evaluation, which could be imbalanced. Importantly, utilizing even bigger unlabeled datasets and combining weak-supervision methods can boost the quality of the learned representations.
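\n\nA minimal PyTorch-style sketch of this low-data setup is given below; it assumes the pre-trained encoder already outputs a flat feature vector, and the helper names are illustrative rather than taken from our implementation.\n\\begin{verbatim}
import torch
import torch.nn as nn

def make_low_data_model(pretrained_encoder, feature_dim, num_classes):
    # Randomly initialized non-linear head (1024 hidden units) placed
    # on top of the self-supervised encoder; the whole model is fine-tuned.
    head = nn.Sequential(nn.Linear(feature_dim, 1024),
                         nn.ReLU(),
                         nn.Linear(1024, num_classes))
    return nn.Sequential(pretrained_encoder, head)

def sample_k_per_class(x, y, k):
    # Randomly draw k labeled examples from every class.
    idx = []
    for c in torch.unique(y):
        c_idx = torch.nonzero(y == c, as_tuple=False).squeeze(1)
        idx.append(c_idx[torch.randperm(c_idx.numel())[:k]])
    idx = torch.cat(idx)
    return x[idx], y[idx]
\\end{verbatim}\nThe combined network would then be trained with a standard classification loss (e.g., cross-entropy) on the sampled subset, as in the other experiments.\n\n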
\n\nWe emphasize that the broader objective of self-supervised methods is to learn high-level semantic features that can be used to solve an array of downstream tasks with minimal labeled data. The evaluation of our presented auxiliary tasks clearly highlights the benefit of pre-training the network with unlabeled data to achieve better generalization on the tasks of interest, with very few labeled instances. To the best of our knowledge, we, for the first time, evaluate self-supervised methods in a semi-supervised setting for problems involving multisensor data as earlier work developed fully-supervised network architectures or used classical autoencoding-based approaches for pre-training, followed by network fine-tuning with the entire labeled data. Overall, our approach provides a base for further work in developing sensing techniques that can achieve on-device personalization and perform continual, and few-shot learning, as the presented framework considerably reduces the requirement of labeled data from human annotators to learn the end-task models.\n\n\\begin{figure}[htbp]\n\\centering\n\\subfloat[HHAR]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/hhar_ld_summarized.pdf}} \n\\subfloat[MobiAct]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/ma_ld_summarized.pdf}}\\\\\n\\subfloat[MotionSense]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/ms_ld_summarized.pdf}} \n\\subfloat[UCI HAR]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/uci_ld_summarized.pdf}} \\\\\n\\subfloat[HAPT]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/hapt_ld_summarized.pdf}} \n\\subfloat[Sleep-EDF]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/edf_ld_summarized.pdf}} \\\\\n\\subfloat[MIT DriverDb]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/driverdb_ld_summarized.pdf}} \n\\subfloat[WiFi CSI]{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/csi_ld_summarized.pdf}}\\\\\n\\subfloat{\\includegraphics[width=6.4cm]{Figures\/Low_Data\/ld_caption.pdf}}\n\\caption{Contribution of self-supervised pre-training for improving end-task performance with few labeled data. We utilize pre-trained self-supervised models as initialization for learning in a semi-supervised setting. The subplots provide the mean F-score of $10$ independent runs, where randomly selected instances are used to train the models. 
The gray bars represent the results of the networks trained only on the labeled instances, while the vertical black line shows the results of a fully-supervised model trained with the entire labeled data.}\n\\label{fig:ld}\n\\end{figure}\n\n\\subsubsection*{Effectiveness in a transfer learning setting}\n\n\\begin{figure}[t]\n\\subfloat[Autoencoder]{\\includegraphics[width=5.5cm]{Figures\/Transfer\/aerl_summarized.pdf}} \n\\subfloat[Sensor Blend]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/sbrl_summarized.pdf}}\n\\subfloat[Fusion Magnitude]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/swrl_summarized.pdf}} \\\\\n\\subfloat[Feature Prediction]{\\includegraphics[width=5.5cm]{Figures\/Transfer\/fprl_summarized.pdf}} \n\\subfloat[Transformations]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/tprl_summarized.pdf}} \n\\subfloat[Temporal Shift]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/sdrl_summarized.pdf}} \\\\\n\\subfloat[Modality Denoising]{\\includegraphics[width=5.5cm]{Figures\/Transfer\/msrl_summarized.pdf}}\n\\subfloat[Odd Segment]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/osrl_summarized.pdf}}\n\\subfloat[Triplet Loss]{\\includegraphics[width=4.2cm]{Figures\/Transfer\/tlrl_summarized.pdf}}\n\\caption{Generalization of the self-supervised representations under a transfer learning setting. We evaluate the transferability of the features on the activity recognition task by pre-training networks with each auxiliary task on every dataset. For solving downstream tasks, we train a linear classifier on top of the frozen feature extractor $10$ times, independently, and report the average F-score. The diagonal entries denote the cases where the source and target datasets are the same, with the x-axis and y-axis representing target and source datasets, respectively.}\n\\label{fig:tf}\n\\end{figure}\n\nIn a real-world learning setup, there is a high chance that we are interested in a different dataset and downstream task than the one originating from the unlabeled data accessible for pre-training. A broadly useful auxiliary task is thus one that produces generalizable representations that transfer well to other related end-tasks. To examine the transferability of the features learned with our proxy tasks, we evaluate their performance on the activity recognition datasets. To this end, we pre-train the feature extractor with each self-supervised objective (i.e. by discarding the semantic class labels) for all five datasets (see section~\\ref{sec:datasets}) and investigate their performance through a) training a linear classifier with the entire target annotated data and b) fine-tuning the network end-to-end with few labeled data (i.e. learning an activity classifier with $5$ and $10$ instances of each class from the target dataset). Figure~\\ref{fig:tf} provides the results of the source-to-target transfer of self-supervised models trained with nine different auxiliary losses. The diagonal entries of each subplot represent the F-scores when the source and target datasets are the same. In comparison with autoencoder pre-training, features learned with our tasks transfer well between datasets. We observe that even leveraging smaller unlabeled datasets produces useful features; for example, features learned with the sensor blend task on UCI HAR achieve an F-score of $0.91$ on the HHAR dataset. On the HAPT dataset with low input resolution (i.e.
a segment size of $200$ samples) and complex postural activities, transfer learning improves the performance by approximately $8$ percentage points in F-score over pre-training on the same dataset. Importantly, our results are also competitive with the fully-supervised baselines on the respective datasets. \n\nWe further examine whether the transferred self-supervised models are beneficial for learning in the low-data regime, i.e. when few labeled instances are available from the target data but separate unannotated data is available for pre-training. We utilize the same network configuration as discussed earlier for the low-data experiments and fine-tune the model end-to-end. We randomly sample a specified number of instances and perform experiments $10$ times, utilizing the same instances for both types of networks (i.e. pre-trained and baseline), and report the average F-score. In Figure~\\ref{fig:tf_ld}, we present the results of the best-performing auxiliary task for each source-to-target transfer, where gray-colored bars show the fully-supervised baseline. Our experiments show that the features learned from different but related datasets do transfer well and improve the recognition rate even when as few as $5$ examples per class are available. On the MobiAct dataset, our approach with HAPT as source data results in F-scores of $0.68$ and $0.78$ (with $5$ and $10$ instances per class), compared to $0.61$ and $0.73$ for training from scratch. Similarly, with HAPT as a target, transferring from UCI HAR using the sensor blend task improves the F-score from $0.59$ to $0.68$ and from $0.72$ to $0.78$. Interestingly, on UCI HAR and MotionSense, the performance attained with our approach is very close to that of the purely supervised models trained with the entire labeled data (see Table~\\ref{tab:linear}). \n\nLearning generalizable representations that can be reused for solving related tasks is an important property to have in a learning system. Our investigation of transferring unsupervised pre-trained models consistently highlights substantial performance improvements, indicating that the self-supervised features are broadly useful across different subjects, devices, environments and data collection protocols. In particular, the data efficiency enabled by our method in the low-data regime provides further evidence of semantic feature learning without merely over-fitting on the source dataset. It is also important to note that, compared to earlier work which focuses on supervised transfer or joint-training on source and target datasets, we provide an evaluation of unsupervised transfer and its ability to boost performance even with few labeled data. Likewise, self-supervised learning has other benefits, as it has been shown to improve the adversarial robustness and uncertainty estimates of deep models as compared to purely supervised methods~\\cite{hendrycks2019using}.
Although we did not study these aspects explicitly in this work, the results of transfer learning across domains hint that our auxiliary tasks also enhance the model's robustness; we leave an in-depth study for future work.\n\n\\begin{figure}[htbp]\n\\subfloat[HHAR]{\\includegraphics[width=8cm]{Figures\/Transfer\/hhar-tfld-summarized.pdf}} \\\\\n\\subfloat[MobiAct]{\\includegraphics[width=8cm]{Figures\/Transfer\/ma-tfld-summarized.pdf}} \\\\\n\\subfloat[MotionSense]{\\includegraphics[width=8cm]{Figures\/Transfer\/motionsense-tfld-summarized.pdf}} \\\\\n\\subfloat[UCI HAR]{\\includegraphics[width=8cm]{Figures\/Transfer\/uci_har-tfld-summarized.pdf}} \\\\\n\\subfloat[HAPT]{\\includegraphics[width=8cm]{Figures\/Transfer\/hapt-tfld-summarized.pdf}} \\\\\n\\subfloat{\\includegraphics[width=6.5cm]{Figures\/Low_Data\/ld_caption.pdf}}\n\\caption{Contribution of self-supervised learning and fine-tuning of the transferred networks when learning from few data. We utilize a model pre-trained on each source dataset and train a non-linear classifier on the target task to assess the effectiveness of self-supervision for improving the recognition rate. The networks are fine-tuned $10$ times with the specified number of instances per class. For each source dataset, we report the mean results of only the best-performing auxiliary task in order to improve readability.}\n\\label{fig:tf_ld}\n\\end{figure}\n\n\\subsubsection*{Cross-validation to determine robustness against subject variations}\n\\label{subsec:cv}\nTo validate the stability of our methodology against variations in the subjects' data utilized for pre-training and downstream task evaluation, we perform $5$-fold cross-validation based on a user split (i.e. the train-test division ($80-20$) is based on users with no overlap between them, so train\/test users are entirely independent) and follow the same experimental setup as earlier. For each fold's data and surrogate task, we pre-train the models and train a linear classifier on top of the frozen network. The fully-supervised baseline is trained in an end-to-end manner, directly with the semantic labels. Table~\\ref{tab:cv_linear} summarizes the results averaged across the $5$ folds on the eight considered datasets. We observe that the results achieved with self-supervision are consistent with the earlier experiments. This highlights that our approach for sensory representation learning works well with different users' data and is robust to inter-subject differences. On the MobiAct dataset, the feature prediction and transformation recognition tasks achieve an F-score of $0.90$, which is very close to the fully-supervised model's F-score of $0.91$. Likewise, on MIT DriverDb, self-supervision provides an impressive improvement over training from scratch. To summarize, these results suggest that the representations learned from unlabeled data capture useful features that, to a large extent, suffice for solving the end-task with a simple linear layer. Furthermore, we explore fine-tuning the last convolutional layer of the encoder while training a linear layer on the downstream tasks. In Table~\\ref{tab:cv_ft}, we show that fine-tuning a shared layer leads to better performance than a fully-supervised model trained from scratch on most of the datasets. The feature prediction task on the HHAR dataset achieved an F-score of $0.87$, which is $5$ points above the baseline. Likewise, on other datasets and tasks, our technique either bridges the gap or achieves broadly similar results to the supervised models.
We think that careful fine-tuning of the architecture and related hyper-parameters could further improve the recognition rate of the self-supervised networks. We note that a direct comparison of our approach with existing methods is not feasible, as we learn representations from unlabeled data and evaluate through training a linear classifier, whereas prior methods focus on fully-supervised learning with different architectures and evaluation strategies. However, for context, we summarize related results here, which are only indicative. On MotionSense, our sensor blend task achieves an F-score of $0.92$ compared to $0.95$ and $0.86$ accuracy for trial- and subject-wise evaluation in~\\cite{malekzadeh2018protecting}. For SleepEDF, our fusion magnitude task scores a kappa of $0.72$ compared to $0.76$ for a sophisticated fully-supervised model~\\cite{supratak2017deepsleepnet}. Likewise, on the WiFi sensing task, the feature prediction proxy task results in an F-score of $0.85$ compared to the $0.90$ accuracy of an LSTM-based model~\\cite{yousefi2017survey} over six classes.\n\nWe also asked whether pre-training with our auxiliary tasks is robust to the particular subjects' data that is utilized, as this is critical for learning in a real-world setting due to the non-curated nature of the data. We found that the proxy tasks are highly stable and result in performance similar to the earlier experiments when a linear classifier is trained on top of the self-supervised feature extractors. This analysis further shows that the self-supervised features are not necessarily subject-specific, but are general in nature. Moreover, our evaluation demonstrates that there is room for improvement through selecting problem- or task-specific network architectures and using larger unlabeled datasets for unsupervised learning. Specifically, it would be valuable to explore unifying supervised and self-supervised objectives in a multi-task setting to personalize or adapt sensing models directly on user devices.\n\n\n\\begin{table}[htbp]\n \\caption{Comparison of self-supervised representation learning to the fully-supervised approach with $5$-fold cross-validation based on a user split. We pre-train the feature extractors on each fold's data and learn a linear classifier for the end-task as usual. We report the weighted F-score averaged over the 5 folds, highlighting the robustness of our method to subject variations. See Table~\\ref{tab:kappa_cv_linear} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:cv_linear}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.844$\\pm$0.090 & 0.917$\\pm$0.017 & 0.960$\\pm$0.007 & 0.951$\\pm$0.025 \\\\\n Random Init.
& 0.199$\\pm$0.047 & 0.394$\\pm$0.086 & 0.284$\\pm$0.086 & 0.268$\\pm$0.208 \\\\\n Autoencoder & 0.722$\\pm$0.085 & 0.736$\\pm$0.021 & 0.752$\\pm$0.050 & 0.831$\\pm$0.041 \\\\ \\hline\n Sensor Blend & 0.829$\\pm$0.061 & 0.886$\\pm$0.010 & \\cellcolor[gray]{0.93}0.920$\\pm$0.019 & \\cellcolor[gray]{0.93}0.915$\\pm$0.038 \\\\\n Fusion Magnitude & \\cellcolor[gray]{0.93}0.841$\\pm$0.040 & 0.889$\\pm$0.014 & \\cellcolor[gray]{0.93}0.924$\\pm$0.025 & 0.899$\\pm$0.049 \\\\\n Feature Prediction & 0.820$\\pm$0.068 & \\cellcolor[gray]{0.93}0.900$\\pm$0.016 & 0.900$\\pm$0.025 & 0.896$\\pm$0.043 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.822$\\pm$0.059 & \\cellcolor[gray]{0.93}0.900$\\pm$0.011 & 0.898$\\pm$0.013 & \\cellcolor[gray]{0.93}0.916$\\pm$0.018 \\\\\n Temporal Shift & 0.811$\\pm$0.057 & 0.890$\\pm$0.017 & 0.889$\\pm$0.027 & 0.793$\\pm$0.030 \\\\\n Modality Denoise. & 0.798$\\pm$0.077 & 0.834$\\pm$0.029 & 0.780$\\pm$0.058 & 0.829$\\pm$0.056 \\\\\n Odd Segment & 0.812$\\pm$0.079 & 0.890$\\pm$0.015 & 0.901$\\pm$0.014 & 0.861$\\pm$0.015 \\\\\n Tripet Loss & 0.749$\\pm$0.065 & 0.822$\\pm$0.013 & 0.917$\\pm$0.022 & 0.893$\\pm$0.036 \\\\ \\hline\n \\end{tabular}\n } \\hspace{0.02cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.897$\\pm$0.053 & 0.822$\\pm$0.025 & 0.789$\\pm$0.122 & 0.959$\\pm$0.005 \\\\\n Random Init. & 0.155$\\pm$0.061 & 0.072$\\pm$0.021 & 0.206$\\pm$0.015 & 0.214$\\pm$0.044 \\\\\n Autoencoder & 0.818$\\pm$0.064 & 0.701$\\pm$0.026 & 0.850$\\pm$0.054 & \\cellcolor[gray]{0.93}0.793$\\pm$0.014 \\\\ \\hline\n Sensor Blend & 0.855$\\pm$0.044 & 0.788$\\pm$0.014 & 0.824$\\pm$0.106 & - \\\\\n Fusion Magnitude & 0.840$\\pm$0.040 & \\cellcolor[gray]{0.93}0.795$\\pm$0.025 & 0.859$\\pm$0.061 & - \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.859$\\pm$0.040 & 0.777$\\pm$0.033 & 0.843$\\pm$0.045 & \\cellcolor[gray]{0.93}0.855$\\pm$0.024 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.863$\\pm$0.045 & 0.788$\\pm$0.028 & 0.860$\\pm$0.060 & 0.770$\\pm$0.032 \\\\\n Temporal Shift & 0.837$\\pm$0.042 & 0.753$\\pm$0.027 & 0.844$\\pm$0.082 & 0.729$\\pm$0.015 \\\\\n Modality Denoise. & 0.835$\\pm$0.050 & \\cellcolor[gray]{0.93}0.797$\\pm$0.029 & \\cellcolor[gray]{0.93}0.864$\\pm$0.061 & - \\\\\n Odd Segment & 0.821$\\pm$0.043 & 0.767$\\pm$0.037 & 0.839$\\pm$0.071 & 0.793$\\pm$0.018 \\\\\n Tripet Loss & 0.845$\\pm$0.044 & 0.789$\\pm$0.027 & \\cellcolor[gray]{0.93}0.860$\\pm$0.059 & 0.769$\\pm$0.022 \\\\ \\hline\n \\end{tabular}\n }\n\\end{table}\n\n\n\n\\begin{table}[htbp]\n \\caption{The effect of fine-tuning modality-agnostic encoder while learning downstream task under $5$-folds cross-validation as evaluated through weighted F-score. See Table~\\ref{tab:kappa_cv_ft} in appendix~\\ref{appendix:kappa_results} for kappa scores.}\\label{tab:cv_ft}\n \\centering\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HHAR} & \\textbf{MobiAct} & \\textbf{MotionSense} & \\textbf{UCI HAR} \\\\ \\hline\n Fully Supervised & 0.844$\\pm$0.090 & 0.917$\\pm$0.017 & 0.960$\\pm$0.007 & 0.951$\\pm$0.025 \\\\\n Random Init. 
& 0.199$\\pm$0.047 & 0.394$\\pm$0.086 & 0.284$\\pm$0.086 & 0.268$\\pm$0.208 \\\\\n Autoencoder & 0.891$\\pm$0.049 & 0.914$\\pm$0.019 & 0.961$\\pm$0.010 & 0.936$\\pm$0.051 \\\\ \\hline\n Sensor Blend & 0.893$\\pm$0.062 & 0.919$\\pm$0.011 & 0.964$\\pm$0.011 & \\cellcolor[gray]{0.93}0.949$\\pm$0.036 \\\\\n Fusion Magnitude & 0.885$\\pm$0.054 & 0.918$\\pm$0.011 & 0.961$\\pm$0.013 & 0.942$\\pm$0.039 \\\\\n Feature Prediction & \\cellcolor[gray]{0.93}0.894$\\pm$0.050 & \\cellcolor[gray]{0.93}0.930$\\pm$0.014 & 0.962$\\pm$0.003 & 0.943$\\pm$0.047 \\\\\n Transformations & 0.893$\\pm$0.052 & \\cellcolor[gray]{0.93}0.933$\\pm$0.0126 & \\cellcolor[gray]{0.93}0.968$\\pm$0.007 & 0.949$\\pm$0.033 \\\\\n Temporal Shift & 0.885$\\pm$0.055 & 0.920$\\pm$0.014 & 0.941$\\pm$0.012 & 0.915$\\pm$0.050 \\\\\n Modality Denoise. & 0.886$\\pm$0.061 & 0.929$\\pm$0.015 & \\cellcolor[gray]{0.93}0.966$\\pm$0.011 & 0.933$\\pm$0.054 \\\\\n Odd Segment & \\cellcolor[gray]{0.93}0.894$\\pm$0.067 & 0.927$\\pm$0.011 & 0.962$\\pm$0.004 & \\cellcolor[gray]{0.93}0.951$\\pm$0.030 \\\\\n Tripet Loss & 0.856$\\pm$0.055 & 0.904$\\pm$0.020 & 0.957$\\pm$0.006 & 0.944$\\pm$0.044 \\\\ \\hline\n \\end{tabular}\n } \\hspace{0.02cm}\n \\subfloat{\n \\small\n \\centering\n \\begin{tabular}{ccccc}\n Method & \\textbf{HAPT} & \\textbf{Sleep-EDF} & \\textbf{MIT DriverDb} & \\textbf{WiFi CSI} \\\\ \\hline\n Fully Supervised & 0.897$\\pm$0.053 & 0.822$\\pm$0.025 & 0.789$\\pm$0.122 & 0.959$\\pm$0.005 \\\\\n Random Init. & 0.155$\\pm$0.061 & 0.072$\\pm$0.021 & 0.206$\\pm$0.015 & 0.214$\\pm$0.044 \\\\\n Autoencoder & 0.883$\\pm$0.059 & 0.764$\\pm$0.028 & 0.804$\\pm$0.132 & \\cellcolor[gray]{0.93}0.911$\\pm$0.032 \\\\ \\hline\n Sensor Blend & \\cellcolor[gray]{0.93}0.892$\\pm$0.052 & 0.801$\\pm$0.020 & 0.793$\\pm$0.149 & - \\\\\n Fusion Magnitude & 0.884$\\pm$0.051 & \\cellcolor[gray]{0.93}0.808$\\pm$0.023 & 0.788$\\pm$0.148 & - \\\\\n Feature Prediction & 0.893$\\pm$0.055 & 0.794$\\pm$0.031 & 0.795$\\pm$0.143 & 0.857$\\pm$0.040 \\\\\n Transformations & \\cellcolor[gray]{0.93}0.896$\\pm$0.051 & \\cellcolor[gray]{0.93}0.801$\\pm$0.029 & 0.806$\\pm$0.127 & 0.805$\\pm$0.051 \\\\\n Temporal Shift & 0.890$\\pm$0.052 & 0.781$\\pm$0.027 & 0.805$\\pm$0.133 & 0.758$\\pm$0.048 \\\\\n Modality Denoise. & 0.882$\\pm$0.051 & 0.796$\\pm$0.028 & \\cellcolor[gray]{0.93}0.858$\\pm$0.051 & - \\\\\n Odd Segment & 0.888$\\pm$0.048 & 0.778$\\pm$0.035 & \\cellcolor[gray]{0.93}0.849$\\pm$0.050 & \\cellcolor[gray]{0.93}0.854$\\pm$0.032 \\\\\n Tripet Loss & 0.888$\\pm$0.056 & 0.792$\\pm$0.031 & 0.806$\\pm$0.128 & 0.765$\\pm$0.022 \\\\ \\hline\n \\end{tabular}\n }\n\\end{table}\n\n\\section{Impact and Limitations}\n\\label{sec:impact}\nOur \\textit{Sense and Learn} framework shows that it is possible to use unlabeled data, in addition to smaller amounts of labeled data, when learning features for varied classification problems. We believe our method is useful in practice, where obtaining labeled data is difficult and costly. Since the same approach, with a fixed neural network structure, provides gains for quite different application areas, ranging from activity recognition to sleep stage scoring, we also believe the method is applicable in practice. While it is true that a practitioner cannot be certain which self-supervised task will work best for a new application, the range of experiments we present should provide a valuable starting point as to which tasks are most promising. Moreover, our fine-tuning experiments (Table~\\ref{tab:ft}) show that e.g. 
the Transformations task provides significant gains across all datasets even when using all available supervised data. Finally, self-supervised tasks do not require any labels while learning the representations, which opens up the possibility of using our framework for on-device Federated Learning~\\cite{bonawitz2019towards}, where the sensor data never leaves the users' device (e.g., smartphone).\n\nSelf-supervised learning provides a scalable, inexpensive, and data-efficient way to learn high-level features with deep neural networks without requiring strong labels, which could be unclear, noisy or limited for many real-world problems. However, these approaches have limitations that also apply to our methodology. First, deep neural networks are prone to learning via shortcuts by exploiting low-level cues in the input, e.g. object textures and other local artifacts in image classification~\\cite{geirhos2020shortcut}. Such unintended cue learning is not limited to supervised methods but is a problem for self-supervised methods too, as networks can use shortcuts to solve a proxy task without learning anything useful (e.g. chromatic aberration in vision models~\\cite{nathan2018improvements}). For time-series or multisensor inputs, discovering whether a model relies on shortcuts is an unsolved problem, and such shortcuts could be challenging to detect. Second, as getting access to large unlabeled and labeled sensory datasets is difficult, evaluating how auxiliary tasks will perform on non-curated data or when learning in an open-world environment needs further exploration. Third and last, interpretability and understanding of the decision mechanisms of deep models is another open area of research for addressing issues of model uncertainty, bias and fairness. The features learned with a deep network may not be directly interpretable, but we think that unifying shallow models using hand-crafted features with deep networks consuming raw input through knowledge distillation~\\cite{hinton2015distilling} might shed light on the importance of certain features.\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion}\nWe proposed a self-supervised framework for multisensor representation learning from unlabeled data produced by omnipresent sensors. To realize the vision of unsupervised learning for sensing systems and IoT in general, we developed eight novel auxiliary tasks that acquire their supervision signal directly from the raw input, without any human involvement. The defined proxy objectives are utilized to learn general and effective deep models for a wide variety of problems. Through extensive evaluation on eight publicly available datasets from four application domains, we demonstrate that the self-supervised networks learn useful semantic representations that are competitive with fully-supervised models (i.e. trained end-to-end with labeled data). In summary, we demonstrated that the straightforward and computationally inexpensive surrogate tasks perform well on downstream tasks of interest, as shown by learning a linear classifier on top of frozen feature extractors. We further showed that fine-tuning a pre-trained modality-agnostic encoder improves the detection rate of a network. As the key objective of leveraging unannotated data is to reduce the labeled data required for the end-tasks, we have also shown that our approach significantly improves the performance in the low-data regime.
In particular, with as few as $5$ to $10$ labeled examples per class, the self-supervised initialized networks achieve an F-score between $0.70$-$0.80$. Furthermore, we examined the effectiveness of learned representations in an unsupervised transfer setting with linear separability analysis and semi-supervised learning, achieving much better results than training from scratch. \n\nWhile in this work, we individually evaluate the quality of learned features for each auxiliary task, an interesting direction for future research is to jointly solve these problems in a multi-task setting, in order to learn more discriminative features. Likewise, an important area of investigation is to utilize the proposed tasks in a large-scale federated learning setting on distributed data. We believe this will truly highlight the potential of self-supervision for continual on-device (e.g., smartphones) learning and improving personalization. Finally, the general nature of our methodology offers the opportunity for leveraging self-supervision in other application areas, where labeled data accumulation is naturally difficult, such as arrhythmia detection.\n\n\\section*{Acknowledgements}\nThe authors would like to thank F\u00e9lix de Chaumont Quitry, Marco Tagliasacchi and Richard F. Lyon for their valuable feedback and help with this work.\nVarious icons used in the figure are created by Sriramteja SRT, Berkah Icon, Ben Davis, Eucalyp, ibrandify, Clockwise, Aenne Brielmann, Anuar Zhumaev, and Tim Madle from the Noun Project.\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nJust like with natural images, deep convolutional neural networks (CNNs) have shown impressive results for the classification of various diseases in medical images \\cite{rajpurkar2017chexnet}, \\cite{10.3389\/fmed.2019.00264}, \\cite{campanella2019clinical}. CNNs have also been used on histopathology images for tasks such as screening pre-cancerous lesions and localizing tumors \\cite{spanhol2016breast}, as well as predicting mutations \\cite{coudray2018classification}, survival \\cite{zhu2017wsisa}, and cancer recurrence \\cite{xu2019deep}\\cite{ye2018hybrid}\\cite{mkl}.\n\nThough CNN based algorithms on histopathology images have produced promising results, these algorithms lack interpretability. Localization and visualization algorithms in CNNs such as guided-backpropagation \\cite{springenberg2014striving}, grad-CAM \\cite{selvaraju2017grad}, and other CAM-related techniques fail to produce informative visualization for histopathology images. For instance, these techniques do not highlight cell nuclei responsible for the diagnosis and relevant features of the tumor microenvironment to further our understanding of disease and treatment mechanisms. Also, often CNNs are not able to highlight the relevant portions of the macro environment of the tumor due to large sizes (giga-pixels) of the whole-slide images. \n\nMorphological features of nuclei and the spatial relationships between them decide the diagnosis of histopathology slide. Representing histopathology images in the form of graphs can help capture the interaction between nuclei and the spatial arrangement of the relative positions with each other. Nuclei are represented as nodes of a graph and the distance between the nuclei can be described as edges between nodes of a graph\\cite{gadiya2019histographs}. 
This representation of histopathology images as graphs can be fed to graph convolutional networks (GCNs) to learn the characteristics of tissue at the macro-environment level.\n\nTaking the idea of using GCNs on graphs extracted from histology images further in this work, we propose to use an attention-based architecture and an occlusion-based visualization technique to highlight informative nuclei and inter-nuclear relationships. Our visualization results for the classification of disease states in breast and prostate cancer datasets agree satisfactorily with the pathologists' observations of the relevance of various inter-nuclear relationships. Our technique paves the way for the visualization of previously unknown features relevant to more important problems such as prognosis and prediction of treatment response.\n\n\\section{Related Work}\nBefore the emergence of deep learning, processing of histopathology images as graphs was explored in various ways. Weyn et al.~\\cite{weyn1999computer} represent a histopathology image as a minimum spanning tree for the diagnosis of mesotheliomas. They use k-nearest neighbors for the classification of minimum spanning trees. Similarly, Cigdem et al.~\\cite{demir2005augmented} form a graph from a histopathology image by considering a cluster of nuclei as a node and connecting nodes with binary edges. A multi-layer perceptron is used for the detection of inflammation in brain biopsies. Cell-graphs~\\cite{yener2016cell} use nuclei as nodes together with heuristic node and edge features to perform classification on breast cancer and brain biopsy datasets.\n\n\\begin{figure*}\n\\centering \n\\includegraphics[width=0.9\\linewidth]{image_nucleus_graph\/rsf.jpg}\n\\caption{Proposed graph convolutional neural network with an attention layer.}\n\\end{figure*}\nThough the above-mentioned methods form graphs from histopathology images, they use classical machine learning approaches such as support vector machines (SVM), k-nearest neighbors (kNN), etc. Recent developments in deep learning for graphs have enabled the use of GCNs on graphs derived from histopathology images. Kipf et al.~\\cite{kipf2016semi} report impressive results for node classification on various graph datasets such as Citeseer, Cora, Pubmed and NELL. They used spectral graph convolution to operate on homogeneous graphs. Other lines of work in GCNs operate in the spatial domain, which enables these algorithms to analyze heterogeneous graphs as well. Such et al.~\\cite{such2017robust} introduced a graph convolutional algorithm in the spatial domain. This method achieves excellent performance on various graph datasets. CGC-Net~\\cite{zhou2019cgc} uses a variant of GraphSage~\\cite{hamilton2017inductive} to identify the grade of a prostate cancer slide represented as a graph. Recently, GCNs have been applied to graphs of nuclei in histopathology images with classification accuracy that is on par with CNNs~\\cite{gadiya2019histographs}.\n\nA large portion of the medical community is skeptical about the deployment of deep learning in histopathology due to the lack of transparency in how it works. Some attempts have been made to make deep learning more explainable. For instance, attention-based multiple instance learning~\\cite{ilse2018attention} frames the classification of histopathology images as a weakly supervised problem and assigns weights to patches of a large image.
This method produces an attention map for histopathology images to highlight patches important for the classification of the overall slide, but it cannot be scaled to giga-pixel images because of its substantial computation requirements. Visualization in the form of clustering and heatmaps was presented in \\cite{coudray2018classification}, but insightful interpretations beyond the highlighting of the tumor regions cannot be derived through these visualizations. Not only does interpretable visualization in general for histopathology images remains an open problem, to our knowledge, visualization for histopathology images through graph representation has also not been explored yet.\n\n\\section{Datasets and Methodology}\nIn this section, we describe the datasets and methodology used.\n\n\\subsection{Datasets}\nIn order to test the ability of the proposed method to highlight interpretable features automatically, we used two datasets for which we knew the features that were expected to be seen by the pathologists. The first dataset is from ICIAR2018 Grand Challenge on Breast Cancer Histology images (BACH) \\cite{aresta2019bach} and it comprises of 400 histopathology images of breast cancer. Each image of this dataset is of the size of 2048 x 1536 pixels. The original BACH dataset contains four classes, viz. normal, benign, in-situ and invasive. We trained a GCN to perform the binary classification task between invasive and in-situ classes because these two differ in the spatial arrangement of nuclei even though the nuclei themselves share similar morphologies. We used PyTorch package for our simulations.\n\nGleason grade classification and visualization tasks were also performed on a prostate cancer dataset~\\cite{arvaniti2018automated}. This dataset consists of a total of 1506 images for various prostate cancer tumor grades. Experiments were carried out for binary classification between Gleason grade 3+3 (primary+secondary) versus Gleason grade 4+4 or 4+5.\n\n\\subsection{Graph construction from Hematoxylin and eosin stain (H\\&E) stained images}\nWe have used a UNet \\cite{ronneberger2015unet} based model for detecting the nuclei. Edge features are based on the inter-nucleus distance. We measure the distance between two nuclei as $$dist(i,j) = \\sqrt{(x_i-x_j)^2 +(y_i-y_j)^2}$$, where $(x_{i}, y_{i})$ are the co-ordinates of nucleus $n_{i}$.\nWe form an edge between two nodes i and j, $A_{i,j}$ if their inter-nuclei distance is less than 100 pixels and assign the following weight to the resultant edge in the adjacency matrix (A):\n\n\\begin{equation}\n A_{i,j} = 1-\\frac{dist(i,j)}{100}\n\\end{equation}\n\\begin{figure*}\n\\centering \n\\subfigure[Whole slide tissue image]{\\label{fig:a}\\includegraphics[width=0.3\\linewidth]{image_nucleus_graph\/is10.jpg}}\n\\subfigure[Detected nucleus mask]{\\label{fig:b}\\includegraphics[width=0.3\\linewidth]{image_nucleus_graph\/is010.jpg}}\n\\subfigure[Generated graph]{\\label{fig:c}\\includegraphics[width=0.3\\linewidth]{image_nucleus_graph\/is010_tif.jpg}}\n\\caption{Graph formation: We start with a histopathology image, detect all nuclei using a U-Net, and construct a graph by linking pairs of nuclei closer than a distance threshold.}\n\\label{combination}\n\\end{figure*}\n\n\\subsection{Robust spatial filtering (RSF)}\nOur GCN was adapted from robust spatial filtering (RSF) \\cite{such2017robust}. For a graph $G(V, E)$, $V$ is the set of vertices and $E$ is the set of edges and $N$ is the number of nodes. 
Each vertex and edge can have multiple features.The numbers of features for a vertex and an edge are $F$ and $L$ respectively. The above arrangement allows the set $V$ and $E$ to be represented as tensors such as $V \\in \\mathcal{R}^{ N\\times F}$ and $E \\in \\mathcal{R}^{N\\times N \\times L}$ respectively. In RSF, the convolution operation on graphs is given by the following equation:\n\\begin{equation} \\label{eq1}\n \\begin{split}\n V_{conv} &= \\sum_{i=1}^{F} H^i V_{in}^i + b, \\\\\n \\text{where, } H^i &= h_0^i + \\sum_{j=1}^{L} h_j^i A_j\n \\end{split}\n\\end{equation}\n\\\\\nwhere, $h_i^j$ and b are learnable parameters and $A_j$ represents the ${j^{th}}$ edge feature of adjacency matrix. Multiple such filters are used to learn $F^{'}$ vertex features. In RSF, the graph adjacency matrix is not transformed into the spectral domain. Hence the computationally heavy operation of inversion of the Laplacian matrix is avoided. \n\nFor pooling operation, $V_{emb}^{'} \\in \\mathcal{R}^{N \\times N'}$ is derived from the input graph with $V_{in} \\in \\mathcal{R}^{N \\times F}$ and $A \\in \\mathcal{R} ^{N\\times N \\times L}$. This operation is similar to convolution operation given in Equation \\ref{eq1}. Further, $V_{out} \\in \\mathcal{R}^{N' \\times F}$ and $A_{out} \\in \\mathcal{R}^{N' \\times N' \\times F}$ with $N' < N$ is obtained by,\n\\begin{equation}\n \\begin{split}\n V_{emb} &= Softmax(V) \\\\\n V_{out} &= V_{emb}^T V_{in} \\\\\n A_{out} &= V_{emb}^TA_{in}V_{emb}\n \\end{split}\n\\end{equation}\n\\subsection{RSF with edge convolutions (RSF+Edge) }\nThe convolutional layer in RSF convolves vertex features of neighbor vertices to learn enhanced vertex features. This operation does not exploit the edge features directly. Gadiya et al. \\cite{gadiya2018some} proposed a method to learn enhanced vertex as well as edge features. Edge convolutional is performed as per the following equation:\n\\begin{equation}\n A_{out} = \\phi ( W X )\n\\end{equation}\nwhere $W$ is tensor of learnable parameters and $X$ is obtained by concatenating edge and vertex features of a node and $\\phi$ is a monotonic nonlinear activation function.\n\\subsection{Robust Spatial Filtering with Attention (RSF+Attention)}\nWe conjectured that an attention mechanism could help rank the graph vertices in their relative order of importance. Attention mechanism is used in neural networks extensively for natural language processing and to a lesser extent for computer vision tasks \\cite{xu2015show,ilse2018attention}. In our work, the attention layer was included before the first pooling operation at the input to highlight important nuclei directly, as shown in Figure 1.\n\\subsection{Visualization}\nFor the proposed model (RSF+Attention), we used the attention scores for visualization of the importance of individual nuclei. For the models that lacked an attention mechanism, given a trained model $\\mathcal{M}$ and a graph $G$, we rank all the nodes based on the drop in classification probability in a manner similar to \\cite{zeiler2014visualizing}. To get a more discernible drop in accuracy, for every node all the 1-hop neighbors along with their edges were also occluded. Occlusion of a node $n_{i}$ creates a new graph $G_{n_{i}}$ . Classification probability is computed for the occluded graph. The relative drop in probability for the nodes $n_{i}$ gives a measure $score_{i}$ for the importance of each node. We also tested 2-hop and 3-hop occlusion but the results were similar to those of 1-hop. 
Formally, $score_{i}$ for node $n_{i}$ can be given as,\n\n\\begin{equation}\n score_{i} = p(\\mathcal{M}(G)) - p(\\mathcal{M}(G_{n_{i}})) \n\\end{equation}\n\n\\section{Experiments and Results}\nIn this section, we show graphs formed from histology images, classification accuracy of using various GCN architectures, and visualization of highlighted nuclei.\n\n\\begin{table*}\n\\begin{tabular}{llll}\n\\hspace{2cm}\nOriginal image \\hspace{1.5cm} & Detected nucleus map \\hspace{1.7cm} & RSF+edge \\hspace{1.8cm} & RSF+attention \n\\end{tabular}\n\\end{table*}\n\\ExplSyntaxOn\n\\NewDocumentEnvironment{places}{mm}\n \n \\setlength{\\tabcolsep}{2pt}\n \\dim_set:Nn \\l_places_width_dim\n {\n (#1-\\ht\\strutbox-\\dp\\strutbox-2pt)\/(#2)\n }\n \\begin{tabular}{r @{\\hspace{2pt}} *{#2}{c}}\n }\n {\n \\end{tabular}\n }\n\\NewDocumentCommand{\\place}{mm}\n \n \\seq_set_from_clist:Nn \\l_places_images_in_seq { #2 }\n \\seq_set_map:NNn \\l_places_images_out_seq \\l_places_images_in_seq { \\places_set_image:n {##1} }\n \\seq_put_left:Nn \\l_places_images_out_seq\n {\n \\begin{tabular}{c}\\rotatebox[origin=c]{90}{\\strut#1}\\end{tabular}\n }\n \\seq_use:Nn \\l_places_images_out_seq { & } \\\\ \\addlinespace\n }\n\\dim_new:N \\l_places_width_dim\n\\seq_new:N \\l_places_images_in_seq\n\\seq_new:N \\l_places_images_out_seq\n\\cs_new_protected:Nn \\places_set_image:n\n {\\makebox[\\l_places_width_dim]\n { \\begin{tabular}{c}\n \\includegraphics[\n width=\\l_places_width_dim,\n height=\\l_places_width_dim,\n keepaspectratio,\n ]{#1}\n \\end{tabular}\n }\n }\n\\ExplSyntaxOff\n\\begin{figure*}\n\\centering\n\\begin{places}{0.9\\textwidth}{4}\n\\place{In-situ }{\n image_nucleus_graph\/is10.jpg,results\/nucleus\/is010_tif.jpg,results\/edge\/aa.png,results\/attention\/a1.png\n}\n\\place{Invasive}{\n results\/invasive\/original.png,results\/nucleus\/iv002_tif.jpg,results\/invasive\/edge.png,results\/invasive\/attention.png\n}\n\\place{Gleason 3}{results\/gleason\/ori_3.jpg,results\/nucleus\/g.jpg,results\/gleason\/mask_3_vertex.png,results\/gleason\/gleason3_at.png}\n\\place{Gleason 4}{results\/gleason\/ori_4.jpg,results\/nucleus\/z.jpg,results\/gleason\/mask_4_vertex.png,results\/gleason\/gleason4_at.png}\n\\end{places}\n\\caption{Comparison of visualization of RSF+edge and the proposed RSF+attention: Using RSF+attention, while nuclei on gland boundary are relatively more highlighted in the in-situ breast cancer (first row), all cancerous nuclei are highlighted in invasive breast cancer (first row). Similarly, the gland shapes are prominently highlighted in Gleason 3 prostate cancer (third row) as opposed to all cancer cells being highlighted in Gleason 4 prostate cancer (bottom row). The scale on the right shows color scale for the relative importance of nuclei.}\n\\label{Insitu(1)}\n\\end{figure*}\n\n\\subsection{Graphs from H\\&E stained histopathology images} \nEach image produces a graph with a different number of nodes. For BACH and prostate cancer Gleason grade datasets, the average number of nodes in a graph was 1546 and 613, respectively. Figure \\ref{combination} shows an example of transforming H\\&E stained histopathology image to a graph. 
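\n\nFor completeness, the following is a minimal sketch of the graph-construction step described above; it assumes the nucleus centroids have already been extracted from the U-Net mask, and the handling of self-loops is an illustrative choice rather than a detail of our implementation.\n\\begin{verbatim}
import numpy as np

def build_nucleus_graph(centroids, threshold=100.0):
    # centroids: array of shape (N, 2) holding (x, y) nucleus positions
    # returns: weighted adjacency matrix where nuclei closer than
    # `threshold` pixels are linked with weight 1 - dist(i, j) / threshold
    centroids = np.asarray(centroids, dtype=float)
    diff = centroids[:, None, :] - centroids[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    adjacency = np.where(dist < threshold, 1.0 - dist / threshold, 0.0)
    np.fill_diagonal(adjacency, 0.0)  # drop self-loops
    return adjacency
\\end{verbatim}\n\n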
\n\n\\subsection{Classification of breast and prostate cancers}\n\\begin{table}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{RSF} & \\textbf{RSF + Edge} & \\textbf{RSF + Attention} \\\\ \\hline\nVertex Conv 1 & Vertex Conv 1 & Vertex Conv 1 \\\\\nVertex Conv 2 & Vertex Conv 2 & Vertex Conv 2 + Attn \\\\ \\hline\nPooling 1 & Pooling 1 & Pooling 1 \\\\ \\hline\n & Edge Conv 1 & \\\\\nVertex Conv 3 & Vertex Conv 3 & Vertex Conv 3 \\\\\n & Edge Conv 2 & \\\\ \\hline\nPooling 2 & Pooling 2 & Pooling 2 \\\\ \\hline\n & Edge Conv 3 & \\\\\nFC - 1 & FC - 1 & FC - 1 \\\\\nFC - 2 & FC - 2 & FC - 2 \\\\\nFC - 3 & FC - 3 & FC - 3 \\\\ \\hline\n\\end{tabular}\n\\caption{The architectures of the implemented techniques}\n\\label{architecture}\n\\end{table}\n\\begin{table}\n\\centering\n\\begin{tabular}{\n>{\\columncolor[HTML]{FFFFFF}}c |\n>{\\columncolor[HTML]{FFFFFF}}c |\n>{\\columncolor[HTML]{FFFFFF}}c }\n\\hline\n\\textbf{Model} & \\textbf{BACH} & \\textbf{Gleason} \\\\ \\hline\nRSF Based & 94\\% & 97\\% \\\\\nRSF + Edge & 92\\% & 97\\% \\\\\nRSF + Attention & 90\\% & 97\\%\n\\end{tabular}\n\\caption{Accuracy results for the different techniques}\n\\label{results}\n\\end{table}\nWe trained the three models described in the previous section, viz. robust spatial filtering (RSF), robust spatial filtering with edge convolution (RSF+Edge), and robust spatial filtering with attention (RSF+Attention). All models were trained for approximately 50 epochs with a learning rate of 0.01 using the Adam optimizer. The architectures of the three models are given in Table \\ref{architecture}. Table \\ref{results} shows that the classification accuracies of the three models were quite comparable. All the models contained nearly 300,000 parameters.\n\n\\subsection{Visualization} \nWe now present the visualizations produced by the occlusion and attention mechanisms. We performed occlusion experiments on the predictions of the RSF and RSF+Edge models on the breast and prostate cancer datasets. The visualizations produced by these two models were nearly the same, so we have omitted the results of the former due to space constraints. The images in the first row of Figure \\ref{Insitu(1)} correspond to the in-situ subtype of breast cancer from the BACH dataset. We can see that nuclei on the outer layer of the gland are highlighted by the occlusion experiments. Also, in the second row, which corresponds to the invasive class in the BACH dataset, nearly all the nuclei are highlighted. Outer gland linings are crucial for in-situ classification, whereas invasive cancer is spread across the entire region. These are the characteristics of in-situ and invasive histologies that are correctly captured by the occlusion and attention experiments. In the last two rows, visualization results for the prostate cancer Gleason grade dataset are shown. In these images, the nuclei of glands that lose their structure are highlighted, as we expected them to be. The images in the last column of Figure \\ref{Insitu(1)} are the visualization results from the RSF+Attention model. These results were verified by expert pathologists and are visibly better at highlighting the above-mentioned features.
The presented results provide a way to explain the new patterns the deep learning models found on the tissue images. The proposed techniques not only open a path for the verification of the existing practices in pathology but suggest a way to generate new knowledge on where to focus to find meaningful differences between tissue classes, for example, those that may have different disease or treatment outcome.\n\n\\section*{Acknowledgment}\nAuthors would like to thank Nvidia Corporation for donation of GPUs used for this research.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nStar and planet formation processes both give rise to objects in the $\\sim$1 to 20 M$_{Jup}$ range\n(e.g., Luhman et al. 2005; Bakos et al. 2009; Marois et al. 2010; Joergens et al. 2013).\nNaively, objects of higher mass are typically assumed to form primarily via the star formation process and\nobjects of lower mass are assumed to form primarily from a proto-planetary disk. This\nsimplification is directly testable with a variety of approaches, both\ntheoretical and observational. One straightforward observational approach\nis the study of multiplicity: do brown dwarfs come in pairs as frequently and\nwith the same binary properties as stars? Although trends are suggestive\namong stellar visual binaries, i.e. decreasing frequency with later spectral types\n(e.g., Siegler et al. 2003), the frequency of short-period brown dwarf binaries is\nrelatively uncertain (Burgasser et al. 2012).\n\n\nBinary parameter space is multi-dimensional. For a given \nspectral type, binaries may be examined in terms of companion frequency, separation,\nmass ratio distribution, or secondary mass distribution. Mazeh et al. (2003),\nfor example, showed that for main sequence M stars the secondary mass distribution\ndoes not conform to a standard initial mass function (IMF) but instead follows a\nrelatively flat distribution for a primary sample\nwith M$\\sim$0.7 $\\pm$0.1 M$_{\\odot}$ and orbital period P$<$3,000 days.\nThis represents one small slice of a parameter space that may also be studied\nin diverse populations: young, intermediate age, old,\nmetal-poor, clustered, etc. Specific comparisons between these samples provide a wealth\nof diagnostics for understanding the similarities and differences in formation\nand evolution of distinct groups of objects.\n\nIn a sample of 454 F6$-$K3 solar-type stellar systems within 25pc of the Sun,\nRaghavan et al. (2010) found a total fraction of binaries and higher order\nmultiples of 44\\%\\footnote{Thus, although the majority of these {\\it systems} may not\nbe multiple, the majority of the {\\it stars} studied reside in multiple systems,\nas previously concluded (e.g., Abt \\& Levy 1976; Duquennoy \\& Mayor 1991).}.\nAmong other results, they reconfirm the existence of the brown dwarf desert,\nthe pronounced dearth of brown dwarf mass (i.e. $\\sim$10$-$80 M$_{Jup}$) companions\nto stars in orbits with periods less than a few years (e.g., Grether \\& Lineweaver 2006;\nMetchev \\& Hillenbrand 2009). \n\nIs the brown dwarf desert the result of dynamical evolution preferentially\nimpacting lower mass companions (e.g., Reipurth \\& Clarke 2001; Armitage \\& Bonnell 2002)\nor does it have more to do with poorly understood\nbarriers to the formation of tightly bound companions of brown dwarf mass? \nIn a radial velocity (RV) survey with a few hundred m\/s precision of $>$100 stars in the Taurus star\nforming region (Crockett et al. 
2012), no brown dwarf companions to 1$-$3 Myr old\nstars have been observed (Mahmud et al. 2015, in prep), indicating that the existence of the\ndesert is more likely related to formation than dynamical evolution in\norigin. Among 1$-$10 Myr old populations, to date only one brown dwarf-brown dwarf\nshort-period ($\\la$1 year) spectroscopic binary pair has been identified (Stassun et al. 2006).\nJoergens et al. (2010) found an orbital period of 5.2 years\nfor the young Chameleon brown dwarf binary Cha~H$\\alpha$8 and Joergens (2008) estimates a period\nof $>$12 years for the pair CHXR74. However, the results of Stassun et al. and Joergens are\nbased, respectively, on a survey for eclipsing systems and on a relatively small sample and thus are\nlikely incomplete. Joergens estimates a binary frequency among very low mass young objects\nof $10^{+18}_{-8}$\\%.\n\nAmong intermediate age brown dwarf spectroscopic binaries, Basri \\& Mart\\'{i}n (1999)\nfound the first brown dwarf pair, PPL 15, a 5.8 day period system, in a study of the\nPleiades. Simon et al. (2006) studied the 100 Myr old brown dwarf\nbinary GL~569B (Zapatero Osorio et al. 2004),\na pair with a $\\sim$2.4 year orbit, a semi-major axis of 0.89~AU.\nThey detected some evidence for a third spectroscopic component in the system, yet to be confirmed.\n\nRV surveys among field brown dwarfs, sensitive to binaries with periods\nof several years or less and semi-major axes of a few AU, have yielded a handful of definitive detections. \nIn a sample of 59 field brown dwarfs, Blake et al. (2010) found a tight binary (a$<$1~AU) frequency\nof $2.5^{+8.6}_{-1.6}$\\%. They had previously identified and measured the orbit of the $\\sim$247 day\nperiod system 2MASS 0320-04 (Blake et al. 2008), independently identified on the basis of spectral analysis by Burgasser et al. (2008),\ncombined their RV measurements with the astrometry of Dahn et al. (2008)\nfor the $\\sim$607 day period system LSR J1610-0040, and presented RV data on two wide\nsubstellar pairs with periods of $>$10 years (2MASS J15074769-1627386 and 2MASS J07464256+2000321).\nZapatero Osorio et al. (2007) measured space motions for over a dozen field\nbrown dwarfs but found no spectroscopic pairs in their sample.\nBurgasser et al. (2012) presented a solution for the spectroscopic orbit of\nthe 148 day period pair SDSS J000649.16-085246.3AB, in common proper motion with\nthe very low mass star LP 704-48, with M9 and T0 components straddling the substellar limit.\nAlthough Basri \\& Reiners (2006) indicate an overall spectroscopic binary fraction for\nfield brown dwarfs and very low mass stars of 11\\% in their RV survey of 53 targets,\nonly three L dwarfs in their sample show some level of RV variability. Of these three,\n2MASS J15065441$+$1321060 was subsequently shown by Blake et al. to be non-variable and\n2MASS J15074769-1627386 and 2MASS J07464256+2000321 are long-period systems identified as\nbinaries with imaging observations. Other brown dwarf pairs have been identified\nwith imaging (e.g., Lodieu et al. 2007), microlensing (e.g., Bennett et al. 2008), and astrometry\n(e.g., Sahlmann et al. 2013). Spectral brown dwarf binaries, systems that appear single in light of\nexisting data but are spectroscopically peculiar, implicating the possible presence of a companion, may be\nnumerous. Bardalez Gagliuffi et al. (2014) have identified 50 candidates, although Artigau et al. (2009)\nand Radigan et al. 
(2013) identified cases in which the brown dwarf binary candidates were instead found to have\nheterogeneous cloud covers. Thus it is unlikely that all spectral binaries have $<$1 year orbits, but two have been\nconfirmed as such to date (Blake et al. 2008; Burgasser et al. 2012).\n\nNo short period ($<$100 days) field brown dwarf spectroscopic binaries are known, but one\nintermediate age and one young system with periods of just a few days were identified by\nBasri \\& Mart\\'{i}n (1999) and Stassun et al. (2006), respectively. Short period systems ought to be\nthe most straightforward to identify; however,\nRV surveys for brown dwarf multiples require the world's biggest\ntelescopes and generous time allocations, challenging to obtain, and are fraught with bias. \nYet without such work our understanding of substellar multiplicity is\nskewed towards the anecdotal, and astronomers' grasp of the basis for planetary\nmass companion formation is isolated from the context of brown dwarf and stellar\nmass companion formation.\n\nWe report here on 11 years of dynamical observations of over two dozen\nfield brown dwarfs taken at high spectral resolution in the near-infrared (IR) at the Keck II telescope. \nThe intrinsic faintness of brown dwarfs, particularly the late L and T types, presents a challenge to\nhigh-resolution spectroscopic observations. However, this is the only method by which we may\nderive the RV data necessary for calculating space motions, and hence\npossible moving group or cluster membership, and the telltale RV variability of a short-period\nbinary. Measurements of $v \\sin i$ provide a lower limit on rotational velocity, crucial for understanding angular\nmomentum evolution. Ultimately, with sufficiently precise data, the combination of RV versus phase together with\nthe angularly resolved orbits for the few year or few tens of year period systems will yield the\nabsolute component masses of brown dwarfs in binaries, invaluable for furthering our understanding\nof brown dwarf structure and evolution (Konopacky et al. 2010; Dupuy et al. 2010). Because brown dwarfs emit the\nbulk of their energy at wavelengths greater than $\\sim$1~$\\mu$m, IR spectroscopy provides the\nbest approach for their RV measurements.\n\nOur goals were to identify brown dwarfs in spectroscopic\nbinary systems and to measure the dynamical properties of any such pairs\ndiscovered. This project to identify short-period\nbrown dwarf multiples is the latest contribution to the NIRSPEC Brown Dwarf Spectroscopic Survey\n(BDSS; McLean et al. 2001, 2003, 2007; McGovern et al. 2004; Rice et al. 2010) and leverages over a\ndecade of observations to characterize brown dwarfs at high spectral resolution.\nWe find that a critical factor in a productive survey hinges on the RV precision; sensitivity to\nRV variability scales rapidly with this parameter. \nWe describe our sample, observations, and data reduction in \\S 2 and discuss our\ndata analysis in \\S 3. Section 4 provides a discussion of our results and we briefly\nsummarize our work in \\S 5.\n\n\n\\section{Observations and Data Reduction}\n\nTargets were selected for a range of spectral types across the span of late M, L, and T dwarfs and on the basis of \nmagnitude ($J\\la15$ mag) and accessibility from the Keck Observatory ($\\delta\\ga-30^{\\circ}$). 
\nThe complete target list and observing log appears in Table 1 which lists the object name (column 1),\nRight Ascension and Declination (columns 2 and 3), spectral type (column 4),\n$2MASS$ J magnitude (column 5), reference for discovery paper (column 6), and the UT dates of observation (column 7).\n\nObservations were carried out with the high-resolution, cross-dispersed echelle mode\nof the facility, near-infrared, cryogenic spectrograph NIRSPEC (McLean et al. 1998, 2000)\non the Keck II 10 m telescope on Mauna Kea, Hawaii. The NIRSPEC \nscience detector is a 1024 $\\times$ 1024 pixel ALADDIN InSb array; a 256 $\\times$ 256\npixel HgCdTe array in a slit viewing camera was used for source acquisition.\nThe N3 (J-band) filter with the 12 $\\times$ 0$\\farcs$432 (3-pixel) slit, an echelle angle of 63.00$^{\\circ}$, \nand a grating angle of 34.08$^{\\circ}$ produces a resolving power\nof R = $\\lambda$\/$\\Delta \\lambda$ $\\approx$ 20,000 and nearly \ncontinuous coverage from 1.165$-$1.324$\\mu$m (orders 58$-$65; McLean et al. 2007).\nObservations made on 2000 July 25 and 29 employed the 12 $\\times$ 0$\\farcs$576 (4-pixel) slit, yielding\na resolution of $\\sim$15,000. Internal white-light spectra, dark frames, and arc lamp spectra were obtained for\nflat-fielding, dark current correction, and wavelength calibration.\nScience exposures were made in 600~s nodded AB pairs at two locations along the slit.\n\nAll spectroscopic reductions were made using the REDSPEC package, software produced\nat UCLA by S. Kim, L. Prato, and I. McLean specifically for the analysis of NIRSPEC\ndata\\footnote{See: http:\/\/www2.keck.hawaii.edu\/inst\/nirspec\/redspec.html} as \ndescribed in McLean et al. (2007). Wavelength solutions were determined using the OH night\nsky emission lines in each order; 4$-$5 OH lines across each of the orders used yielded\nwavelength solutions with typical uncertainties of better than 0.4 km~s$^{-1}$.\nThe two spectral orders most favorable for the analysis, 62 for the L dwarfs (1.221 $\\mu$m$-$1.239 $\\mu$m; Figure 1)\nand 59 for the T dwarfs (1.283 $\\mu$m$-$1.302 $\\mu$m; Figure 2), were\nselected independently on the basis of the presence of deep inherent lines in the brown dwarf targets.\nFurthermore, an additional advantage of these particular orders is the absence of terrestrial absorption lines,\nthus avoiding the necessity of division by a featureless telluric standard star.\nThis provided the optimal approach for several reasons: (1) eliminating division by\ntelluric standards maximized the signal-to-noise ratio and avoided the possible introduction of\nslightly offset spectra and potential small shifts in the brown dwarf absorption lines and hence RV measurements,\n(2) focusing on the narrowest and deepest lines available yielded the highest possible RV precision;\nalthough the KI lines in orders 61 and 65 are deep (e.g., McLean et al. 2007), their breadth is unfavorable\nto precision RV measurements through cross-correlation, and (3) selecting orders 62 and 59 further\nguaranteed the best possible RV precision given the regular spacing of the OH night sky emission lines across both\nof these orders, required for a superior dispersion solution; this condition was not met for all orders in our\nJ band setting. Multiple-epoch sequences for the L2 dwarf\nKelu-1 and the peculiar T6p dwarf 2M0937 are shown in Figures 3 and 4, respectively.\n\n\n\n\\section{Analysis}\n\n\\subsection{Radial Velocities}\n\nRice et al. 
(2010) found typical systematic RV uncertainties of 1$-$2 km~s$^{-1}$ for a sample\nobserved with a similar methodology, similar signal to noise, and with some overlap in target\ndata with this paper (Table 2). We thus adopt\na conservative 2 km~s$^{-1}$ internal uncertainty for our RVs.\nWe tested this estimate by cross-correlation of the RV invariant target 2M0036 (Blake et al. 2010), an L4 dwarf,\nfor which we obtained 7 epochs over more than 5 years.\nThe maximum RV shift between epochs was 1.91 km~s$^{-1}$; the standard deviation\nin the RV shift for all epochs was 0.59 km~s$^{-1}$. Thus 2 km~s$^{-1}$ provides a reasonable\nif conservative internal uncertainty on individual RV measurements. \n\nAt least two, and as many as seven, spectra were taken for each of our targets.\nWe tested for radial velocity variability by cross-correlating the\nhighest signal-to-noise spectrum against the spectra from all other epochs for a given target;\nno significant variability was detected (\\S 4).\nTable 2 lists the number of spectra taken for each object (column 2) and the\ntotal number of days spanned by the observations (column 3). RVs\n(column 4) were either taken from Blake et al. (2010) or determined by cross-correlation\nof the highest signal-to-noise spectrum for a particular object with spectra of objects with known RV\n(from Blake et al.) and similar spectral type, sometimes of type both earlier and later than our target.\nThe RVs resulting from cross-correlation of a target with more than one other object\nwere averaged and the standard deviation added in\nquadrature with the internal uncertainties in the radial velocity measurements. This result\nin most cases was dominated by the 2 km~s$^{-1}$ internal uncertainty; however, for a few objects,\nprimarily the fainter and thus lower signal to noise late T dwarfs,\nthis procedure resulted in an uncertainty of 3 km~s$^{-1}$ (Table 2).\n\nWe use the average RV values from Blake et al. (2010) when available because of their unprecedented precision,\nobtained by fitting models to near-IR K-band CO\nbandhead at $\\sim$2.3 $\\mu$m target spectra. The models are composed of synthetic\ntemplate spectra plus observed telluric spectra; the CO bandhead region of the telluric spectrum is rich in deep\nCH$_4$ lines that provide a wavelength dispersion and zero-point reference with a precision as\ngood as a few tens of m\/s. Small iterations of the RV shift between the synthetic photospheric spectra\nand the telluric spectra allow for high accuracy in the target RV measurements.\nWe compared our results with values from other RV studies in the literature, e.g., Basri et al. (2000),\nMohanty \\& Basri (2003), and Zapatero-Osorio et al. (2007). In every case our RVs were comparable\nto other values within 1~$\\sigma$. We provide our results where indicated in Table 2.\n\n\\subsection{Rotational Velocities}\n\nColumn 5 of Table 2 lists the $v \\sin i$ values for our targets. Most of these were taken\nfrom the literature (Basri et al. 2000; Mohanty \\& Basri 2003; Rice et al. 2010; Blake et al. 2010; references given in column 6). \nTo estimate $v \\sin i$ values for the remaining targets, we used visual comparison with objects\nof neighboring spectral types after superimposing the spectra. For some objects we\nconvolved comparison spectra of known $v \\sin i$ with a boxcar kernel in order to produce \nresulting spectra of larger $v \\sin i$ for comparison. 
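A rough sketch of this broadening step is given below, assuming the comparison spectrum has been resampled onto a grid with a nearly constant velocity width per pixel; the wavelength span and line profile are illustrative rather than taken from our data.
\begin{verbatim}
import numpy as np

def boxcar_broaden(flux, delta_v_kms, dv_per_pixel_kms):
    """Convolve a spectrum with a boxcar of full width delta_v_kms."""
    width = max(int(round(delta_v_kms / dv_per_pixel_kms)), 1)
    kernel = np.ones(width) / width
    return np.convolve(flux, kernel, mode="same")

# Synthetic absorption line over roughly the order-62 wavelength span (microns).
wave = np.linspace(1.221, 1.239, 800)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 1.230) / 2e-4) ** 2)
dv = 299792.458 * np.median(np.diff(wave)) / np.median(wave)   # km/s per pixel
broadened = boxcar_broaden(flux, delta_v_kms=30.0, dv_per_pixel_kms=dv)
\end{verbatim}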
This method was approximate and yielded\nuncertainties of 5$-$10 km~s$^{-1}$, based on visual comparisons with objects of known\n$v \\sin i$, for the T dwarfs in our sample. Nevertheless, these \nare the first estimates available for some of the targets and thus provide a useful guide.\n\n\n\\section{Discussion}\n\n\\subsection{Field Brown Dwarf Spectroscopic Multiplicity}\n\nKonopacky et al. (2010) obtained angularly resolved imaging and spectroscopy for each\ncomponent in 24 very low mass stellar and brown dwarf subarcsecond {\\it visual} binaries \ncontributing to eventual measurements of orbital solutions and\ncomponent masses. Our goal was to use high spectral resolution observations to identify\nany RV variability over time that might indicate a {\\it spectroscopic} binary brown dwarf.\nThis requires binaries with orbital periods sufficiently short to measure the component\nmotion at a significant level, i.e. at least several $\\sigma$ above the RV uncertainty. \n\nTo explore our sensitivity to the brown dwarf binary parameter space, given our $\\sim$2~ km~s$^{-1}$ RV precision, \nwe ran a Monte Carlo simulation of 10$^5$ possible binary orbits for each of the 25 objects in our sample, following\nBurgasser et al. (2014). Orbital parameters were\nuniformly distributed in log semi-major axis (10$^{-3}$ to 10$^2$ AU),\nmass ratio (0.8$-$1.0), sine inclination (0$-$1), eccentricity (0$-$0.6; Dupuy \\& Liu 2011), and all\nother orbital parameters (argument of periapsis, longitude of ascending node, and mean anomaly, 0$-$2$\\pi$).\nWe converted our target spectral types to effective temperature using the empirical relation of Stephens et al. (2009),\nand then from effective temperature to mass using the evolutionary models of Burrows et al. (2001), assuming ages of 0.5 Gyr, 1.0 Gyr, and 5 Gyr. \nEach of the 10$^5$ simulated orbits was sampled at the dates given and the primary orbital RV was\ncalculated. A binary detection, for a given semi-major axis bin (0.2 dex), was defined as a system for which a maximum RV\ndifference between all dates was $>$3$\\sigma$, i.e. $>$6~km~s$^{-1}$ given our 2~km~s$^{-1}$ precision.\nThe results are summarized in Figure 5. The most important factor impacting the probability\nof detecting a potential binary was the frequency of observation for a given target (Table 2).\n\nA binary with a separation of $\\la$0.1 AU should in principle be straightforward to detect with an RV precision of 2~km~s$^{-1}$.\nHowever, given our estimated target masses and sampling frequency, and assuming an age of 1.0 Gyr,\nwe could have detected such an orbit only 50\\% of the time for only 12 of the sources in\nour sample (middle left-hand panel of Figure 5). The detection probability for an 0.1 AU binary fails to\nreach 90\\% for {\\it any} of our sources. Using the probabilities of detection for separations greater than\na given threshold $a$, $P(>a)$, as a measure of the effective sample size, $N_{eff}(a) = \\Sigma_i P(>a)$,\nwe find our null result translates into a 1$\\sigma$ upper limit of 18\\% for spectroscopic\nbinaries down to $a=0.1$~AU, based on binomial statistics.\nOnly for systems with separations below 0.01 AU ($\\sim$1 day orbits) could the spectroscopic binary frequency\nof our sample be characterized as relatively rare, i.e. $\\la$10\\%.\n\nThese limits apply when we consider the detectability of individual systems. However, a signature of unresolved multiplicity could also emerge in higher velocity dispersions for the sample as a whole. 
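The orbit draws behind both of these tests can be sketched as follows; the snippet below is a simplified illustration restricted to circular orbits, with a placeholder primary mass and observation epochs rather than entries from Table 2, whereas the full simulation also draws eccentricity and the remaining orbital angles and assigns each target a mass from its spectral type.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
AU_PER_YR_IN_KMS = 4.740571            # 1 AU/yr expressed in km/s

def primary_rv_shifts(m1_msun, dates_yr, n_draws=100_000):
    """Peak-to-peak primary RV over the epochs for random circular orbits."""
    a = 10 ** rng.uniform(-3, 2, n_draws)               # semi-major axis [AU]
    q = rng.uniform(0.8, 1.0, n_draws)                  # mass ratio
    sini = rng.uniform(0.0, 1.0, n_draws)               # sine of inclination
    phase = rng.uniform(0.0, 2.0 * np.pi, n_draws)      # orbital phase at t = 0
    period = np.sqrt(a**3 / (m1_msun * (1.0 + q)))      # Kepler's third law [yr]
    k1 = 2.0 * np.pi * a / period * q / (1.0 + q) * sini * AU_PER_YR_IN_KMS
    t = np.asarray(dates_yr)[:, None]
    rv = k1 * np.sin(2.0 * np.pi * t / period + phase)  # shape (epochs, draws)
    return a, rv.max(axis=0) - rv.min(axis=0)

a, drv = primary_rv_shifts(0.06, dates_yr=[0.0, 0.9, 2.1, 5.3])
detected = drv > 6.0    # >3 sigma for a 2 km/s per-epoch uncertainty
\end{verbatim}
Binning the detected fraction in $\log_{10} a$ yields detection probabilities of the kind summarized in Figure 5.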
Identifying higher dispersions across the sample requires robust determination of the individual measurement uncertainties, but we can perform a rough assessment as follows. Using the same simulation parameters, we calculated the distribution of velocity dispersions one would obtain if a given fraction of sources (randomly selected) were binaries with semi-major axes in logarithmically spaced bins. For a sample devoid of binaries, the mean dispersion is somewhat less than the adopted measurement uncertainty, about 1.75~km~s$^{-1}$. Sources with radial orbital motion drive the mean velocity dispersions of the sample higher. Figure~6 displays the thresholds at which the mean simulated sample velocity dispersions are 1.5, 3 and 5 times higher than the dispersions assuming a 2~km~s$^{-1}$ measurement uncertainty. The most conservative threshold is reached at a semi-major axis of 0.03--0.04~AU, and is detectable at even small binary fractions (i.e., 1--2 sources in the sample being binary). This analysis is roughly consistent with the individual detection limits above, and again implies that we can rule out a significant fraction of binaries ($\\gtrsim$10\\%) only for separations $\\lesssim$0.01~AU.\n\n\n\n\\subsection{Notes on Known Visual Binaries}\n\nOf the 25 targets in our sample, 7 are known visual binaries. For these systems we estimated the upper limit for the\nobservable RV shift, $\\Delta$(RV)$_{max}$, for the brighter binary component between two epochs, assuming the\nmost favorable possible observing conditions:\n(1) the epochs correspond to the two phases at which the primary component is moving toward and away from us with\nmaximum RV, (2) the projected separation corresponds to the semi-major axis\nof the system, and (3) the orbit is circular and edge-on. The observed and estimated binary properties are given\nin Table 3. A discussion of each visual binary and the results of our observations follows.\n\n\\subsubsection{2MASS J22344161+4041387 $-$ M6}\n\nUsing laser guide star adaptive optics imaging at the Keck II telescope,\nAllers et al. (2009) identified 2M2234 as a 1 Myr year old, visual binary with a projected physical separation of\n51 AU. Given the observed binary properties, the $\\Delta$(RV)$_{max}$\nis $\\sim$1.9 km~s$^{-1}$ (Table 3).\nWith an orbital period of $824^{+510}_{-310}$ years the\ninclination is effectively indeterminable. This estimate for the period is based on a circular orbit.\nA more realistic value, $1000^{+1600}_{-500}$ years, is calculated \nin Allers et al. (2009). In either case, it is not possible to observe the system at phases separated by half the orbit.\nFurthermore, given the $v \\sin i$ of 17 km~s$^{-1}$ for 2M2234 (Table 2), it is also impossible\nto spectroscopically resolve the RVs of the two components in a single epoch spectrum, even though the\ncomponent near-IR magnitudes are almost equal, because the maximum relative component RV separation is\nsignificantly less than the rotational broadening.\n\nCross-correlation of our 3 epochs of spectra with each other demonstrated no RV shift between\nthe 2007 and 2009 data. Between the 2006 and 2007 data there was an apparent shift of\n$-$7.7 km~s$^{-1}$; however, the signal to noise of the 2006 spectrum ($\\sim$20) is considerably\nlower than that of the other epochs ($\\sim$80), and the spectra are veiled (Allers et al. 
2009),\nthus we do not have confidence in the 2006 result.\nUsing the young M6 brown dwarf [GY92] 5 for cross-correlation with our 2007 spectrum, we\nobtain an RV of $-$10$\\pm$2 km~s$^{-1}$ (Table 2)\\footnote{Cross-correlating the same 2M2234 spectrum with another young M6,\nCFHT Tau 7, Rice et al. (2010) found $-$13.1~km~s$^{-1}$.}, similar to the results of Allers et al. on the basis \nof Keck HIRES data from 2006\\footnote{This result is the weighted mean of two RVs;\nShkolnik et al. (2012) use the same Keck\nmeasurements to determine an unweighted mean of $-$10.9$\\pm$0.7 km~s$^{-1}$.}, $-$10.6$\\pm$0.5 km~s$^{-1}$.\nAllers et al. raise the possibility that 2M2234 could be a higher\norder multiple system, which would account for the overluminous nature of the A component.\nOur multi-epoch observations failed to detect any short-period, i.e. P $<$a few years, hierarchical spectroscopic binary in this system,\nalthough our sensitivity to intermediate separation binaries, and binary orientations unfavorable for detection, limit\nany significant statistical conclusions (\\S 4.1). Given the\ngreater $K_s-L'$ excess in the 2M2234A, it is feasible that the excess luminosity is related to the\ncircumstellar disk structure, orientation, and\/or possible accretion activity. Such a mismatch in disk properties around\nthe components of very low mass binaries is not unprecedented; for example,\nthe TWA 30AB wide, co-moving pair has an apparently edge-on disk around the embedded, earlier-type component, extinguishing this \nlate type M star by 5 magnitudes with respect to the cooler component (Looper et al. 2010).\n\n\n\n\\subsubsection{2MASS J07464256+2000321 $-$ L1}\n\nThe 2M0746 binary is a nearby ($d\\sim12$ pc), tight ($\\sim$3 AU) system.\nWe use the Konopacky et al. (2010) astrometric measurements (Table 3) to determine a $\\Delta$(RV)$_{max}$\nof 2.0 km~s$^{-1}$. Konopacky et al. find an average primary\/secondary flux ratio of 1.5 $\\pm$0.1, challenging the assumption\nthat angularly unresolved spectra are fully dominated by the primary (Blake et al. 2010). \n\nWe observed 2M0746 at two epochs separated by almost exactly 4 years, about 1\/3 of the orbital period.\nCross-correlation of our two order 62 J-band spectra yielded a 1.3 km~s$^{-1}$ shift with a high correlation coefficient, 0.92.\nComparing the epochs of our observations with the RV curve plotted for this system in Figure 14 of Blake et al. (2010), this is almost\nexactly the expected result; however, we are not sufficiently confident in our RV uncertainties to give it much weight.\n\n\n\\subsubsection{Kelu-1 $-$ L2}\n\nA rapid rotator with $v \\sin i$ of $\\sim$70 km~s$^{-1}$ and Li absorption, Kelu-1 was identified as a brown dwarf by Ruiz et al. (1997).\nMart\\'{i}n et al. (1999) hypothesized that Kelu-1's over-luminosity and Li abundance might be explained by a young age or an\nadditional component in the system (e.g., Golimowski et al. 2004). \nLiu \\& Leggett (2005) using Keck AO imaging found that Kelu-1 was a 0$\\farcs$291 binary.\nGelino et al. (2006) estimated spectral types for the components of L2 and L3.5 and a total mass of 0.115 $\\pm$0.014 M$_{\\odot}$.\nIn an unpublished preprint, Stumpf et al. (2008) describe additional observations of the system with VLT AO imaging\nthrough 2008; the separation steadily increased to 0$\\farcs$366 in 2008. The position angle has not changed by more than\n$4^{\\circ}$ or $5^{\\circ}$. Adopting the largest separation observed by Gelino et al. 
as the semi-major axis, 0$\\farcs$298 $\\pm$0$\\farcs$003,\nwe estimate a period of 39 $\\pm$5 years based on a circular orbit (although Stumpf et al. favor a high eccentricity of 0.82 $\\pm$0.10).\nIf viewed edge-on, this implies a $\\Delta$(RV)$_{max}$ of 4.3 $\\pm$0.4 km~s$^{-1}$, marginally detectable with our $\\sim$2 km~s$^{-1}$ precision.\n\nMeasurements of the Kelu-1 system RV in the literature are inconsistent: Basri et al. (2000)\nfound 17 $\\pm$1 km~s$^{-1}$ in June of 1997 and Blake et al. (2010) determined RVs of 6.35 $\\pm$0.39 and 6.41 $\\pm$0.75 km~s$^{-1}$\nin March and April of 2003. On the basis of angularly resolved spectra of the two\nknown components, Stumpf et al. (2008) suggest that Kelu-1 A\nis itself a spectroscopic binary. We used our highest signal to noise (S\/N) ratio spectrum of Kelu-1 to cross-correlate against five other epochs\n(Figure 3), all of S\/N ratio $>$50 per resolution element (the January, 2006, spectrum, with a S\/N of $\\sim$10, was not included in this analysis). \nOur RV measurements, from 2002 through 2011, show RV shifts of $<$3 km~s$^{-1}$.\nWe did not detect any clear evidence in our spectra for additional motion resulting from the A-component moving\nin a relatively short-period spectroscopic orbit; however, this could conceivably be the result of binary properties and\/or viewing geometry (\\S 4.1).\n\n\\subsubsection{2MASS J15074769-1627386 $-$ L5.5}\n\nOver a 6-year baseline,\nBlake et al. (2010) detect a marginally significant ($<$2 $\\sigma$) trend in the RV of 2M1507, a\nnearby (d$=$7.3 pc) L dwarf. They obtain a false alarm probability of 2.2 \\% and suggest the possibility that 2M1507 is a $>$5000 day\nbinary with an angular separation of 0$\\farcs$4. However, deep, high-resolution imaging sensitive to a\ncontrast ratio of 5 magnitudes (Bouy et al. 2003; Reid et al. 2006)\nhas not revealed any companions. No significant RV variations are evident in the 5 high-resolution spectra we obtained between\n2000 and 2008; cross-correlation of the highest S\/N ratio spectrum (UT 2000 April 25) against the other 4 epochs resulted in RV\nshifts of $<$1.7 km~s$^{-1}$ with an uncertainty of $\\sim$2 km~s$^{-1}$. This result, however, does not rule out multiplicity;\nBlake et al. observed $\\sim$0.5 km~s$^{-1}$ of motion over 6.5 years, thus we would not expect much more than that over our\n8 year baseline. Given the lack of definitive evidence for multiplicity, this system is not included in Table 3.\n\n\\subsubsection{DENIS-P J0205.4-1159 $-$ L5.5}\n\nKoerner et al. (1999) initially identified this system as binary. Bouy et al. (2005) describe evidence for a third\nobject in a bound orbit with the secondary component. The estimated properties of the wide binary orbit are uncertain but the period is at least 47 years\nand the $\\Delta$(RV)$_{max}$ is at most 4.4 km~s$^{-1}$ (Table 3). For the presumed close binary, Bouy et al. estimate an orbital period of\n8 years and a semi-major axis of 1.9 AU, implying a $\\Delta$(RV)$_{max}$ of $\\sim$7 km~s$^{-1}$. Our three spectra of DENIS 0205,\ntaken in 2001 and in 2006, are of low S\/N ratio. Cross-correlation between the epochs yields $-$2.7 and $-$2.1 km~s$^{-1}$\nwith a correlation coefficient of only $\\sim$0.4, reflecting the poor quality of the data. 
Sufficiently frequent and deep\nimaging and RV monitoring of this system may provide\nthe requisite phase coverage, preferably with better precision than 2 km~s$^{-1}$, to determine a full orbital solution for\nthe inner binary over the course of one orbital period.\n\n\n\\subsubsection{DENIS-P J1228.2-1547 $-$ L6} \n\nUsing the {\\it Hubble Space Telescope}, Mart\\'{i}n et al. (1999) identified DENIS 1228 as the first angularly resolved brown dwarf - brown dwarf\npair with a separation of 0$\\farcs$275$\\pm$0$\\farcs$002 (Bouy et al. 2003). After several years of monitoring the components' positions,\nBrandner et al. (2004) estimated the orbital properties of the system, listed in Table 3. The $\\Delta$(RV)$_{max}$ for this binary, 4.3 km~s$^{-1}$,\nin combination with the period of $\\sim$44 years from Brandner et al., is not favorable for the detection of an RV shift over the 4 year time scale\nof our NIRSPEC observations. Cross-correlating our 2007 May spectrum with those taken in 2011 February and June yields a\n0 km~s$^{-1}$ RV shift. Continued monitoring of the visual orbit with high angular resolution imaging and high precision RV spectroscopic\ntechniques will help to refine the parameters\nover the next decades, necessary to determine individual component masses in the long term. \n\n\\subsubsection{SDSSp J042348.57-041403.5 $-$ T0}\n\nSDSS 0423 is one of the visual brown dwarf binary systems which spans the L and T classes. Burgasser et al. (2005) measured a\nseparation of 0$\\farcs$16 and estimated a total mass of 0.08$-$0.14 M$_{\\odot}$. Assuming that the separation is equal\nto the semi-major axis of the system, 2.50$\\pm$0.07 AU, the period falls in the range of 10.5 to 13.9 years (Table 3)\nand the $\\Delta$(RV)$_{max}$ is 5.3$-$7.1 km~s$^{-1}$. We observed the system in 2001, 2005, and 2006, covering close to half of the\nestimated orbital period. However, the cross-correlation between the 2001 and 2005 spectra yielded a shift of only $-$0.4 km~s$^{-1}$\nand between 2001 and 2006 of 1.77 km~s$^{-1}$, indistinguishable within the uncertainty of our RV measurements, especially because the\n2006 spectrum was particularly noisy. Thus we find no evidence for significant orbital motion, implying a longer period or an\nunfavorable viewing geometry for the detection of an RV shift, or both.\n\n\n\\subsection{2MASS J05591914-1404488}\n\nThe T4.5 dwarf 2M0559 presents an enigmatic case of an over-luminous, extremely low-mass object. Observers and theorists\nalike have speculated (Burgasser 2001; Dahn et al. 2002; Burrows et al. 2006; Dupuy \\& Liu 2012) that this\nsystem is an equal mass binary. Alternatively, there may be fundamental processes at play\nin the mid-T dwarf atmospheres that are not yet well-understood. Specifically, this source is the lynchpin in the J-band\nbrightening\/cloud disruption scenario (Burgasser et al. 2002a). Zapatero Osorio et al. (2007) estimate limits on possible planetary\nmass companions in this system, but such a secondary component would not explain the unusually high brightness.\n\nWe obtained four observations of 2M0559 over a time baseline of 6.5 years. For an age of 1 Gyr, our Monte Carlo simulation (\\S 4.1)\nindicates a 50\\% detection probability for a threshold semi-major axis of 0.13 AU and a 90\\% detection probability for a threshold\nsemi-major axis of 0.003 AU. The threshold semi-major axis is the separation below which a spectroscopic companion would\nbe detected with a particular probability. Burgasser et al. 
(2003) rule out the presence of a relatively bright companion object closer than \n0$\\farcs$09. At the $\\sim$10 pc distance to 2M0559 (Dahn et al. 2002), 0$\\farcs$09 corresponds to $\\sim$0.9 AU. Thus, ample\nparameter space for a bright binary companion to this object remains unexplored and our confidence in a null result for a \ncompanion object is only high ($\\ge$90\\%) for extremely short periods of days or less. Monitoring this system with extremely\nprecise RV measurements (see next section) with regular cadence over a considerable time baseline will help to fill in the\npotential binary parameter space gap and might also provide insight into the atmospheric properties.\n\n\n\\subsection{The Importance of High Precision RV Measurements}\n\nFor spectroscopic binary systems, Figure 7\nillustrates the relationships between the primary object's mass, the primary orbital velocity, and the orbital period on the basis of Kepler's third law.\nWe show results for three distinct values of the mass ratio (q); a circular, edge-on orbit is assumed\nfor simplicity. For a system with a primary of mass 0.08, the\nsubstellar limit, and a mass ratio of 1.0, the primary object's RV is $\\sim$3.5 km~s$^{-1}$ for a period of 12 years, approximately the shortest period system\namong the visual binaries in our sample (Table 3). With a precision of 2 km~s$^{-1}$, motion of the primary (or the secondary, given a mass ratio of unity)\nin such a system is only detectable for very specific phases and viewing angles. The probability of detection with 2 km~s$^{-1}$\nprecision increases for shorter-period binaries; however, again, this is only true under certain specialized conditions (\\S 4.1).\nNone of the multi-epoch spectra in our sample of 25 brown dwarf systems reveals more than $\\sim$3 km~s$^{-1}$ of RV variability. Even\nfor the seven known brown dwarf binaries observed, some with a cadence that regularly sampled a significant fraction of the estimated orbital period,\nwe were unable to unambiguously detect any RV variability.\n\nSpecialized techniques for the\nhighly precise measurement of small RV shifts, such as those applied to high-resolution K-band spectra\nby Blake et al. (2007, 2010), Prato et al. (2008), Konopacky et al. (2010), Bailey et al. (2012), Burgasser et al. (2012), and others,\nare required to reliably detect motion in brown dwarf binaries, even for those with orbital periods as short as days. In their 6-year study of\nlate-type M and L dwarfs with NIRSPEC on the Keck II telescope, Blake et al. (2010) obtained a precision of 200 m~s$^{-1}$ on slowly rotating\nL dwarfs, providing sensitivity to orbital motion of brown dwarf binaries with periods of decades and mass ratios as low as $\\sim$10\\% (Figure 7),\nthe upper limit for the detection of giant planetary companions.\n\nIn the study described here, even for our sample of 25 systems with zero detections of spectroscopic binaries, it is still not\npossible to use the results to definitively characterize short-period low mass binaries as rare. The sampling and geometry of such systems\nare simply not well-suited to identification with our 2 km~s$^{-1}$ precision and random observing cadence. 
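For the example quoted above, an equal-mass pair with $M_{1} = 0.08$ M$_{\odot}$ and $P = 12$ years on a circular, edge-on orbit, Kepler's third law in solar units gives
\[
a = \left[(M_{1}+M_{2})\,P^{2}\right]^{1/3} \approx (0.16 \times 144)^{1/3} \approx 2.8~{\rm AU},
\qquad
K_{1} = \frac{2\pi a}{P}\,\frac{M_{2}}{M_{1}+M_{2}} \approx 3.5~{\rm km~s^{-1}},
\]
using 1 AU~yr$^{-1} = 4.74$ km~s$^{-1}$, so the largest shift observable between two epochs is $\Delta({\rm RV})_{max} = 2K_{1} \approx 7$ km~s$^{-1}$, reachable only near the most favorable phases and inclinations.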
Thus as far as it is possible to say\nwith the extant data, very low mass spectroscopic binaries are not necessarily intrinsically rare, but even with one of the largest samples\navailable, statistics show (\\S 4.1) that 2 km~s$^{-1}$ uncertainties provide relatively weak constraints.\n\n\\section{Summary}\n\nWe obtained multiple-epoch spectra of a sample of 25 very low-mass field dwarfs, three M dwarfs, sixteen L dwarfs, and six T dwarfs,\nbetween 2000 April and 2011 June to search for\nRV variability and spectral evidence for multiple components. With a precision of $\\sim$2 km~s$^{-1}$, we were sensitive to RV\nvariability at a statistically significant level only in systems with periods of about a day or less, assuming a favorable distribution of\norbital properties and viewing geometries relative to our line of sight.\nIn none of the systems studied, including the seven known, wide binaries observed, did we detect any RV variability \n$>$3 km~s$^{-1}$. For over a dozen objects in our sample we present the\nfirst published high-resolution spectra and provide RVs and rotational velocities for the entire sample, either based on\nthis work or taken from the more precise measurements in Blake et al. (2010). We show multi-epoch spectral sequences for two\nobjects of particular interest, Kelu-1 and 2M0937, an L2 and a peculiar T6, respectively. No significant variations are seen in these\nor the other target spectra, some of which boast an exquisite S\/N ratio in excess of 100. \n\nRV measurements of brown dwarfs are important both for the ultimate measurement of brown dwarf\nmasses (Konopacky et al. 2010) and for the spectroscopic detection of very low-mass, even planetary, companions to presumed single brown dwarfs\n(Blake et al. 2010). The close binary fraction of very low mass systems is highly uncertain (e.g., Bardalez Gagliuffi et al. 2014).\nWe conclude with the observation that to satisfy these scientific goals requires\nhigh S\/N ratio, strategic sampling cadence, and relatively high precision measurements: with the 200 m~s$^{-1}$ precision\nof Blake et al., it is possible to detect several-Jupiter mass companions even in orbits of decades (bottom panel, Figure 7). Long-term\nmonitoring programs of binary brown dwarfs, and in particular candidate spectroscopic binary brown dwarfs (Bardalez Gagliuffi et al.), with high spectral resolution,\ncomponent-resolved spectroscopy (Konopacky et al.), with high spectral resolution unresolved spectroscopy (Burgasser et al. 2012), and with\nhigh-angular resolution imaging (e.g., Radigan et al. 2013), over time scales of days to years are required. Results of these efforts\nwill yield component mass measurements with sufficient precision to stringently test models of\nbrown dwarf structure and evolution, and, in the case of younger systems, formation (e.g., Schaefer et al. 2014). It is crucial\nthat RV monitoring programs take advantage of high-precision techniques for a future high-yield science return.\n\n\n\\bigskip\n\\bigskip\n\nWe thank the Keck Observatory OAs and SAs and B. Schaefer, probably all of whom helped with these runs and observations during the\n11 year period over which the data were gathered, for their exceptional support.\nWe are grateful to Q. Konopacky and M. McGovern for assistance with some of the later observing runs.\nL.P. thanks O. Franz and L. Wasserman for helpful discussions on orbital dynamics. We are grateful to the\nanonymous referee for comments which improved this manuscript.\nPartial support to L.P. 
for this work was provided by NSF grant AST 04-44017.\nThis research has benefited from the M, L, T, and Y dwarf compendium housed at DwarfArchives.org.\nThis work made use of the SIMBAD reference database, the NASA\nAstrophysics Data System, and the data products from the Two Micron All\nSky Survey, which is a joint project of the University of Massachusetts\nand the Infrared Processing and Analysis Center\/California Institute\nof Technology, funded by the National Aeronautics and Space\nAdministration and the National Science Foundation.\nData presented herein were obtained at the W. M. Keck\nObservatory, which is operated as a scientific partnership among the California Institute of Technology,\nthe University of California, and the National Aeronautics and Space Administration. The Observatory\nwas made possible by the generous financial support of the W. M. Keck Foundation.\nThe authors recognize and acknowledge the\nsignificant cultural role that the summit of Mauna Kea\nplays within the indigenous Hawaiian community. We are\ngrateful for the opportunity to conduct observations from this special mountain.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAs exemplified by GW170817, neutron star mergers are empirically known to produce a rich array of multimessenger\nemission \\cite{2017ApJ...848L..12A,LIGO-GW170817-mma}. The presence of matter is most unambiguously indicated by electromagnetic\nemission from nuclear matter ejected during the merger itself, which produces distinctive ``kilonova''\nemission \\cite{1974ApJ...192L.145L,1998ApJ...507L..59L,Metzger_2017,2020GReGr..52..108B} via radioactive heating of this expanding\n material. \nKilonova observations can provide insight into uncertain nuclear physics \\cite{2020arXiv201011182B,2020arXiv201003668Z,2020arXiv200604322V,2019AnPhy.41167992H,2020GReGr..52..109C} and help constrain the expansion rate of\nthe universe \n\\cite{2020arXiv201101211C,2020NatCo..11.4129C,2020PhRvR...2b2006C,2020ApJ...892L..16D},\nparticularly in conjunction with gravitational wave observations \n\\cite{LIGO-GW170817-mma,LIGO-GW170817-H0,LIGO-GW170817-EOS,LIGO-GW170817-EOSrank,2020PhRvL.125n1103B,2019LRR....23....1M,2019NatAs...3..940H,2020Sci...370.1450D}.\n\nIn principle, kilonova observations encode the amount and properties of the ejected material in their complex\nmulti-wavelength light curves (and spectra) \\cite{2019LRR....23....1M,2018MNRAS.480.3871C,2017ApJ...851L..21V}. \nFor example, several studies of GW170817 attempted to infer the amount of material ejected\n\\cite{2021arXiv210101201B,gwastro-mergers-em-CoughlinGPKilonova-2020,2018MNRAS.480.3871C,2019MNRAS.489L..91C,2017ApJ...851L..21V,2017Natur.551...75S,tanvir17,2017ApJ...848L..21A,chornock17,2017ApJ...848L..17C}.\nIn practice, these observations have historically been interpreted with semianalytic models, as they can be evaluated quickly and\ncontinuously over the parameters which characterize potential merger ejecta. 
\nHowever, it is well known that these semianalytic models contain oversimplified physics of already simplified anisotropic\nradiative transfer calculations \\cite{2018MNRAS.478.3298W,2020ApJ...899...24E,kilonova-lanl-WollaegerNewGrid2020} that neglect\ndetailed anisotropy, radiative transfer, opacity, sophisticated nuclear reaction networks, and composition differences.\n\nTo circumvent these biases, some groups have attempted to construct surrogate kilonova light-curve models, calibrated to\ndetailed radiative transfer simulations\n\\cite{gwastro-mergers-em-CoughlinGPKilonova-2020,2018MNRAS.480.3871C,RisticThesis}.\nFor example, Coughlin et al. \\cite{2018MNRAS.480.3871C} used Gaussian process (GP) regression of principal components to construct a\nmultiwavelength surrogate calibrated to a fixed three-dimensional grid of simulations \\cite{2017Natur.551...80K}, describing flux $F_k$ from a single component of ejected material. This study generated a ``two-component''\nejecta model by adding the fluxes of two independent calculations ($F=F_1+F_2$), ignoring any photon reprocessing effects.\nMore recently, Heinzel et al \\cite{gwastro-mergers-em-CoughlinGPKilonova-2020} applied this method to construct an anisotropic\nsurrogate depending on two components $M_{1},M_{2}$ and viewing angle, calibrating to their own anisotropic radiative transfer\ncalculations. They also included reprocessing effects,\nshowing that their previous simplified approach which treats the radiation from each of the two components of the\noutflow independently introduces biases in inference for the components' parameters. \nThese strong reprocessing or morphology-dependent effects are expected in kilonova light curves\n \\cite{2020arXiv200400102K,2017ApJ...850L..37P}. %\nFinally, a recent study by Breschi et al. \\cite{2021arXiv210101201B} favored an anisotropic multicomponent model.\n\nIn this work, extending \\cite{RisticThesis}, we apply an adaptive-learning technique to generate surrogate light\ncurves from simulations of anisotropic kilonovae. Starting with a subset of \\nSimStart{} simulations reported in\n\\cite{kilonova-lanl-WollaegerNewGrid2020}, we use these adaptive learning methods to identify new\nsimulations to perform, refining our model with \\nSimPlaced{} simulations so far. \nWe apply our surrogate light curves to reassess the parameters of GW170817.\nWe distribute the updated simulation archive, our current-best surrogate models, and our training algorithms at \\texttt{https:\/\/github.com\/markoris\/surrogate\\char`_kne}.\n\nThis paper is organized as follows.\nIn Section \\ref{sec:Placement} we describe the kilonova simulation family we explore in this study and the active learning methods we\nemploy to target new simulations to perform. We also briefly comment on our model's physical completeness.\nIn Section \\ref{sec:Interpolation} we describe the specific procedures we employed to interpolate between our simulations\nto construct surrogate light curves.\nIn Section \\ref{sec:PE} we describe how we compare observations to our surrogate light curves to deduce the\n(distribution of) best fitting two-component kilonova model paramers for a given event. We specifically compare our\nmodel to GW170817.\nIn Section \\ref{sec:discussion} we describe how our surrogate models and active learning fit into the broader challenges\nof interpreting kilonova observations. 
\nWe conclude in Section \\ref{sec:conclude}.\n\n\n\\section{Kilonova Simulation Placement}\n\\label{sec:Placement}\n\n\n\\subsection{Kilonova simulations}\n\\label{sec:kne_sims}\n\nThe kilonova simulations described in this work adopt a similar setup as and expand on the work of \\cite{kilonova-lanl-WollaegerNewGrid2020}. \nThe simulations discussed throughout were generated using the SuperNu \\cite{2014ApJS..214...28W} time-dependent radiative transfer code, \nusing tabulated binned opacities generated with the Los Alamos suite of atomic physics codes \\cite{2015JPhB...48n4014F,2020MNRAS.493.4143F}.\nWe use results from the \\textsc{WinNet} code \\cite{2012ApJ...750L..22W} to determine radioactive heating and\ncomposition effects. We employ the thermalization model of \\cite{2016ApJ...829..110B}, but use a grey Monte Carlo\ntransport scheme for\ngamma ray energy deposition \\cite{2018MNRAS.478.3298W}. \n\nThe ejecta model is based on a symmetrically-shaped ideal fluid expanding in vacuum described by the\nEuler equations of ideal hydrodynamics. The assumption of a radiation-dominated polytropic equation of state allows for\nan analytic representation of the ejected mass $M$ and average velocity $\\bar{v}$ as a function of\ninitial central density $\\rho_0$, initial time $t_0$, and the velocity of the expansion front $v_{max}$\n(Equations 11 and 12 in \\cite{2018MNRAS.478.3298W}). When combined with Monte Carlo-based radiative transfer and a specified\nelemental composition for the ejecta, the code produces time- and orientation-dependent spectra. Convolving these spectra with\nstandard observational filters produces light curves such as the ones in Figures \\ref{fig:sample_lc} and\n\\ref{fig:off_sample_interp}. %\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/initial_grid_lc_logt_longer_times}\n\\includegraphics[width=\\columnwidth]{Figures\/placed_to_date_logt_longer_times}\n\\caption{\\label{fig:sample_lc}\\textbf{Bolometric luminosities of initial and adaptively placed simulations}: The top panel\nshows the $\\log_{10}$ bolometric luminosity in cgs units versus time in days for the simulations we initially used to train\nour grid. These similations all extend out to roughly 8 days. The bottom panel shows the bolometric light curves for our\nadaptively placed simulations overlaid on top of the initial grid light curves. Most of these simulations extend past 32 days. Both panels exhibit\nsignificant diversity in behavior and timescale.}\n\\end{figure}\n\n\n\nReal neutron star mergers have (at least) two mechanisms for ejecting material, denoted as dynamical and wind ejecta \\cite{2020ARNPS..7013120R}.\nDue to the difference in formation mechanisms of dynamical and wind ejecta \\cite{Metzger_2019}, a multi-component\napproach is necessary for accurate modeling. Each of the two types of ejecta, dynamical and wind, is modeled by a\nseparate component with a specified morphology, elemental composition, ejecta mass and ejecta\nvelocity. \nThe components are modeled together as one radiative outflow \\cite{2018MNRAS.478.3298W}. The thermal decay energy is treated by mass-weighting\nbetween the components where they overlap. %\nThe end product represents a time-dependent spectral energy distribution contained in 54 angular bins, equally spaced in\n$\\cos\\theta$ from $1$ to $-1$. For the purposes of this study, the spectra are convolved with broadband filters to produce a series of\nbroadband light curves. 
Specifically, we use the LSST $grizy$ filters for optical and near-infrared bands, 2MASS $JHK$ filters\nfor longer wavelength near-infrared, and the mid-infrared $S$ filter for the Spitzer $4.5\\;\\mu$m band.\nFor each band and emission direction, we estimate the AB\nmagnitude for that filter, defined for a source at $10\\unit{pc}$ in terms of the CGS energy flux $F_\\nu$ per unit\nfrequency via\n$\nm_{X,AB} = -2.5 \\log_{10} \\E{F_{\\nu}} - 48.6\n$.\nAll observations used in this work are provided or are translated into this AB-magnitude system \\cite{1998A&A...333..231B,2007AJ....133..734B,2006MNRAS.367..454H}.\nBecause our simulations tend toward reflection-symmetric behavior across the $z=0$ plane, we only consider the independent information contained in the upper half ($z > 0$) of these angular bins. \nTo reduce the acquisition cost of each simulation, we evolved each kilonova simulation in our initial grid out to $\\tEndDays$\ndays. To minimize data-handling and training cost, unless otherwise noted, we manipulate a subset of our simulation\noutput based on a log-uniform grid. For the initial simulations, this log-uniform grid consists of \\nTimePoints{} time points ranging from $\\tStartDays{}$ to $\\tEndDays$ days. \nFor the remaining simulations, this grid is extended in log-time to cover their available duration, up to\na maximum of 64 days.\nBecause of several systematics associated with modeling emission at early times (e.g., in the ionization states of the\nmediuim and in the contribution from and interaction with any strong jet), we do not report on behavior prior to 3 hours\npost-merger.\nIn this work, we use the orientation-averaged luminosity for simulation placement, but reconstruct the luminosity\ncontinuously in angle and time.%\n\n\nThe original simulation hypercubes discussed in \\cite{2018MNRAS.478.3298W,kilonova-lanl-WollaegerNewGrid2020} consider multiple wind ejecta morphologies and\ncompositions. To simplify the dimensionality of the problem, this work only considers simulations from the initial grid\nwith a peanut-shaped morphology \\cite{2020arXiv200400102K} and lower $Y_e = 0.27$ composition describing the wind ejecta. Table\n\\ref{tbl:grid_params} %\nsummarizes the parameters for the \\nSimStart{} simulations in our four-dimensional hypercube and highlights\nvariation in only ejected mass $M$ and average velocity $\\bar{v}$ for each of the two components: the mass and velocity of the dynamical and wind\nejecta, denoted henceforth as $M_{d},v_d, M_{w},v_w$. 
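For reference, a minimal sketch of this four-parameter hypercube built from the tabulated masses and velocities is shown below; the use of $\log_{10}$ mass as the interpolation coordinate is an illustrative choice here rather than a statement of the production setup.
\begin{verbatim}
import itertools
import numpy as np

masses = [0.001, 0.01, 0.1]   # Msun, shared by the dynamical and wind components
velocities = [0.05, 0.3]      # c
grid = np.array([
    (np.log10(md), vd, np.log10(mw), vw)
    for md, vd, mw, vw in itertools.product(masses, velocities, masses, velocities)
])
# grid.shape == (36, 4): one row per (M_d, v_d, M_w, v_w) combination
\end{verbatim}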
\nEvery simulation in our hypercube adopts the same morphologies for the dynamical and wind ejecta, respectively.\nThis initial simulation hypercube thus consists of only 2 of the 3 velocities and 3 of the 5 masses explored in the\ncompanion study \\cite{kilonova-lanl-WollaegerNewGrid2020}.\n\\begin{table}[h!]\n\\begin{center}\n\\begin{tabular}{|c@{\\hskip 2mm}c@{\\hskip2mm}c@{\\hskip 5mm}c@{\\hskip5mm}c|} \n\\hline\nEjecta & Morphology & $Y_e$ & $M\\textsubscript{ej}$ & $\\bar{v}$ \\\\ [0.5ex] \n & & & $M_\\odot$ & $c$ \\\\ [0.5ex] \n\\hline\\hline\nDynamical & Torus & $0.04$ & \\begin{tabular}{@{}c@{}} 0.001, 0.01, 0.1 \\end{tabular} & \\begin{tabular}{@{}c@{}} 0.05, 0.3 \\end{tabular} \\\\\\hline\nWind & Peanut & $0.27$ & \\begin{tabular}{@{}c@{}} 0.001, 0.01, 0.1 \\end{tabular} & \\begin{tabular}{@{}c@{}} 0.05, 0.3 \\end{tabular} \\\\ %\n\\hline\n\\end{tabular}\n\\end{center}\n\\caption{\\textbf{Kilonova simulation parameters}: Within the framework of models explored in\n \\cite{kilonova-lanl-WollaegerNewGrid2020}, parameters of the initial kilonova simulations used to initialize our\n adaptive learning prcess in this work. All simulations used in this work adopt a two-component model where the\n morphology and composition of each component is fixed. }\n\\label{tbl:grid_params}\n\\end{table}\n\n\nAs expected and discussed elsewhere \\cite{kilonova-lanl-WollaegerNewGrid2020}, these simulations exhibit significant viewing-angle\ndependence on the relative speed of the components. %\nThe obscuration of the wind by the dynamical ejecta becomes less significant closer to the symmetry axis and the \npeanut morphology itself also produces orientation dependence.\nThe two-component model shows ``blanketing'' of slow\nblue components by fast red components \\cite{2015MNRAS.450.1777K}.\nAlso expected and observed are qualitative trends versus the component masses and velocities: more wind ejecta mass\nincreases the $g$ band luminosity along the symmetry axis.\n\n\n\n\\subsubsection*{Illustrating systematics of kilonova simulations}\nBefore extensively discussing our ability to reproduce this specific family of simulations, we first comment on their\nsystematic limitations. Our simulation archive explores only a limited range of initial conditions for the ejecta, with specific assumptions\nabout the composition, morphology, and velocity profiles; with specific assumptions about nucleosynthetic heating; and\nwith specific assumptions about (the absence of) additional power and components, such as a jet or a central source to\nprovide additional power or light \\cite{2021MNRAS.500.1772N,2021MNRAS.502..865K}.\n %\nSeveral previous studies have indicated that these and other aspects of kilonova simulations can noticably impact the\noutcome\n\\cite{2018MNRAS.478.3298W,2020ApJ...899...24E,2019ApJ...880...22W,2020arXiv201214711K,2021ApJ...906...94Z,2021ApJ...910..116K,2017ApJ...850L..37P}. Where possible, we very briefly comment on how current and previous SuperNu\nsimulations' results change when making similar changes in assumptions. \n\nPrior work with SuperNu has explored the impact of composition \\cite{2020ApJ...899...24E}. \nHowever, recently, Kawaguchi et al 2020 \\cite{2020arXiv201214711K} (henceforth K20) demonstrated that Zr makes a substantial contribution to the final light\ncurve. 
Figure \\ref{fig:Zr} shows how our simulations depend on a similar change in composition, noting substantial\nchange in the late-time optical light curves when we remove Zr.\n\nAs demonstrated by many previous studies using SuperNu, the morphology and velocity structure also has a notable impact on the post-day light curve behavior\n\\cite{2018MNRAS.478.3298W,2021ApJ...910..116K,2017ApJ...850L..37P}. \nSeveral other groups have demonstrated similar strong morphology and orientation dependence in their work \\cite{2020ApJ...897..150D,2020arXiv201214711K,2021arXiv210101201B,gwastro-mergers-em-CoughlinGPKilonova-2020}. \nFor example, in their Figure 8, K20 demonstrate how the light curve changes when a specific polar component of the\nejecta is removed.\n\nUncertain nuclear physics inputs also propagate into notable uncertainties about the expected light curve; see, e.g., \\cite{2021ApJ...906...94Z,2020arXiv201011182B}.\nEven for the same morphology and amount of ejecta, nuclear physics uncertainties can modify the effective heating rate,\nparticularly for material with low $Y_e$ which has the greatest prospect for producing r-process elements. \n\n\nGiven limited exploration of possible kilonova initial conditions and physics, we can only at present quantify the\nuncertainties of the type listed above. In future work, we will employ our parameterized models to assess the impact of\nthese uncertainties on inferences about kilonova parameters. Future work could require kilonova models which include\nEOS parameters to enable joint inference which also\nsimultaneously constrains the equation of state.\n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{Figures\/rHmags_noZr_axis}\n\\includegraphics[width=\\columnwidth]{Figures\/gKmags_noZr_axis}\n\\caption{\\label{fig:Zr}\\textbf{Impact of removing Zirconium}: Solid and dashed lines show simulations with otherwise identical\n assumptions about composition, morphology, and velocity structure, differing only by the presence (solid) and\n elimination (dashed) of Zr. The selected simulation parameters, $M_{d}=0.01 M_{\\odot}$, $v_d=0.3 c$, $M_{w}=0.01 M_{\\odot}$,\nand $v_w=0.15 c$, are our closest-matching representation of the simulation parameters considered\nduring the Zr-omitting study in \\cite{2020arXiv201214711K}.\n}\n\\end{figure}\n\n\n\n\n\n\nAs discussed elsewhere \\cite{kilonova-lanl-WollaegerNewGrid2020}, at late times some light curves show a modest deficit of blue light ($g$-band) \nrelative to observations of GW170817 (unless the dynamical ejecta mass is large). Notably, our $g$-band light curves fall off significantly more rapidly after\ntheir peak in all viewing directions and for most parameters considered here. \nPrevious work with other morpologies also recovers similar falloff in these bands,\nsee e.g. \\cite{tanvir17}, though additional components could conceivably contribute. \n Similar $g$-band behavior has been seen in other\ndetailed kilonova simulations; see, e.g, Figure 12 in \\cite{2020ApJ...889..171K}.\nAs noted above, this behavior depends on the assumed composition, notably Zr. \n\n\n\n\n\n\n\\subsection{Interpolation Methodology}\n\\label{sec:interp_method}\n\n\nIn this work, we principally interpolate using Gaussian process (GP) regression. 
In GP regression, given
training data pairs $(x_a,y_a)$, the estimated function $\hat{y}(x)$ and its variance $s(x)^2$ are approximated by
\begin{subequations}
\label{eq:gp}
\begin{align}
\hat{y}(x) &= \sum_{a,a'}k(x,x_a) K^{-1}_{aa'}y_{a'} \\
s(x)^2 &= k(x,x) - k(x,x_a)K^{-1}_{aa'} k(x_{a'},x)
\end{align}
\end{subequations}
where summation over the repeated indices $a,a'$ is implied, the matrix $K_{aa'} = k(x_a,x_{a'})$, and the function $k(x,x')$ is called the kernel of the Gaussian
process. In this work, unless otherwise noted, we used a squared-exponential kernel plus a white-noise (diagonal) kernel,
\begin{eqnarray}
k(x,x') = \sigma_0^2 e^{-(x-x')Q(x-x')/2} + \sigma_n^2 \delta_{x,x'}
\end{eqnarray}
where $Q$ is a diagonal matrix whose entries encode the (inverse-square) correlation length scales in each parameter direction, and $\sigma_0,\sigma_n$ are hyperparameters that characterize
the overall signal variance and the amount of noise allowed in the problem.
The other interpolation method considered in this work was random forest (RF) regression \cite{breiman2001}. Unlike the GP, the
RF output has no error quantification and was used primarily as a consistency check on the Gaussian process
prediction.
Unless otherwise noted, we performed all GP and RF regression with \textsc{scikit-learn} \cite{scikit-learn}.


Because of the substantial dynamic range of our many outputs, we interpolate the $\log_{10}$ luminosity (for
placement) or AB magnitudes (for all other results).
Unless otherwise noted, we quantify the performance of our interpolation with the RMS difference between our prediction $y_j$
and the true value,
\begin{equation}
\ell^2 = \frac{1}{n} \sum_{j=1}^{n} (y_j - \log_{10}(L_\text{bol})_j)^2.
 \label{eq:simple_loss}
\end{equation}
[This expression overweights the importance of large errors when the source is not detectable at late times; see Appendix
\ref{ap:validate_pe}].

We employ GP interpolation in two standard use cases. In the first case, used for our exported production results, we interpolate the AB magnitude $m_\alpha(t_*|\Lambda)$ at some fixed reference time $t_*$ and band $\alpha$ versus our four simulation hyperparameters
 (and, in the end, also across the extrinsic parameters of angle and wavelength) contained in $\Lambda$. In this case, the prediction $y(x_a)$ is a single scalar value at each point; the
$x_a$ refer to model hyperparameters; and the interpolation provides us with a scalar function of four or more
variables. GP regression [Eq. (\ref{eq:gp})] provides an error estimate for $m_\alpha$ at this specific time
$t_*$, which in general will depend on time.


In the second case, used for simulation placement, we interpolate the log bolometric luminosity
\emph{light curve} $\log_{10} L_{bol}(t|\Lambda)$ versus \emph{all time}.
[In terms of each simulation's spectrum, the bolometric luminosity is
$
L_{bol} = 4 \pi R^2 \int_0^{\infty} F_{\nu}d\nu
$
where $R = 10\unit{pc}$.]
In this case, the prediction $\vec{y}(x_a)$ is vector-valued at each point; the $x_a$ refer to model
hyperparameters; and the interpolation provides us with a vector-valued function of four or more variables.
For simplicity and given our use case, we reduce our error estimate to a single overall value for the entire light curve, reflecting the overall
uncertainty in $\vec{y}(x_a)$.
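For concreteness, the following Python sketch shows how the regression setup described above can be assembled with \textsc{scikit-learn}; the training arrays, length scales, and forest size are illustrative placeholders rather than our production settings.
\begin{verbatim}
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (ConstantKernel, RBF,
                                              WhiteKernel)
from sklearn.ensemble import RandomForestRegressor

# X: hyperparameters (M_d, v_d, M_w, v_w); y: e.g. one AB magnitude
# at a fixed reference time and band.  Random placeholders stand in
# for the actual simulation outputs.
rng = np.random.default_rng(0)
X = rng.uniform([1e-3, 0.05, 1e-3, 0.05],
                [0.1, 0.3, 0.1, 0.3], size=(36, 4))
y = rng.normal(size=36)

# sigma_0^2 exp(-(x-x') Q (x-x')/2) + sigma_n^2 delta_{x,x'},
# with one length scale per parameter direction (Q diagonal).
kernel = (ConstantKernel(1.0)
          * RBF(length_scale=[0.05, 0.1, 0.05, 0.1])
          + WhiteKernel(noise_level=1e-3))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

x_new = np.array([[0.01, 0.2, 0.01, 0.15]])
m_gp, m_err = gp.predict(x_new, return_std=True)  # mean and 1-sigma error
m_rf = rf.predict(x_new)                          # cross-check, no error
\end{verbatim}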
\subsection{Active Learning Scheme}
\label{sec:active_learning}

Gaussian processes have long been used for active learning because they provide an error estimate: follow-up simulations
can be targeted in regions with the largest expected error (and thus the largest expected improvement) \cite{book-Murphy-MachineLearning}.
We follow this approach in our active learning scheme; see \cite{ZacksSolomon1970,krause07nonmyopic,Cohn1996,Gal2017,MacKay92bayesianmethods,Srinivas10,Mockus78,Wu16} for a broader discussion of active
learning methods and their tradeoffs.
To reduce the data volume needed for targeting follow-up simulations, we used vector-valued interpolation as described
above, applied to the \emph{orientation-averaged outputs} of our simulations. This
approach has the substantial advantage of providing a single error estimate per light curve (both in training and off-sample), which we can immediately
use as the objective function for simulation placement.



We pursued an active learning approach to simulation placement in order to maximally explore the parameter space and reduce
the amount of redundant information obtained from each new simulation. The subset of \nSimStart{} light curves discussed in Section
\ref{sec:kne_sims} was used as the initial training set.
Thousands of parameter combinations were subsequently drawn from uniform distributions with maxima and minima matching
those of the varied parameters in Table \ref{tbl:grid_params}. Each of these parameter combinations was evaluated by an
interpolator to produce an initial light-curve prediction, as well as an error estimate for the entire light-curve output. The
parameter combination whose prediction had the largest error across all tested combinations was selected as the next simulation to be placed.


\begin{figure}
\includegraphics[width=\columnwidth]{Figures/active_learning_before_logt}
\includegraphics[width=\columnwidth]{Figures/active_learning_after_logt}
\caption{\label{fig:GP_before_after} \textbf{Impact of adaptive placement on interpolation:}
 Example of interpolation output at a point with large predicted fitting error, both before and after placing the
 simulation. In both panels, the solid black curve shows the true simulated bolometric light curve versus time.
The red band shows the GP-predicted one-sigma error bar about the expected value.
 \emph{Top panel}: Predictions from our RF and GP interpolations versus time. The large error and low practical
 utility of the GP fit are apparent. \emph{Bottom panel}: After including this simulation in the training set, the revised RF and
 GP predictions conform much more closely to this specific simulation, as expected.
}
\end{figure}



\subsection{Prediction Improvement and Interpolation Results}


We verified our active learning strategy for simulation placement by randomly sampling a combination of
parameters and creating two light-curve predictions at those parameters. The first prediction was trained solely on our
initial grid of simulations from Section \ref{sec:kne_sims}, while the second was trained on the same initial
grid, but with an added simulation output characterized by the aforementioned random combination of parameters. Figure
\ref{fig:GP_before_after} compares these before- and after-inclusion predictions; as expected, the GP
interpolation improves.
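In practice, the placement criterion of Section~\ref{sec:active_learning} amounts to a simple loop of the kind sketched below; the candidate count, parameter bounds, and the \texttt{gp} surrogate (assumed to return one overall error per candidate) are illustrative placeholders for our production vector-valued interpolator.
\begin{verbatim}
import numpy as np

def propose_next_simulation(gp, n_candidates=5000, rng=None):
    """Draw candidate (M_d, v_d, M_w, v_w) combinations uniformly
    within the bounds of the varied grid parameters (masses
    0.001-0.1 Msun, velocities 0.05-0.3 c) and return the candidate
    with the largest interpolation error estimate.  `gp` is a
    placeholder surrogate for the orientation-averaged
    log10 L_bol light curve."""
    rng = rng or np.random.default_rng()
    lo = np.array([1e-3, 0.05, 1e-3, 0.05])
    hi = np.array([0.1, 0.3, 0.1, 0.3])
    candidates = rng.uniform(lo, hi, size=(n_candidates, 4))
    _, sigma = gp.predict(candidates, return_std=True)
    # Collapse any per-time errors to one value per candidate.
    sigma = sigma.reshape(len(candidates), -1).mean(axis=1)
    return candidates[np.argmax(sigma)]
\end{verbatim}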
The two panels of Figure \ref{fig:GP_before_after} anecdotally illustrate the degree to which new training data improve our surrogate light-curve models.

With over 400 simulations placed since the start of the active learning process, the training library is now large
enough to allow physically meaningful interpolation of off-sample events. The performance of our adaptive learning
is best demonstrated with our production-quality interpolation scheme, illustrated in Figure \ref{fig:off_sample_interp}
and described in the next section.




Even after producing many follow-up simulations, our coverage of parameter space remains very sparse; nonetheless, our
interpolation succeeds. To illustrate the sparsity of our parameter-space coverage, and how slowly
our added simulations increase that coverage, we evaluated the median ``inter-simulation'' distance,
using a simple Euclidean ($L^2$) norm over $\log_{10} L_{bol}(t_k)$ for several reference times $t_k$.
As expected given the large effective dimension of the output light curves, this median distance changes very
slowly with the number of placed simulations $n$. The median distance is also
larger than the residual error in our fit, as reported below. The success of our interpolation relies not on an
overwhelmingly large training sample, but on the smoothness and predictability of our physics-based light curves.



\section{Light curve interpolation}
\label{sec:Interpolation}

\subsection{Stitched fixed-time interpolation}
To efficiently interpolate across the whole model space, we follow a strategy illustrated in Figure 1 of
\cite{2014PhRvX...4c1006F}: we pick several fiducial reference times $t_{q}$ (and angles); use GP interpolation to produce an
estimate of $m_\alpha(t_q|\Lambda)$ versus $\Lambda$; interpolate in time to construct a continuous
light curve at the model hyperparameters $\Lambda$ at each reference angle; and then interpolate in angle to construct a
light curve for an arbitrary orientation. For an error estimate, we stitch together the error estimates in
each band to produce a continuous function of time.
Figure \ref{fig:off_sample_interp} shows the output of our interpolation (smooth lines), compared to a validation
simulation at the same parameters (dashed lines). Our predictions generally agree, though less so for the shortest wavelengths at the
latest times.
Subsequent figures also illustrate the typical GP error estimate, which is usually $O(0.1)$ in $\log_{10} L$ for most bands
and times considered.


\begin{figure}
\includegraphics[width=\columnwidth]{Figures/initial_grid_off_sample_interp_ABmag}
\includegraphics[width=\columnwidth]{Figures/off_sample_interp_logt_longer_times_ABmag}
\caption{\textbf{Off-sample interpolation with original and refined grid: } Example of an interpolated stitched fixed-time prediction compared to a simulation
output created from the same corresponding input parameters. The top panel shows our estimate based on the initial
$\nSimStart$ simulations; the bottom panel shows the result after adaptive learning. Different colors denote different filter bands, described
in the legend. The dashed lines show the full simulation output for each band. The colored points show our interpolated
AB magnitude predictions at the $\nTimePoints$ evaluation times. The solid lines show our final
interpolated light curves, interpolating between the points shown. The largest error in this example occurs for the
$g$ band at late times.
The simulated parameters and viewing angle for this configuration are $M_{d}=0.097050 M_{\odot}$, $M_{w}=0.083748 M_{\odot}$,
$v_d=0.197642 c$ and $v_w=0.297978 c$, viewed on axis ($\theta=0$). The exaggerated modulations in the top panel's solid
lines and dotted curves illustrate interpolation failures, arising from an initially insufficient training set.}
\label{fig:off_sample_interp}
\end{figure}



\subsection{Trends identified with interpolated light curves}
In Figure \ref{fig:CharacterizeTrends:OneParameter} we show the results of our fit evaluated at a fixed viewing angle ($\theta=0$), varying one parameter at a time
continuously, relative to a fiducial configuration with $M_{d}=M_{w}=0.01 M_\odot$, $v_w/c=v_d/c=0.05$.
The fixed value of the ejected mass, $M=0.01 M_\odot$, was chosen as the midpoint of the initial grid's logarithmically spaced mass range, so as
not to bias the comparison toward lighter or heavier masses. Since no similar central value was initially available for the velocity
parameters, the lower value was selected for both components. The slower velocity keeps the ejecta from dispersing
as quickly and allows for more variation in the light curves as the remaining, non-fixed parameter is varied.
For this viewing angle, changes in the amount and velocity of the dynamical ejecta have a relatively modest effect,
in large part because that ejecta is concentrated in the equatorial plane. By contrast, changes in the mostly polar
wind ejecta have a much more substantial impact on the polar light curve ($\theta=0$).
Specifically, increasing the amount of wind ejecta brightens and broadens the light curve, as expected from classic
analytic arguments pertaining to how much material the light must diffuse through \cite{1980ApJ...237..541A,1982ApJ...253..785A,Chatzopoulos_2012,Metzger_2019}.
Similarly, increasing the velocity of the wind ejecta causes the peak to occur at earlier times (diffusion is easier) and to be
brighter.


\begin{figure}
\includegraphics[width=0.925\columnwidth]{Figures/filling_md_logt_longer_times_ABmag}
\includegraphics[width=0.925\columnwidth]{Figures/filling_mw_logt_longer_times_ABmag}
\includegraphics[width=0.925\columnwidth]{Figures/filling_vd_logt_longer_times_ABmag}
\includegraphics[width=0.925\columnwidth]{Figures/filling_vw_logt_longer_times_ABmag}
\caption{\label{fig:CharacterizeTrends:OneParameter}\textbf{Interpolated and simulated $g$-band light curves}:
 In this figure, we generate $\log L_g(t|\Lambda)$ for a
 one-parameter family of simulations $\Lambda$ where either one of the $M$ parameters varies from $0.001 M_\odot$ to $0.1
 M_\odot$ or one of the $v$ parameters varies from $0.05c$ to $0.3c$, and the viewing angle is $\theta=0$.
 The remaining model parameters are fixed to $(M/M_\odot, v/c) = (0.01,0.05)$. The curves in $M$
 are uniformly spaced in $\log M$, while those in $v$ are uniformly spaced in $v$. For
 comparison, the heavy dashed lines show the initial training simulation results for the two parameter endpoints.
The $g$-band light curve has the largest dynamic range and is the most sensitive to interpolation errors; notably, the
interpolation does not always conform tightly to the underlying simulation data at late times.
}
\end{figure}

\subsection{Interpolation in viewing angle}

All of the interpolated light curves discussed thus far have been trained at some fixed viewing angle.
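Concretely, reconstructing a light curve at an arbitrary orientation only requires folding the requested viewing angle into $[0,\pi/2]$ (using the reflection symmetry discussed in the next paragraph) and interpolating across the trained reference angles; the sketch below uses simple linear interpolation and made-up per-angle magnitudes as placeholders for our production angular interpolation.
\begin{verbatim}
import numpy as np

def magnitude_at_angle(theta, theta_grid, mag_grid):
    """Interpolate a band magnitude in viewing angle at fixed time.
    theta_grid: trained reference angles in [0, pi/2];
    mag_grid: surrogate predictions at those angles.  Angles above
    pi/2 are folded using reflection symmetry about the equatorial
    plane.  Linear interpolation is a simplified stand-in for the
    production angular interpolation."""
    theta_folded = np.minimum(theta, np.pi - theta)
    return np.interp(theta_folded, theta_grid, mag_grid)

# Illustrative use with 11 reference angles and made-up magnitudes.
theta_grid = np.linspace(0.0, np.pi / 2, 11)
mag_grid = 22.0 + 0.5 * np.sin(theta_grid)   # placeholder values
print(magnitude_at_angle(2.5, theta_grid, mag_grid))
\end{verbatim}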
\nIn Figure \\ref{fig:AngleInterpolation}, we explore the interpolation of several families of models, each of which was \ntrained using simulation data at a different viewing angle. The symmetry of the ejecta across the orbital plane allows for\nthe assumption that any angular variation between $0$ and $\\pi\/2$ can simply be mirrored across the symmetry axis. \n\nFigure \\ref{fig:AngleInterpolation} indicates that the first day post-merger does not introduce much angular variation and, as such,\nis quite well predicted even when interpolating across only 11\nangles. After 1 day, the luminosity across different angles begins to change considerably as the peanut-shaped wind\nejecta becomes more dominant.\nAt late times, there is a strong periodic variability which manifests near the orbital plane, most strongly apparent in\nthe blue (g) and near infrared (K) bands. In the blue bands, the angular variation reflects lanthanide curtaining; in\nthe red bands, the angular variation reflects red emission from the late-peaking red dynamical ejecta. \n[At the latest times and faintest luminosities along the equatorial plane, numerical uncertainty in our Monte Carlo simulations are apparent in the\nlight-curve results.]\nIn all panels, the solid band denotes an estimated error bar from our GP fit in time, extended in angle.\n\n\\begin{figure}[ht!]\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_g_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_g_zoom_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_y_ABmag}\n\\includegraphics[width=0.925\\columnwidth]{Figures\/angle_interp_off_sample_K_ABmag}\n\\caption{\\label{fig:AngleInterpolation} \\textbf{Interpolation of g-, y-, and K-band luminosity at different viewing angles}: This figure\ncompares the $g$-, $y$-, and $K$-band luminosity at select times as a function of viewing angle. The solid points represent fixed angles at which the\ndifferent families of models were trained. The solid lines connecting the points indicate the interpolated prediction of the\nangular variation at some given time in the light curve. The dashed lines represent the simulation data and show the true angular \nvariation.\nThe shaded regions denote the $1\\sigma$ error estimate derived from our Gaussian process fit versus time, extended in angle.\n}\n\\end{figure}\n\n\\subsection{Predictive Accuracy versus time, angle grid sizes}\n\nTo better understand the systematic limitations and computational inefficiencies introduced by our stitched-time\ninterpolation grid, we investigated the accuracy of our fits when only using a subset of the time or angular grid.\n\nFirst, we consider a simple analysis of loss of predictive accuracy as the number of GP interpolators used to make a surrogate light curve is decreased. We denote $t \\in T$ as the subset of \ntimes represented by the GP interpolators used to make a prediction, $T$ as the total available number of time points, and thus interpolators, which can be used to make a light curve, and $\\bar{t}$ \nas all the other times in T which are not represented by $t$ such that $t \\cap \\bar{t} = 0$ and $t \\cup \\bar{t} = T$.\n\nThus, when using any number of interpolators at times $t \\in T$ which is less than the total number of possible time points $T$, we first generate predictions $y(t)$ with the chosen subset of \ninterpolators. 
These predictions $y(t)$ are then used as inputs to \texttt{SciPy}'s \texttt{UnivariateSpline} method, which is evaluated at the remaining times $\bar{t}$
to construct the rest of the light curve, $z(\bar{t}) = f(\bar{t}; t, y(t))$.

Figure \ref{fig:residuals_vs_nintps} shows how the average residual between on-sample light-curve predictions and the respective simulation data changes
as a function of the number of time points used as the basis for constructing the time-interpolated light curve. For the
current scheme, we can remove up to roughly 75\% of the initial set of time points without substantially diminishing our
overall accuracy. Future work will explore smarter selection of representative time points, in an effort to further reduce
the number of interpolators needed without significant loss of accuracy.


\begin{figure}
\includegraphics[width=0.925\columnwidth]{Figures/angle_comparison_kawaguchi20_fig7}
\caption{\label{fig:AngularDependence:2} \textbf{Light curve versus time for selected angles and bands}: Comparison to Figure 7 of \cite{2020ApJ...889..171K},
indicating the angular dependence of light-curve predictions across the $g$-, $z$-, and $K$-bands.}
\end{figure}

\begin{figure}
\includegraphics[width=0.925\columnwidth]{Figures/offset_vs_interps_used}
\caption{\label{fig:residuals_vs_nintps} \textbf{Average residual as a function of number of considered time points}:
Average residuals between on-sample time-interpolated light curves and the respective simulation data, as a function of the number of time points used to generate the light curves.
In each case, we drew the respective number of time points from a log-uniform distribution between the start and end times of our light curves.}
\end{figure}


\section{Parameter inference of radioactively-powered kilonovae}
\label{sec:PE}

In this section, we describe and demonstrate the algorithm we use to infer kilonova parameters given observations, using
the interpolated light-curve model above.
Unless otherwise noted, for simplicity all
calculations in this section assume the kilonova event
time and distance are known parameters. We likewise assume that observational errors are understood and well
characterized by independent Gaussian magnitude errors in each observation, and that our model families include the
underlying properties of the source (i.e., we neglect systematic modeling errors due to the parameters held constant in
our simulation grid: morphology, initial composition, et cetera).




\subsection{Framework and validation}
As in many previous applications of Bayesian inference to infer the parameters of kilonovae
\cite{gwastro-mergers-em-CoughlinGPKilonova-2020,2018MNRAS.480.3871C,2019MNRAS.489L..91C,2017ApJ...851L..21V,2017Natur.551...75S},
we seek to compare the observed magnitudes $x_i$ at evaluation points $i$ (denoting a combination of band and time) to a
continuous model that makes predictions $m(i|{\bm \theta})$ [henceforth denoted by $m_i({\bm \theta})$ for brevity], which depend on some model parameters ${\bm \theta}$.
Bayes' theorem expresses the posterior probability $p({\bm\theta})$ in terms of a prior probability $p_{\rm
 prior}({\bm\theta})$ for the model parameters $\bm\theta$ and a likelihood ${\cal L}({\bm\theta})$ of all observations,
given the model parameters, as
\begin{equation}
p({\bm \theta}) = \frac{{\cal L}({\bm \theta}) p_{\rm prior}({\bm \theta})}{
 \int d {\bm \theta} {\cal L}({\bm \theta}) p_{\rm prior}({\bm \theta})
}.
\end{equation}
Unless otherwise noted, for simplicity we assume the source sky location, distance, and merger time are known.
We adopt a uniform prior on the ejecta velocities $v/c\in[0.05,0.3]$ and a log-uniform prior on the ejecta masses
$m/M_\odot \in [10^{-3},0.1]$.
We assume the observations have Gaussian-distributed \emph{magnitude} errors with presumed known observational
(statistical) uncertainties $\sigma_i$, convolved with
some additional unknown systematic uncertainty $\sigma$, so that
\begin{equation}
 \ln \mathcal{L}(\bm{\theta}) = -0.5 \sum_{i=1}^n \left [ \frac {(x_i - m_i(\bm{\theta}))^2} {\sigma_i^2 + \sigma^2} + \ln(2 \pi (\sigma_i^2 + \sigma^2)) \right ]
\end{equation}
where the sum is taken over every data point in every band used in the analysis. In tests, we treat $\sigma$ as an uncertain model
parameter, de facto allowing for additional systematic observational uncertainty (or for some systematic theoretical
uncertainty). For our GP surrogate models, we set $\sigma$ to the estimated GP model error.


Unlike prior work, we eschew Markov-chain Monte Carlo, instead constructing the posterior distribution by direct Monte
Carlo integration as in \cite{2015PhRvD..92b3002P,gwastro-PENR-RIFT}. To efficiently capture correlations, we employ a
custom adaptive Monte Carlo integrator; see \citet{gwastro-RIFT-Update} for implementation details.
In Appendix \ref{ap:validate_pe}, we describe several tests we performed to validate this inference technique using
synthetic kilonova data drawn from a previously published semianalytic kilonova model. Our tests include recovering
the parameters of one hundred synthetic kilonova sources.
In future work, we will demonstrate how our parameter inference method can be incorporated efficiently and
simultaneously with gravitational-wave (GW)
parameter inference using the rapid iterative fitting (RIFT) parameter estimation pipeline \cite{gwastro-PENR-RIFT}.

\subsection{Inference with surrogate kilonova model}


\begin{figure}
\includegraphics[width=\columnwidth]{Figures/lc_test_kn_interp_angle_20210421}
\includegraphics[width=\columnwidth]{Figures/corner_test_kn_interp_angle_combined}
\caption{\label{fig:pedemo:interp}\textbf{Synthetic source recovery with surrogate model: }
Recovery of the parameters of a known two-component surrogate kilonova model, using inference based on our interpolated model.
Solid black curves show results adopting a strong angular prior motivated by radio observations of GW170817.
\emph{Top panel}: Synthetic light curve data in several bands.
\emph{Bottom panel}: Inferred distribution of the four model parameters and the viewing angle. The blue cross denotes the
injected values.
Red contours show results without adopting a prior on the viewing angle; black contours show results
inferred when adopting a prior on the viewing angle consistent with observations of GW170817.
}
\end{figure}




\begin{figure}
\includegraphics[width=\columnwidth]{Figures/lc_simulation_injection_kn_interp_angle_20210413}
\includegraphics[width=\columnwidth]{Figures/corner_simulation_injection_kn_interp_angle_20210413}
\caption{\label{fig:pedemo:interp_on_sim}\textbf{Simulation parameter recovery with surrogate model: }
Recovery of the parameters of a known two-component kilonova \emph{simulation}, using inference based on our interpolated
model. The parameters corresponding to the relevant simulation are $M_{d}=0.052780 M_{\odot}$, $v_d=0.164316 c$, $M_{w}=0.026494 M_{\odot}$,
and $v_w=0.174017 c$.
\emph{Top panel}: Synthetic light curve data in several bands.
\emph{Bottom panel}: Inferred distribution of the four model parameters.
}
\end{figure}



\begin{figure}
\includegraphics[width=\columnwidth]{Figures/lc_simulation_injection_kilonova_20210118}
\includegraphics[width=\columnwidth]{Figures/corner_simulation_injection_kilonova_20210118}
\caption{\label{fig:pedemo:model_on_sim}\textbf{Simulation parameter recovery with analytic model: }
Recovery of the parameters of a known two-component kilonova \emph{simulation}, using inference based on the simplified
analytic model described in the appendix. The analytic model cannot fit our simulation data well. While only a one-component
fit is shown, similar results arise when employing multiple components.
The parameters corresponding to the relevant simulation are $M_{d}=0.01 M_{\odot}$, $v_d=0.3 c$, $M_{w}=0.01 M_{\odot}$,
and $v_w=0.3 c$.
\emph{Top panel}: Synthetic light curve data in several bands, including error bars on both the synthetic data and
the posterior light curve predictions.
\emph{Bottom panel}: Inferred distribution of the four model parameters. The blue cross denotes the injected
values.
}
\end{figure}

Figure \ref{fig:pedemo:interp} demonstrates parameter inference using our surrogate light curves, for a synthetic source
generated using our own model. As expected, we can recover a known source, including constraining the viewing angle $\theta$.
Figure \ref{fig:pedemo:interp_on_sim} performs a similar test, but now using a specific simulation, without
 interpolation. As expected given our adopted systematic error, we recover the simulation parameters.
Finally, Figure \ref{fig:pedemo:model_on_sim} repeats the test above, using the semianalytic model described in the
appendix. This comparison emphatically demonstrates large systematic differences between this semianalytic model and
our detailed simulations.
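To make the inference machinery above concrete, the sketch below implements the Gaussian log-likelihood with an added systematic term and a very simple prior-weighted Monte Carlo; the function \texttt{surrogate\_mags} and the plain importance sampler are placeholders standing in for our interpolated model and for the adaptive integrator of \cite{gwastro-RIFT-Update}.
\begin{verbatim}
import numpy as np

def log_likelihood(theta, times, bands, x_obs, sigma_obs,
                   surrogate_mags, sigma_sys=0.1):
    """Gaussian log-likelihood over all observed (time, band)
    points, with statistical errors sigma_obs and a systematic
    sigma_sys added in quadrature.  `surrogate_mags` is a
    placeholder for the interpolated model m_i(theta)."""
    m = surrogate_mags(theta, times, bands)
    var = sigma_obs ** 2 + sigma_sys ** 2
    return -0.5 * np.sum((x_obs - m) ** 2 / var
                         + np.log(2.0 * np.pi * var))

def posterior_samples(n, times, bands, x_obs, sigma_obs,
                      surrogate_mags, rng=None):
    """Prior-weighted Monte Carlo: draw (M_d, v_d, M_w, v_w) from
    the priors quoted in the text (log-uniform masses, uniform
    velocities) and weight by the likelihood.  A simplified
    stand-in for the adaptive integrator used in production."""
    rng = rng or np.random.default_rng()
    masses = 10.0 ** rng.uniform(-3, -1, size=(n, 2))
    vels = rng.uniform(0.05, 0.3, size=(n, 2))
    thetas = np.column_stack([masses[:, 0], vels[:, 0],
                              masses[:, 1], vels[:, 1]])
    logL = np.array([log_likelihood(th, times, bands, x_obs,
                                    sigma_obs, surrogate_mags)
                     for th in thetas])
    weights = np.exp(logL - logL.max())
    return thetas, weights / weights.sum()
\end{verbatim}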
\subsection{Example: GW170817}

SuperNu-based kilonova models have already been successfully used to interpret GW170817,
though as noted previously these models have a rapid falloff in the late-time optical magnitudes that is not present in
the observations; see \cite{tanvir17}.
Because
of the close proximity of GW170817, only distance-modulus (but not redshift) corrections are needed to translate our
predictions to apparent magnitudes that can be directly compared to electromagnetic observations.
Observational results are taken from the compilation in \citet{2017ApJ...851L..21V} of
photometry reported in
\cite{2017Natur.551...64A,tanvir17,troja17,2017ApJ...848L..17C,2017Sci...358.1570D,2017arXiv171005841S,2017ApJ...848L..24V,2017Natur.551...67P,2017Sci...358.1559K,2017Sci...358.1574S,2017PASJ...69..101U}.
Figure \ref{fig:170817:SimulationsOnly} shows the results of directly comparing our extended simulation archive
to observations of GW170817, selecting the simulations (parameters and angles) with the highest overall likelihood. The
solid black curves in these figures show the 50 highest-likelihood configurations, where the likelihood requires
simultaneously reproducing all observed bands. Except for the three reddest bands
($JHK$), many simulations compare extremely favorably to the observations.
The parameters of these simulations, however, do not represent the optimal parameters of this model family: because our placement
algorithm minimizes interpolation error, the selected points preferentially occur at the edges of our domain.
Finally, for the reddest band ($K$), our fits exhibit notable systematic uncertainty relative to the underlying
simulation grid.

We have performed parametric inference on GW170817 using our surrogate light-curve model trained on the underlying SuperNu
results.
Motivated by the direct comparisons above, we perform two analyses. In the first, we use all observational data at all
times.
In the second, we omit the reddest ($K$) band.
Figure \ref{fig:170817:ToyModel} shows the results of these comparisons. Because of the systematic fitting
uncertainties at late times, we highlight the analysis omitting the $K$-band observations as our preferred result.
Though previously reported inferences about ejecta masses cover a considerable dynamic range (see, e.g., Fig. 1 in \cite{2019EPJA...55..203S}), our inferred masses are
qualitatively consistent with selected previous estimates, including previous inferences with similar SuperNu models
\cite{tanvir17} and recent surrogate models adapted to simplified multidimensional radiative transfer
\cite{2020Sci...370.1450D}.
Notably, however, we infer a large amount of ``dynamical'' (red, lanthanide-rich) ejecta mass (i.e.,
$M_{ej}\simeq O(1/30) M_\odot$), more dynamical ejecta than wind, and the velocities for the dynamical and wind components are inverted relative to customary expectations (i.e.,
$v_d