diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzoath" "b/data_all_eng_slimpj/shuffled/split2/finalzzoath" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzoath" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\IEEEPARstart{T}{he} neurons in our brain have a capacity to process a large amount of high dimensional data from various sensory inputs while still focusing on the most relevant components for decision making \\cite{pillow2006dimensionality} \\cite{cunningham2014dimensionality}. This implies that the biological neural networks have a capacity to perform dimensionality reduction to facilitate decision making. In the field of machine learning, artificial neural networks also require a similar capability because of the availability of massive amounts of high dimensional data being generated everyday through various sources for digital information. Thus it becomes imperative to derive an efficient method for dimensionality reduction to facilitate tasks like classification, feature learning, storage, etc. Deep generative networks such as Autoencoders have been shown to perform better than many commonly used statistical techniques such as PCA (principal component analysis), ICA (Independent Component Analysis) for encoding and decoding of high dimensional data \\cite{hinton1994autoencoders}. These networks are traditionally trained using gradient descent based on back-propagation. However it is observed that for deep networks, gradient descent doesn't converge and gets stuck in a local minima in case of purely randomized initialization \\cite{bengio2007greedy}. A solution to this problem is, weight initialization by utilizing a generative layer-by-layer training procedure based on Contrastive Divergence (CD) algorithm \\cite{hot06}.\n\nTo maximize the performance of this algorithm, a dedicated hardware implementation is required to accelerate computation speed. Traditionally CMOS based designs have been used for this by utilizing commonly available accelerator like GPUs \\cite{raina2009large}, FPGAs \\cite{kim2009highly}, ASICs \\cite{maaimm11}\\cite{stromatias2015robustness}, etc. Recently with the introduction of the emerging non-volatile memory devices such as PCM, CBRAM, OxRAM, MRAM, etc, there is further optimization possible in design of a dedicated hardware accelerators given the fact that they allow replacement of certain large CMOS blocks while simultaneously emulating storage and compute functionalities \\cite{sqcpsvgd11}\\cite{alibart2013pattern}\\cite{yang2013memristive}\\cite{de2013silicon}\\cite{jackson2013nanoscale}\\cite{vlzrbgkgq14},\\cite{wong2015memory}\\cite{burr2015experimental}\\cite{milo2016demonstration}. \n\nRecent works that present designs of Contrastive Divergence based learning using resistive memory devices are \\cite{srpj15}, \\cite{stanford_rbm_prob}. In \\cite{srpj15} the authors propose the use of a two-memristor model as a synapse to store one synaptic weight. In \\cite{stanford_rbm_prob} the authors have experimentally demonstrated a 45-synapse RBM realized with 90 resistive phase change memory (PCM) elements trained with a bio-inspired variant of the contrastive divergence algorithm, implementing Hebbian and anti-Hebbian weight update. Both these designs justify the use of RRAM devices as dense non-volatile synaptic arrays. Also both make use of a spike based programming mechanism for gradually tuning the weights. 
Negative weights have been implemented by using two devices in place of a single device per synapse. It is apparent that in order to implement more complex learning rules with larger and deeper networks, the hardware complexity and area footprint increase considerably when using this simplistic design strategy. As a result, there is a need to further increase the functionality of the RRAM devices in the design beyond simple synaptic weight storage. In \\cite{tnanoelm} we have described a design exploiting the intrinsic device-to-device variability as a substitute for the randomly distributed hidden layer weights in order to gain both area and power savings. In \\cite{rramrbm}, we have made use of another property of the RRAM devices by exploiting the cycle-to-cycle variability in device switching to create a stochastic neuron as a basic building block for a hybrid CMOS-OxRAM based Restricted Boltzmann Machine (RBM) circuit. \n\n\n\n\nIn this paper we build upon our previous work on the hybrid CMOS-OxRAM RBM with the following novel contributions: \n\\begin{itemize}\n\\item\nDesign of deep generative models (DGM) that utilize the hybrid CMOS-RRAM RBM as a building block.\n\\item\nDesign of a programmable output normalization block for stacking multiple hybrid RBMs.\n\\item\nSimulation and performance analysis of two types of DGM architectures at 8-bit synaptic weight resolution: (i) Deep Belief Networks (DBN) and (ii) Stacked Denoising Autoencoders (SDA).\n\\item\nAnalysis of learning performance (accuracy, MSE) while using only greedy layer-wise training (without backprop).\n\\item\nAnalysis of learning impact on RRAM device endurance. \n\\end{itemize}\nIn our hybrid CMOS-OxRAM DGM implementation the OxRAM devices have been exploited for four different storage and compute functions: (i) synaptic weight matrix, (ii) neuron internal state storage, (iii) stochastic neuron firing and (iv) programmable gain control block. \nSection \\ref{s2} discusses the basics of OxRAM and deep generative networks. Section \\ref{s4} describes the implementation details of our proposed hybrid CMOS-OxRAM DGM architectures. Section \\ref{s5} discusses simulation results and Section \\ref{sc} gives the conclusions.\n\n\\section{Basics of OxRAM and DGM Architectures}\n\\label{s2}\n\\subsection{OxRAM Working}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=.42]{iv_curve.png}\n\\caption{Basic IV characteristics for HfOx OxRAM device with switching principle indicated. Experimental data corresponding to device presented in \\cite{rramrbm}.}\n\\label{fig:1a}\n\\end{figure}\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[scale=.3]{resdist.png}\n\\caption{Cycle-to-Cycle ON\/OFF-state resistance distribution for HfOx device presented in \\cite{rramrbm}.}\n\\label{fig:1b}\n\\end{figure}\nOxRAM devices are two-terminal MIM-type (metal-insulator-metal) structures sandwiching an active metal-oxide based insulator layer between metallic electrodes (see Fig. \\ref{fig:1a}). The active layer exhibits reversible non-volatile switching behavior on application of appropriate programming current\/voltage across the device terminals. In the case of filamentary OxRAM devices, formation of a conductive filament in the active layer leads the device to a low-resistance (LRS\/On) SET-state, while dissolution of the filament puts the device in a high-resistance (HRS\/Off) RESET-state. The conductive filament is composed of oxygen vacancies and defects \\cite{wlycwclct12}. 
The SET-state resistance (LRS) level can be defined by controlling the dimensions of the conductive filament \\cite{sqbpvvgd13}\\cite{wlycwclct12}, which depend on the amount of current flowing through the active layer. Current flowing through the active layer is controlled either by an externally imposed current compliance or by using an optional selector device (i.e. 1R-1T\/1D configuration). \nOxRAM devices are known to demonstrate cycle-to-cycle (C2C) variability (shown in Fig. \\ref{fig:1b}) and device-to-device (D2D) variability \\cite{baeumer2017subfilamentary}\\cite{li2015variation}\\cite{ielmini2016resistive}. In our proposed architecture, we exploit OxRAM (a) C2C switching variability for realization of the stochastic neuron circuit, (b) binary resistive switching for realization of synaptic weight arrays\/neuron internal state storage and (c) SET-state resistance modulation for the normalization block. \n\n\n\\label{s3}\n\\subsection{Restricted Boltzmann Machines (RBM)}\nUnsupervised learning based on generative models has gained importance with the use of deep neural networks. Besides being useful for pre-training a supervised predictor, unsupervised learning in deep architectures can be of interest to learn a distribution and generate samples from it \\cite{bengio2009learning}. \nRBMs, in particular, are widely used as building blocks for deep generative models such as DBN and SDA. Both these models are made by stacking RBM blocks on top of each other. Training of such models using traditional back-propagation based approaches is a computationally intensive problem. Hinton et al. \\cite{hot06} showed that such models can be trained very fast through greedy layer-wise training, making the task of training deep networks based on stacking of RBMs more feasible.\n\\begin{figure}[b]\n\\centering\n \\includegraphics[scale=0.5]{rbm_basic.png}\n \\caption{Graphical representation of RBM hidden\/visible nodes.}\n \\label{fig1a}\n\\end{figure}\nEach RBM block consists of two layers of fully connected stochastic sigmoid neurons as shown in Fig. \\ref{fig1a}. The input or first layer of the RBM is called the visible layer and the second (feature detector) layer is called the hidden layer. Each RBM is trained using the CD algorithm as described in \\cite{h10}. The output layer of the bottom RBM acts as the visible layer for the next RBM.\n\n\\subsection{Stacked Denoising Autoencoder (SDA)}\nAn autoencoder network is a deep learning framework mostly used for denoising corrupted data \\cite{vincent}, dimensionality reduction \\cite{hinton1994autoencoders} and weight initialization applications. In recent years random weight initialization techniques have been preferred over the use of generative pre-training \\cite{xavier}; however, DGMs continue to be ideal candidates for dimensionality reduction and denoising applications. \nAn autoencoder network is realized using two networks:\n\\begin{enumerate}\n \\item An 'encoder' network which has layers of RBMs stacked on top of one another.\n \\item A mirrored 'decoder' network with the same weights as the encoder layers, used for data reconstruction.\n\\end{enumerate}\n\n\\begin{figure}[b]\n\\centering\n \\includegraphics[width=\\linewidth]{sda_basic.png}\n \\caption{(a) Basic RBM blocks stacked to form a deep autoencoder. (b) Denoising a noisy image using the autoencoder.}\n \\label{fig1}\n\\end{figure}\nThe stack of RBMs in the autoencoder is trained layer-wise, one RBM after the other. An 'unrolled' autoencoder network with the encoder and decoder is shown in Fig. \\ref{fig1}. 
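\nTo make the greedy layer-wise procedure concrete, a minimal software sketch of stacking RBMs trained with single-step Contrastive Divergence is given below (Python\/NumPy). The layer sizes, learning rate, number of epochs and toy data are illustrative assumptions only; the sketch uses floating-point weights and does not model the 8-bit OxRAM weight encoding or the circuit blocks described later in the paper.
\\begin{verbatim}
# Minimal sketch: greedy layer-wise stacking of RBMs trained with CD-1.
# All sizes and hyper-parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.05):
    # Train one RBM with single-step Contrastive Divergence (CD-1).
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    a = np.zeros(n_visible)   # visible biases
    b = np.zeros(n_hidden)    # hidden biases
    for _ in range(epochs):
        for v in data:        # one sample per update, for clarity
            ph  = sigmoid(v @ W + b)                 # positive phase
            h   = (rng.random(n_hidden) < ph) * 1.0  # sampled binary state
            pv  = sigmoid(h @ W.T + a)               # reconstruction
            v1  = (rng.random(n_visible) < pv) * 1.0
            ph1 = sigmoid(v1 @ W + b)
            # CD update: dW = lr*(v h^T - v' h'^T), using hidden probabilities
            W += lr * (np.outer(v, ph) - np.outer(v1, ph1))
            a += lr * (v - v1)
            b += lr * (ph - ph1)
    return W, a, b

def encode(x, stack):         # propagate data through the trained RBM stack
    for W, a, b in stack:
        x = sigmoid(x @ W + b)
    return x

def decode(x, stack):         # mirrored decoder: transposed weights, reversed
    for W, a, b in reversed(stack):
        x = sigmoid(x @ W.T + a)
    return x

# Greedy layer-wise stacking, e.g. a 784-100-40 encoder on toy binary data.
data = (rng.random((100, 784)) < 0.1) * 1.0
stack, x = [], data
for n_hidden in (100, 40):
    params = train_rbm(x, n_hidden)
    stack.append(params)
    x = encode(x, [params])   # hidden activations feed the next RBM
recon = decode(encode(data, stack), stack)
print('reconstruction MSE:', np.mean((recon - data) ** 2))
\\end{verbatim}
The trained stack plays the role of the 'encoder', while the 'decoder' simply reuses the transposed weights in reverse order, mirroring the unrolled network described above.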
\n\n\\subsection{Deep Belief Network (DBN)}\nDBNs are probabilistic generative models that are composed of multiple layers of stochastic, latent variables \\cite{hinton2009deep}. The latent variables typically have binary values and are often called hidden units or feature detectors. The top two layers have undirected, symmetric connections between them and form an associative memory. The lower layers receive top-down, directed connections from the layer above. The states of the units in the lowest layer represent a data vector. A typical DBN is shown in Fig. \\ref{fig1b} which uses a single RBM as the first two layers followed by a sigmoid belief network (logistic regression layer) for the final classification output.\n\n\\begin{figure}[b]\n\\centering\n \\includegraphics[width=0.7\\linewidth]{dbn_struct.png}\n \\caption{DBN architecture comprising of stacked RBMs}\n \\label{fig1b}\n\\end{figure}\n\nThe two most significant DBN properties are:\n\\begin{enumerate}\n\\item There is an efficient, layer-by-layer procedure for learning the top-down, generative weights that determine how the variables in one layer depend on the variables in the layer above.\n\\item After learning, the values of the latent variables in every layer can be inferred by a single, bottom-up pass that starts with an observed data vector in the bottom layer and uses the generative weights in the reverse direction.\n\\end{enumerate}\nDBNs have been used for generating and recognizing images \\cite{hot06}, \\cite{huang2007unsupervised},\\cite{bengio2007greedy}, video sequences \\cite{sutskever2007learning}, and motion-capture data \\cite{taylor2007modeling}. With low number of units in the highest layer, DBNs perform non-linear dimensionality reduction and can learn short binary codes, allowing very fast retrieval of documents or images \\cite{hinton2006reducing}.\n\n\n\\section{Implementation of Proposed Architectures}\n\\label{s4} \n\\begin{figure*}[htbp]\n \\includegraphics[width=\\linewidth]{dgm_rram_new.png}\n \\caption{(a) Individual RBM training layer architecture. RBM training block symbols, 'H', 'V' and 'S' represent hidden layer memory, visible layer memory, and synaptic network respectively. (b) Cascaded RBM blocks for realizing the proposed deep autoencoder with shared weight update module (c) Fully digital CD based weight update module. (d) Block level design of single stochastic sigmoid neuron}\n \\label{fig2}\n\\end{figure*}\nBasic building block of both SDA and DBN is the RBM. In our simulated architectures, within a single RBM block OxRAM devices are used for multiple functionalities. The basic RBM block (shown in Fig. \\ref{fig2}(a)) is replicated, with the hidden layer memory states of the first RBM acting as visible layer memory for the next RBM block and so on (Fig. \\ref{fig2}(b)). All RBM blocks have a common weight update module described in Section \\ref{CDup}. Post training, the learned synaptic weights along with the sigmoid block can be used for reconstructing the test data. Architecture sub-blocks consist of:\n\n\\subsection{Synaptic Network}\nSynaptic network of each RBM block was simulated using a 1T-1R HfOx OxRAM matrix. Each synaptic weight is digitally encoded in a group of binary switching OxRAM devices, where the number of devices used per synapse depends on the required weight resolution. For all architectures simulated in this work we have used 8-bit resolution (i.e. 
8 OxRAM devices per synapse).\n\n\\subsection{Stochastic Neuron Block}\n\nFig. \\ref{fig2}(d) shows the stochastic sigmoid neuron block. Each neuron (hidden or visible) has a sigmoid response, which was implemented using a low-power 6-T sigmoid circuit (\\cite{suri5}). The gain of the sigmoid circuit can be tuned by optimizing the scaling of the six transistors. The voltage output of the sigmoid circuit is compared with the voltage drop across the OxRAM device with the help of a comparator. The HfOx based device is repeatedly cycled ON\/OFF. C2C intrinsic $R_{ON}$ and $R_{OFF}$ variability of the OxRAM device leads to a variable reference voltage for the comparator. This helps to translate the deterministic sigmoid output to a neuron output, which is effectively stochastic in nature. At any given moment, a specific neuron's output determines its internal state, which needs to be stored for RBM driven learning. The neuron internal state is stored using individual OxRAM devices placed after the comparator. A single OxRAM per neuron is sufficient for state storage, since the RBM requires each neuron to have only a binary activation state.\n\n\n\n\\subsection{CD Weight Update Block}\n\\label{CDup}\nThe weight update module is a purely digital circuit that reads the synaptic weights and internal neuron states. It updates the synaptic weights during learning based on the CD RBM algorithm (\\cite{h10}). The block consists of an array of weight update circuits, one of which is shown in Fig. \\ref{fig2}(c). The synaptic weight is updated by $\\bigtriangleup W_{ij}$ (\\ref{eqcd}), based on the previous (v, h) and current (v', h') internal neuron states of the mutually connected neurons in the hidden and visible layers. CD is realized using two AND gates and a comparator (having outputs -1, 0, +1). The input to the first AND gate is the previous internal neuron states, while the input to the second AND gate is the current internal neuron states. Based on the comparator output, ${\\epsilon}$ (learning rate) will either be added, subtracted, or not applied to the current synaptic weight ($W_{ij}$). \n\n\\begin{equation}\n\\label{eqcd}\n\\bigtriangleup W_{ij}= \\epsilon (vh^T-v^{'}h^{'T})\n\\end{equation}\n\\subsection{Output Normalization block}\n\\begin{figure*}[htbp]\n\\centering\n \\includegraphics[width=\\textwidth]{gain_ckt_new}\n \\caption{Programmable normalization circuit: (a) Circuit schematic, (b) Gain variation w.r.t. variation in OxRAM resistance state}\n \\label{fig_norm}\n\\end{figure*}\n\n\nIn order to chain the mixed-signal RBM blocks, we need to ensure that the signal output at each layer has sufficient dynamic range, so that the signal does not deteriorate as the network depth increases. For this purpose we propose a hybrid CMOS-OxRAM programmable normalization circuit (see Fig. \\ref{fig_norm}) whose gain and bias can be tuned based on OxRAM resistance programming.\n\nThe circuit schematic of the programmable normalization block is shown in Fig. \\ref{fig_norm}(a). In order to check the variation in gain, we have considered programming the OxRAM in three different SET states ($\\sim$ 3.2 k$\\Omega$, 6.6 k$\\Omega$, and 22.6 k$\\Omega$).\n\nA differential amplifier consisting of a DC gain control circuit and a biasing circuit is used to implement the normalization function. A two stage amplifier consisting of transistors N3, N4, N5, N6, P3, P4, P5, P6, and P7 is used. The DC gain of the circuit is controlled using a constant $g_m$ circuit whose output is fed into N3. 
The constant $g_m$ circuit consists of transistors N1, N2, P1, P2 and one OxRAM. Based on the OxRAM resistance, $g_m$ of the circuit can be changed thereby changing the output potential. This affects $V_{gs}$ of N3 thereby controlling the gain of the circuit. \n\nTo validate the design, we performed simulation of the circuit using an OxRAM device compact model (\\cite{li2015variation}) and 90 nm CMOS design kit. The simulated variation in the gain of the circuit based on the resistance state of the OxRAM is shown in Fig. \\ref{fig_norm}(b). Gain control through OxRAM programming was found to be more prominent at higher operating frequencies. \n\nBias control is implemented by a potential divider circuit ($R_f$ and the OxRAM). The potential divider circuit determines the potential across $V_g$ of P6. Input $V_2$ is swept from 0 V to 1 V. If the potential across P6 increases for a fixed $V_2$ the output switching voltage also increases thereby controlling the bias of the output.\n\n\\section{Deep Learning Simulations and Results}\n\\label{s5}\nSimulations of the proposed architectures (DBN, SDA) were performed in MATLAB. Both generative networks with CD algorithm and behavioral model of all blocks described in section \\ref{s4} were simulated. Stochastic sigmoid neuron activation and normalization circuits were simulated in Cadence Virtuoso using 90 nm CMOS design kit and Verilog-A OxRAM compact model \\cite{li2015variation}.\n\n\\subsection{Stacked Denoising Autoencoder performance analysis}\nWe trained two autoencoder networks each having the same number of neurons in the final encoding layer, but varying levels of depth, and compared their denoising performance (see Fig. \\ref{fig8}). In each network a single synaptic weight was realized using 8 OxRAM devices (8-bit resolution). All neurons have a logistic activation except for the last ten units in the classification layer, which are linear. The networks were trained on a reduced MNIST dataset of 5000 images and tested for denoising 1000 new salt-and-pepper noise corrupted images (see Fig. \\ref{fig8}).\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=\\linewidth]{sda_results.png}\n\\caption{(a) 3-layer deep SDA-1, (b) 5-layer deep SDA-2. Denoising results of 100 corrupted MNIST images for: (c) SDA1 and (d) SDA2.}\n\\label{fig8}\n\\end{figure}\n\n\\begin{table}[!htbp]\n\\caption{Proposed Autoencoder performance for reduced MNIST}\n\\label{mnist1}\n\\begin{center}\n\n\n \\begin{tabular}{|c|c|c|}\n \\hline\n Network & Implementation & MSE \\bigstrut\\\\\n \\hline\n \\multirow{2}[4]{*}{784x100x784} & Software & 0.010 \\bigstrut\\\\\n\\cline{2-3} & Hybrid OxRAM SDA & 0.003 \\bigstrut\\\\\n \\hline\n \\multirow{2}[4]{*}{784x100x40x100x784} & Software & 0.049 \\bigstrut\\\\\n\\cline{2-3} & Hybrid OxRAM SDA & 1.095 \\bigstrut\\\\\n \\hline\n \\end{tabular}%\n\\end{center}\n\\end{table}\nTable \\ref{mnist1} presents the learning performance of the proposed SDA-1 and SDA-2. Increasing depth in the network was not useful with the current learning algorithm and tuning parameters. \n\n\\subsection{Deep Belief Network performance analysis}\nWe simulated two deep belief network architectures shown in Fig. \\ref{fig6b}. (4 and 5 layer variants) Performance of the network was measured by testing on 1000 samples from the reduced MNIST dataset. The results for the same are shown in Table \\ref{tab:dbn}. 
We measured test accuracy using 3 parameters :\n\\begin{enumerate}\n\\item Top 1 accuracy : correct class corresponds to output neuron with highest response. \n\\item Top 3 accuracy : correct class corresponds to the top 3 output neurons with highest response.\n\\item Top 5 accuracy : correct class corresponds to the top 5 output neurons with highest response.\n\\end{enumerate}\n\nFrom Table \\ref{tab:dbn}, the performance of simulated Hybrid CMOS-OxRAM DBN matches closely with software based accuracy (2-3\\% lower) for a DBN formed with 2 RBMs. There is a significant drop in test accuracy for the DBN with 3 RBMs. This is acceptable as the goal of the greedy layer-wise training is to pre-train the network to a good state before using back-propagation to allow faster convergence. Thus lower accuracy after layer-wise training for a deeper network is acceptable as the weights would be further optimized using back-propagation. \n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=\\linewidth]{deep_net_dbn_rbm.png}\n \\caption{Simulated 4 and 5 layer DBN architecture.}\n \\label{fig6b}\n\\end{figure}\n\n\\begin{table}[htbp]\n \\centering\n \\caption{Proposed DBN Performance for Reduced MNIST}\n\\scalebox{0.75}{\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n \\multirow{2}[4]{*}{Network} & \\multirow{2}[4]{*}{Implementation} & \\multicolumn{3}{c|}{Test accuracy} \\bigstrut\\\\\n\\cline{3-5} & & Top-1 & Top-3 & Top-5 \\bigstrut\\\\\n \\hline\n \\multirow{2}[8]{*}{784x100x40x10} & Software & 93.10\\% & 98.70\\% & 99.40\\% \\bigstrut\\\\\n\\cline{2-5} & Hybrid OxRAM DBN & 78.70\\% & 95.50\\% & 98.80\\% \\bigstrut\\\\\n \\hline\n \\multirow{2}[8]{*}{784x160x80x40x10} & Software & 93.70\\% & 98.50\\% & 99.40\\% \\bigstrut\\\\\n\\cline{2-5} & Hybrid OxRAM DBN & 21.30\\% & 61.40\\% & 79.60\\% \\bigstrut\\\\\n \\hline\n \\end{tabular}%\n }\n \\label{tab:dbn}%\n\\end{table}%\n\n\n\\subsection{Tuning $V_{OxRAM}$ amplifying gain}\nThe sigmoid activation circuits in the network use a gain factor in order to balance for the low current values obtained as a result of the OxRAM device resistance values. If the amplification is low it will lead to saturation and the network will not learn a proper reconstruction of the data. This necessitates proper tuning of the amplifier gain for effective learning. In our architecture, amplifier gain for $V_{OxRAM}$ is an important hyper-parameter along with the standard ones (momentum, decay rate, learning rate, etc.) and is different for each consecutive pair of layers. A higher dimensional input to a layer will require a lower amplifying gain for $V_{OxRAM}$ and vice-versa.\n\n\\subsection{Switching activity analysis for the Proposed architecture}\n\\label{s6}\nResistive switching of OxRAM devices is observed in following sections of the architecture:\n\\begin{enumerate}\n\\item Synaptic matrix\n\\item Stochastic neuron activation\n\\item Internal neuron state storage. \n\\end{enumerate}\nRRAM devices suffer from limited cycling endurance ($\\sim$ 0.1 million cycles) \\cite{balatti2014pulsed}.For stochastic neuron activation, the OxRAM device is repeatedly cycled to OFF state and the voltage drop across the device is used to generate the stochastic signal fed to one of the comparator inputs. Thus the neuron activation block related switching activity depends on the number of data samples as well as number of epochs. 
The maximum switching per device for any layer can be estimated using (\\ref{eq1}):\n\n\\begin{equation}\n\\label{eq1}\nN_{events} = N_{epochs} \\times N_{samples} \\times N_{batch}\n\\end{equation}\nAnother part of the architecture where the OxRAM device may observe a significant number of switching events is the synaptic matrix. Since we are interested in device endurance, we consider the worst case, i.e. the maximum number of hits a particular OxRAM device will take during the entire weight update procedure. For the worst case analysis we make the following assumption:\n\\begin{itemize} \n\\item While bit encoding the synaptic weight (4 or 8 or 16 bits), there exists an OxRAM device that is switched every single time. \n\\end{itemize}\nThus the maximum possible number of hits a device would take during the synaptic weight update procedure can be estimated using (\\ref{eq2}): \n\n\\begin{equation}\n\\label{eq2}\nN_{switch\\,events}=N_{batch} \\times N_{epochs}\n\\end{equation}\nSimulated switching activity for reduced MNIST training for each neuron layer and synaptic matrix is shown in Table \\ref{table:sw} and Table \\ref{sw_act}, corresponding to the SDA and DBN architectures respectively. Key observations can be summarized as:\n\\begin{itemize}\n\\item Increasing the depth of the network increases the amount of switching for hidden layers.\n\\item Increasing the depth of the network does not have a significant impact on the switching events in the synaptic matrix.\n\\end{itemize}\n\n\\begin{table}\n\\centering\n\\caption{Maximum OxRAM switching activity for 5 layer SDA (training)} \n\\begin{tabular}{|c|c|}\n\\hline\nDevice placement & Max Switching activity\\\\ \n\\hline\nL1-784 & 596\\\\ \n\\hline\nL2-100 & 3074\\\\ \n\\hline\nL3-40 & 542\\\\ \n\\hline\nW1 & 6808\\\\ \n\\hline\nW2 & 5000\\\\ \n\\hline\n\\end{tabular}\n\\label{table:sw}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\caption{Maximum OxRAM switching activity for 5 layer DBN (training)} \n\\begin{tabular}{|c|c|}\n\\hline\nDevice placement & Max Switching activity\\\\ \n\\hline\nL1-784 & 596\\\\ \n\\hline\nL1-784 & 428 \\\\\n\\hline\nL2-160 & 2069 \\\\\n\\hline\nL3-80 & 3026 \\\\\n\\hline\nL4-40 & 420 \\\\\n\\hline\nW1 & 6798 \\\\\n\\hline\nW2 & 5000 \\\\\n\\hline\nW3 & 2500 \\\\\n\\hline\nW4 & 2500 \\\\\n\\hline\n\\end{tabular}\n\\label{sw_act}\n\\end{table}\n\n\\section{Conclusion}\n\\label{sc}\nIn this paper we proposed a novel methodology to realize DGM architectures using a mixed-signal hybrid CMOS-RRAM design framework. We achieve deep generative models by proposing a strategy to stack multiple RBM blocks. \nThe overall learning rule used in this study is based on greedy layer-wise learning with no back-propagation, which allows the network to be trained to a good pre-training stage. RRAM devices are used extensively in the proposed architecture for multiple computing and storage actions. The total RRAM requirement for the largest simulated network was 139 kB for the DBN and 169 kB for the SDA.\nThe simulated architectures show that the performance of the proposed DGM models matches closely with software based models for networks two RBMs deep. The top-3 test accuracy achieved by the DBN for reduced MNIST was $\\sim$ 95.5\\%. The MSE of the SDA network was 0.003. Endurance analysis shows reasonable maximum switching activity. Future work would focus on realizing an optimal strategy to implement back-propagation with the proposed architecture to enable complete training of the DGM on the proposed hybrid architecture. 
\n\n\\section*{Acknowledgement}\nThis research activity under the PI Prof. M. Suri is partially supported by the Department of Science \\& Technology (DST), Government of India and IIT-D FIRP Grant. Authors would like to express gratitude to S. Chakraborty. The authors would like to thank F. Alibart and D. Querlioz for the HfOx device data.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\nWith the advent of large-scale automated time-series photometric\nsurveys to search for microlensing events, transiting planets, and\nsupernovae, well-sampled light curves for millions of stars have been,\nand are continuing to be, collected. While the primary scientific\ngoals of these surveys are to search for rare phenomena, the enormous\ndatabases of light curves that are a by-product of these surveys\npresents a great opportunity to study many other topics related to\nstellar variability \\citep[e.g.][]{Paczynski.97}. With some exceptions\n\\citep[e.g.][]{Hartman.04,Creevey.05,Parley.06,Norton.07, Beatty.07,\n Karoff.07, Shporer.07, Fernandez.09} the copious data produced by\ndedicated wide-field transit surveys has been relatively\nunder-utilized for studying topics unrelated to transiting planets. In\nthis paper we use data from the Hungarian-made Automated Telescope\nNetwork \\citep[HATNet;][]{Bakos.04} project to study the variability\nof probable K and M dwarf field stars.\n\nCombining photometric observations with proper motion measurements is\nan effective method for selecting nearby dwarf stars. This technique\nis routinely used in the search for cool stars in the solar\nneighborhood \\citep[e.g.][]{Reyle.04}, and has been suggested as an\neffective method for screening giants from transit surveys\n\\citep{Gould.03}. By selecting red, high proper motion stars, it is\npossible to obtain a sample that consists predominately of nearby K\nand M-dwarfs, with very few luminous, distant giants. Whereas a\ngeneral variability survey of Galactic field stars will yield a\nm\\'{e}lange of objects that are often difficult to classify without\ndetailed follow-up \\citep[e.g.][]{Hartman.04, Shporer.07}, by focusing\na survey on red, high proper motion stars, one can narrow in on a few\nspecific topics related to low-mass stars.\n\nMain-sequence stars smaller than the Sun are not known to exhibit\nsignificant pulsational instabilities, but they may exhibit the\nfollowing types of photometric variability: 1. Variability due to\nbinarity (either eclipses or proximity effects such as ellipsoidal\nvariability). 2. Variability due to the rotational modulation or\ntemporal evolution of starspots. 3. Flares. We discuss each type of\nvariability, and what might be learned from studying it, below.\n\n\\subsection{Low-Mass Eclipsing Binaries}\nIn recent years there has been an increasing number of EBs discovered\nwith K and M dwarf components \\citep[see the list of 13 such binaries\n compiled by][note that most of these were found in the last 5\n years]{Devor.08}. From these discoveries it has become clear that\nthe radii of early M dwarfs and late K dwarfs are somewhat larger than\npredicted by theory \\citep[the number typically stated is\n 10\\%;][]{Torres.02,Ribas.03,Lopez-Morales.05,Ribas.06,Beatty.07,Fernandez.09}.\n\nMost of the binaries found to date have periods shorter than a few\ndays and are expected to have rotation periods that are tidally\nsynchronized to the orbital period. 
The rapid rotation in turn yields\nenhanced magnetic activity on these stars compared to isolated, slowly\nrotating stars. It has been suggested that the discrepancy between\ntheory and observation for these binary star components may be due to\ntheir enhanced magnetic activity inhibiting convection.\n\\citep{Ribas.06,Torres.06,Lopez-Morales.07,Chabrier.07}. Support for\nthis hypothesis comes from interferometric measurements of the\nluminosity-radius relation for inactive single K and M dwarfs which\nappears to be in agreement with theoretical predictions\n\\citep{Demory.09}. There is also evidence that the discrepancy may be\ncorrelated with metallicity \\citep{Lopez-Morales.07}. Testing these\nhypotheses will require finding additional binaries spanning a range\nof parameters (mass, rotation\/orbital period, metallicity, etc.). \n\nAs transiting planets are discovered around smaller stars, the need\nfor models that provide accurate masses and radii for these stars has\nbecome acute. For example, the errors in the planetary parameters for\nthe transiting Super-Neptune GJ 436b appear to be dominated by the\nuncertainties in the stellar parameters of the $0.45~M_{\\odot}$\nM-dwarf host star \\citep[][note that the author gives errors in the\n mass and radius for the star of $\\sim 3\\%$ assuming that the\n theoretical models for the luminosity of M-dwarfs are accurate while\n making no such assumption about the radius]{Torres.07}. Having a\nlarge sample of low mass stars with measured masses and radii would\nenable the determination of precise empirical relations between the\nparameters for these stars.\n\n\\subsection{Stellar Rotation}\nThe rotation period is a fundamental measurable property of a\nstar. For F, G, K and early M main sequence stars there is a\nwell-established relation between rotation, magnetic activity and age\n\\citep[e.g.][]{Skumanich.72,Noyes.84,Pizzolato.03}. In addition to\nilluminating aspects of stellar physics, this relation in practice\nprovides the best method for measuring the ages of field main sequence\nstars \\citep[][]{Mamajek.08,Barnes.07}.\n\nFor late M-dwarfs the picture is less clear. From a theoretical\nstandpoint one might expect that fully convective stars should not\nexhibit a rotation-activity connection if the connection is due to the\n$\\alpha\\Omega$-dynamo process, which operates at the interface of the\nradiative and convective zones in a star, and is thought to generate\nthe large scale magnetic fields in the Sun \\citep[see for example the\n discussion by][]{Mohanty.03}. Nonetheless, several studies have found\nevidence that the rotation-activity connection continues to very late\nM-dwarfs \\citep{Delfosse.98, Mohanty.03, Reiners.07}. In these studies\nthe rotation period is inferred from the projected rotation velocity\n$v \\sin i$, which is measured spectroscopically, while the degree of\nmagnetic activity is estimated by measuring either the H$\\alpha$\nemission or X-ray emission. Rotation studies of this sort suffer both\nfrom the inclination axis ambiguity, and from low sensitivity to slow\nrotation. In practice it is very difficult to measure $v \\sin i$\nvalues less than $\\sim 1$~km\/s, which generally means that it is only\npossible to place lower limits on the period for late M-dwarf stars\nwith periods longer than $\\sim 10~{\\rm days}$. 
Moreover, because\nlow-mass stars are intrinsically faint, these studies require large\ntelescopes to obtain high-resolution, high S\/N spectra, so that\ntypically only a few tens of stars are studied at a time.\n\nThere are two techniques that have been used to directly measure\nstellar rotation periods. The first technique, pioneered by\n\\citet{Wilson.57}, is to monitor the emission from the cores of the\nCaII H and K lines, searching for periodic variations. The venerable\nMount-Wilson Observatory HK project has used this technique to measure\nthe rotation periods of more than 100 slowly rotating dwarfs and giant\nstars\n\\citep[][]{Wilson.78,Duncan.91,Baliunas.95,Baliunas.96}. Alternatively,\nif a star has significant spot-coverage it may be possible to measure\nits rotation period by detecting quasi-periodic variations in its\nbroad-band photometric brightness. Studies of this sort have been\ncarried out in abundance for open clusters \\citep[e.g.][]{Radick.87}\nas well as for some field stars \\citep[e.g.][]{Strassmeier.00}. While\nthere are rich samples of rotation periods for K and M dwarfs in open\nclusters with ages $\\la 600~{\\rm Myr}$\n\\citep{Irwin.06,Irwin.07,Irwin.09a,Meibom.09,Hartman.09}, the data for\nolder K and M dwarfs is quite sparse. As such, there are few\nobservational constraints on the rotational evolution of these stars\nafter $\\sim 0.5~{\\rm Gyr}$.\n\nUnlike spectroscopic studies, photometric surveys may yield rotation\nperiods for hundreds of stars at a time. There are, however, some\ndrawbacks to these surveys. The spot distribution on the surface of a\nstar may in general be quite complex, so the resulting signal in the\nlight curve will not always take a simple form. Since the number of\nbrightness minima per cycle is not known a priori, there is a risk\nthat the true rotation period may be a harmonic of the measured\nperiod. Spots on the Sun come and go on time-scales shorter than the\nSolar rotation period, and indeed stellar light curves also exhibit\nsecular trends. For long period stars the measured variation\ntime-scale may actually correspond to a spot evolution time-scale\nrather than the rotation period of the star. For short period stars\nthere may be difficulties in distinguishing spot modulation from\nbinarity effects (though for these stars the rotation period is\nexpected to be tidally synchronized to the orbital period).\n\nDespite these caveats, given the existing uncertainties in the\nrotation-activity connection for low mass stars and the potential to\nuse rotation as a proxy for age, a large, homogeneously collected\nsample of photometric rotation periods for field K and M dwarfs could\npotentially be of high value.\n\n\\subsection{Flares}\nFlaring is known to be a common phenomenon among K and M\ndwarfs. Studies of open cluster and field flare stars have shown that\nthe frequency of flaring increases with decreasing stellar mass, and\ndecreases with increasing stellar age \\citep[e.g.][]{Ambartsumyan.70,\n Mirzoyan.89}. Significant flaring on these low-mass dwarfs is likely\nto impact the habitability of any planets they may harbor\n\\citep[e.g.][]{Kasting.93,Lammer.07,Guinan.09}, so determining the\nfrequency of flares, and its connection with other stellar properties\nsuch as rotation, has important implications for the study of\nexoplanets.\n\n\\subsection{The HATNet Survey}\nTo address these topics we use data from HATNet to conduct a\nvariability survey of K and M dwarfs. 
The on-going HATNet project is a\nwide-field search for transiting extrasolar planets (TEPs) orbiting\nrelatively bright stars. The project employs a network of 7 robotic\ntelescopes (4 in Arizona at Fred Lawrence Whipple Observatory, 2 in\nHawaii on the roof of the Sub-Millimeter Array at Mauna Kea\nObservatory, and 1 in Israel at Wise Observatory; the latter is\nreferred to as WHAT, see \\citealp{Shporer.09}) which have been used to\nobtain some $\\sim 700,000$ images covering approximately $10\\%$\nof the sky. The survey has generated light curves for approximately\n2.5 million stars, from which $\\sim 900$ candidate TEPs have been\nidentified. To date, the survey has announced the discovery of 12\nTEPs, including HAT-P-11b \\citep{Bakos.09}, a Super-Neptune\n($0.08~M_{J}$) planet that is the smallest found so far by a\nground-based transit survey. While the primary focus of the HATNet\nproject has been the discovery of TEPs, some results not related to\nplanets have also been presented. These include the discovery and\nanalysis of a low-mass M dwarf in a single-lined eclipsing binary (EB)\nsystem \\citep{Beatty.07}, and searches for variable stars in two\nHATNet\/WHAT fields \\citep{Hartman.04, Shporer.07}.\n\n\\subsection{Overview of the Paper}\nThe structure of the paper is as follows. In \\S~\\ref{sec:data} we\ndescribe the HATNet photometric data and select the sample of\nfield K and M dwarfs. In \\S~\\ref{sec:selection} we discuss our methods\nfor selecting variable stars. We estimate the degree of blending for\npotential variables in \\S~\\ref{sec:blend}. We match our catalog of\nvariables to other catalogs in \\S~\\ref{sec:match}. We discuss the\nproperties of the variables in \\S~\\ref{sec:discussion} including an\nanalysis of one of the EB systems found in the survey. We conclude in\n\\S~\\ref{sec:conclusion}. In appendix~\\ref{sec:mcsimulations} we\ndescribe the Monte Carlo simulations used in establishing our\nvariability selection thresholds, while in appendix~\\ref{sec:cat} we\npresent the catalog of variable stars.\n\n\\section{Observational Data}\\label{sec:data}\n\n\\subsection{HATNet Data}\\label{sec:hatnet}\n\nThe HATNet project, which has been in operation since 2003, uses a\nnetwork of 7 small (11\\,cm aperture), autonomous telescopes to obtain\ntime-series observations of stars. For details on the system design\nsee \\citet{Bakos.04}; here we briefly review a few points that are\nrelevant to the survey presented in this paper. Prior to 2007\nSeptember each telescope employed a 2K$\\times$2K CCD and a Cousins\n$I_{C}$ filter \\citep{Cousins.76}. The 2K$\\times$2K CCDs covered an\n$8.2\\degr \\times 8.2\\degr$ field of view (FOV) at a pixel scale of\n$14\\arcsec$. With these CCDs, stars with $7.5 \\la I_{C} \\la 14.0$ were\nobserved with a typical per-image photometric precision of a few mmag\nat the bright end, 0.01 mag at $I_{C} \\sim 11$, and 0.1 mag at $I_{C}\n\\sim 13.5$. After this date the telescopes were refitted with\n4K$\\times$4K CCDs and Cousins $R_{C}$ filters. The new CCDs cover a\n$10.6\\degr \\times 10.6\\degr$ FOV at a pixel scale of $9\\arcsec$. With\nthese CCDs, stars with $8.0 \\la R_{C} \\la 15.0$ are observed with a\ntypical per-image photometric precision of a few mmag at the bright\nend, 0.01 mag at $R_{C} \\sim 12$, and 0.1 mag at $R_{C} \\sim 15$. 
The\nexact magnitude limits and precision as a function of magnitude vary\nwithin a field due to vignetting, and from field to field due to\ndifferences in the reduction procedure used and in the degree of\nstellar crowding. In 2008 September the filters were changed to Sloan\n$r$, though we do not include any observations taken through the new\nfilters in the survey presented here. For both CCD formats, the\ntypical full width at half maximum (FWHM) of the point spread function\n(PSF) is $\\sim 2$ pixels (i.e. $\\sim 30\\arcsec$ for the 2K fields and\n$\\sim 20\\arcsec$ for the 4K fields).\n\nThe data for this survey comes from 72 HATNet fields with declinations between $+15\\degr$ and $+52\\degr$. These fields are\ndefined by dividing the sky into 838 $7.5\\degr \\times 7.5\\degr$\ntiles. The survey covers approximately 4000 square degrees, or roughly\n10\\% of the sky.\n\nThe data reduction pipeline has evolved over time, as such the fields\nstudied in this survey have not all been reduced in a uniform\nmanner. For simplicity we choose to use the available light curves as\nis, rather than re-reducing the fields in a consistent manner\noptimized for finding variable stars rather than TEPs. Both aperture\nphotometry (AP) and image subtraction photometry (ISM) have been used\nfor reductions. Both pipelines were developed from scratch for\nHATNet. See \\citet{Pal.09} for detailed descriptions of both methods.\n\nFor both pipelines the Two Micron All-Sky Survey\n\\citep[2MASS;][]{Skrutskie.06} is used as the astrometric\nreference. The astrometric solutions for the images are determined\nusing the methods described by \\citet{Pal.06}~and\n\\citet{Pal.09}. Photometry is performed at the positions of 2MASS\nsources transformed to the image coordinate system. For each resulting\nlight curve the median magnitude is fixed to the $I_{C}$ or $R_{C}$\nmagnitude of the source based on a transformation from the 2MASS\n$J$,$H$ and $K_{S}$ magnitudes. For the ISM reduction this magnitude\nis also used as the reference magnitude for each source in converting\ndifferential flux measurements into magnitudes.\n\nBoth the AP and ISM pipelines produce light curves that are calibrated\nagainst ensemble variations in the flux scale (for AP this is done as\na step in the pipeline, for ISM this is an automatic result of the\nmethod). For each source, light curves are obtained using three\nseparate apertures. The set of apertures used has changed over time;\nthe most recent reductions use aperture radii of 1.45, 1.95 and 2.35\npixels. Following the post-processing routines discussed below, we\nadopt a single ``best'' aperture for each light curve.\n\nThe calibrated light curves for each aperture are passed through two\nroutines that remove systematic variations from the light curves that\nare not corrected in calibrating the ensemble. The first routine (EPD)\ndecorrelates each light curve against a set of external parameters\nincluding parameters describing the shape of the PSF, the sub-pixel\nposition of the star on the image, the zenith angle, the hour angle,\nthe local sky background, and the variance of the background\n\\citep[see][]{Bakos.09}. After applying EPD, the light curves are then\nprocessed with the Trend-Filtering Algorithm \\citep[TFA;][]{Kovacs.05}\nwhich decorrelates each light curve against a representative sample of\nother light curves from the field. 
The number of template light curves\nused differs between the fields, typically the number is $\\sim 8\\%$ of\nthe total number of images for that field. In applying the TFA routine\nwe also perform $\\sigma$-clipping on the light curves since this\ngenerally reduces the number of false alarms when searching for\ntransits. For the remainder of the paper we will refer to light curves\nthat have been processed through EPD only, without application of TFA,\nas EPD light curves, and will refer to light curves that have been\nprocessed through both EPD and TFA as TFA light curves. We note that\nfor some fields the EPD light curves were not stored and only TFA\nlight curves are available.\n\nBoth of these algorithms tend to improve the signal to noise ratio of\ntransit signals in the light curves, but they may distort the light\ncurves of stars that show large-amplitude, long-period, continuous\nvariability. Additionally the decorrelation against the zenith and\nhour angles in the EPD routine will tend to filter out real variable\nstar signals with periods very close to a sidereal day or an integer\nmultiple of a sidereal day. The TFA routine in particular may distort\nlong-period signals while increasing the signal to noise ratio of\nshort-period signals. For this reason we analyze both the EPD and TFA\nlight curves, when available, to select variable stars\n(\\S~\\ref{sec:selection}). We note that for the analysis in this paper\nwe do not use the signal-reconstruction mode TFA presented by\n\\citet{Kovacs.05}. Once a signal is detected, TFA can be run in this\nmode to obtain a trend-filtered light curve that is free of signal\ndistortions, however for signal detection one must use general TFA\nsince the signal is not known a priori.\n\nFinally, an optimal aperture is chosen for\neach star. For stars fainter than a fixed limit the smallest aperture\nis used (to minimize the sky noise), for brighter stars the aperture\nwith the smallest root-mean-square (RMS) light curve is used.\n\n\\subsection{Composite Light Curves}\\label{sec:lightcurve}\n\nBecause the separation between the HATNet field centers is smaller\nthan the FOV of the HATNet telescopes for both the 2K and 4K CCDs,\nsome stars are observed in multiple fields. These stars may have more\nthan one light curve, which we combine into composite light curves. In\nmaking a composite light curve we subtract the median magnitude from\neach component light curve. For fields reduced with both ISM and AP we\nuse the light curve with the lowest RMS, and in the case of equal RMS\nwe use the ISM light curve. Note that the composite light curve for a\nstar may include a mix of $I_{C}$ and $R_{C}$ photometry. While the\namplitude of variability may be different from filter to filter, the\nperiod and phasing for variations due to eclipses or the rotational\nmodulation of starspots will be independent of bandpass. For\nsimplicity we do not allow for independent amplitudes of different\nfilters in searching for variability using the methods described in\n\\S~\\ref{sec:selection}; we do not expect this to make a significant\ndifference to period detections. 
However, we note that a simple\nmerging of photometry from different filters may result in spurious\nside lobes in the power spectrum; for a more detailed analysis of\nindividual objects this effect should be considered.\n\n\\subsection{Selection of the K and M Dwarf Sample}\\label{sec:otherdat}\n\nTo select the sample of stars that are probable K and M dwarfs we\napply cuts on the proper motion and on the color. Proper motion\nmeasurements are taken from the PPM-Extended catalog\n\\citep[PPMX;][]{Roser.08} which provides proper motions with\nprecisions ranging from 2~mas\/yr to 10~mas\/yr for 18 million stars\nover the full sky down to a limiting magnitude of $V \\sim 15.2$. The\nPPMX catalog provides complete coverage of the HATNet survey for stars\nwith $V-I_{C} \\la 1.2$ for the 2K fields and for stars with $V - R \\la\n0.7$ for the 4K fields. For stars redder than these limits, the faint\nlimit of HATNet is deeper than the faint limit of PPMX. We select all\nstars from this catalog with a proper motion $\\mu > 30~{\\rm\n mas\/yr}$. For the color selection we use the 2MASS $JHK_{S}$\nphotometry and, where available, $V$-band photometry from the PPMX\ncatalog, which is taken from the Tycho-2 catalog \\citep{Hog.00} and\ntransformed to the Johnson system by \\citet{Kharchenko.01}. Only $\\sim\n4\\%$ of the stars have $V$ photometry given in the PPMX catalog, for\nthe majority of stars that do not have $V$ photometry we calculate an\napproximate $V$ magnitude using\n\\begin{equation}\\label{eq:2MASSVtrans}\nV = -0.0053 + 3.5326J + 1.3141H - 3.8331K_{S}.\n\\end{equation}\nwhich is determined from 590 Landolt Standard stars \\citep{Landolt.92}\nwith 2MASS photometry, and is used internally by the HATNet project to\nestimate the $V$ magnitudes of transit candidates for follow-up\nobservations. We then select stars with $V-K_{S} > 3.0$ which\ncorresponds roughly to stars with spectral types later than K6\n\\citep{Bessell.88}. This selects a total of \\ensuremath{471,970}{} stars from\nthe PPMX catalog, of which \\ensuremath{33,177}{} fall in a reduced HATNet\nfield; of these \\ensuremath{32,831}{} have a HATNet light curve\ncontaining more than 1000 points.\n\nNote that extrapolating a $V$ magnitude from near-infrared photometry\nis not generally reliable to more than a few tenths of a magnitude. We\ntherefore also obtained $V$ magnitudes for stars in our sample by\nmatching to the USNO-B1.0 catalog \\citep{Monet.03}. This matching was\ndone after the variability search described in \\S~\\ref{sec:selection};\nwe choose not to redo the sample selection and the subsequent\nvariability selection. Note that low-mass sub-dwarfs, which have\nanomalously blue $V-K_{S}$ values, will pass a selection on $V-K_{S} >\n3.0$ computed using eq.~\\ref{eq:2MASSVtrans} to extrapolate the $V$\nmagnitude, while they would not necessarily pass a selection using the\nmeasured value of $V-K_{S}$. To transform from the photographic\n$B_{U}$, $R_{U}$ magnitudes in the USNO-B1.0 catalog to the $V$-band\nwe use a relation of the form:\n\\begin{equation}\\label{eq:BURUVtrans}\nV = aB_{U} + bB_{U}^2 + cR_{U} + dR_{U}^2 + eB_{U}R_{U} + f\n\\end{equation}\nwith coefficients given separately in table~\\ref{tab:fitparams} for\nthe $(B_{U,1},R_{U,1})$, $(B_{U,1},R_{U,2})$, $(B_{U,2},R_{U,1})$ and\n$(B_{U,2},R_{U,2})$ combinations. These transformations were\ndetermined using $\\sim 1100$ stars with both $V$ photometry in the PPMX\ncatalog and USNO-B1.0 $(B_{U},R_{U})$ photometry. 
Based on the\nroot-mean-square (RMS) scatter of the post-transformation residuals,\nwe used the $(B_{U,2},R_{U,2})$, $(B_{U,1},R_{U,2})$,\n$(B_{U,2},R_{U,1})$ and $(B_{U,1},R_{U,1})$ transformations in order\nof preference. For stars with neither PPMX $V$ photometry nor\nUSNO-B1.0 photometry, we used eq.~\\ref{eq:2MASSVtrans}. For the\nremainder of the analysis in this paper, the $V$ magnitude is taken\nfrom PPMX (Tycho-2) for 3.1\\% of the stars in our sample, from\nUSNO-B1.0 for 93.6\\% of the stars, and is transformed from the 2MASS\nmagnitudes for 3.3\\% of the stars.\n\nFigure~\\ref{fig:HKJH} shows the $J-H$ vs. $H-K_{S}$ color-color\ndiagram for the selected sample. We also show the expected relations\nfor dwarf stars and for giants. The relation for dwarfs is taken from\na combination of the \\citet{Baraffe.98} 1.0 Gyr isochrone for solar\nmetallicity stars with $0.15~M_{\\odot} \\leq M \\leq 0.7~M_{\\odot}$ and\nthe \\citet{Chabrier.00} models for objects with $M \\leq\n0.075~M_{\\odot}$. The $JHK$ magnitudes for these isochrones were\ntransformed from the CIT system \\citep{Elias.82,Elias.83} to the 2MASS\nsystem using the transformations determined by\n\\citet{Carpenter.01}. The relation for giant stars with $\\log g < 2.0$\nis taken from the 1.0 Gyr, solar metallicity Padova isochrone\n\\citep{Marigo.08, Bonatto.04}\\footnote{The isochrone was obtained from\n the CMD 2.1 web interface\n http:\/\/stev.oapd.inaf.it\/cgi-bin\/cmd\\_2.1}. While the majority of\nstars lie in the expected dwarf range, a significant number of stars\nfall along the giant branch. Some of these stars may be rare carbon\ndwarfs, but the majority are most likely giants with inaccurate proper\nmotion measurements in the PPMX catalog. Of the 2445 selected stars\nwith $J - H > 0.8$ that have HATNet light curves, 87\\% have undetected\nproper motions or proper motions less than 10 mas\/yr in the USNO-B1.0\ncatalog, this is compared to 28\\% of the sample with $J - H < 0.8$. A\nvisual inspection of the POSS-I and POSS-II Digitized Sky Survey\nimages for a number of the sources with $J - H > 0.8$ and $\\mu >\n100~{\\rm mas\/yr}$ revealed none with visually detectable proper\nmotion, and in many cases the object consists of two close, comparably\nbright stars, for which misidentification of sources may be to blame\nfor the spurious proper motion detection. This includes several stars\nwhere the PPMX and USNO-B1.0 proper motion values are comparable. We\ntherefore apply an additional cut in the $J - H$ vs $H-K_{S}$\ncolor-color diagram as shown in figure~\\ref{fig:HKJH} to reduce the\nsample to \\ensuremath{28,785}{} stars.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f1_small.eps}\n\\caption{$J-H$ vs. $H-K_{S}$ color-color diagram for\n \\ensuremath{32,831}{} stars that have $V-K_{S} > 3.0$ with $V$\n taken either from the PPMX catalog (Tycho-2) or extrapolated from\n the 2MASS $J$, $H$ and $K_{S}$ magnitudes using\n eq.~\\ref{eq:2MASSVtrans}, $\\mu > 30~{\\rm mas\/yr}$ from the PPMX\n catalog, and that have a HATNet light curve containing more than\n 1000 points (gray-scale points). The solid line shows the expected\n relation for cool dwarfs \\citep{Baraffe.98,Chabrier.00}, while the\n dot-dashed line shows the expected relation for giants\n \\citep{Marigo.08, Bonatto.04}. Stars outside the area enclosed by\n the dotted line are rejected. 
For display purposes we have added\n slight Gaussian noise to the observed colors in the plot.}\n\\label{fig:HKJH}\n\\end{figure}\n\nFigure~\\ref{fig:rpm} shows a $V-J$ vs. $H_{J}$ reduced proper-motion\n\\citep[RPM;][]{Luyten.22} diagram for the sample. Here the RPM, $H_{J}$, is calculated as\n\\begin{equation}\nH_{J} = J + 5\\log_{10}(\\mu\/1000)\n\\end{equation} \nand gives a rough measure of the absolute magnitude $M_{J}$ of a\nstar. We show roughly the lines separating main sequence dwarfs from\nsub-dwarfs and giants. In figure~\\ref{fig:MJRPMcomp} we compare the\nRPM to $M_{J}$ for 239 stars in the sample which have a Hipparcos\nparallax \\citep{Perryman.97}. To remove additional giants from the\nsample we reject \\ensuremath{1225}{} stars with\n$H_{J} < 3.0$, leaving our final sample of \\ensuremath{27,560}{} stars.\n\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f2_small.eps}\n\\caption{$V-J$ vs. $H_{J}$ RPM diagram for stars with $\\mu > 30~{\\rm\n mas\/yr}$ passing $V-K_{S}$ and $JHK$ color cuts. Note that in this\n plot we use $V$ magnitudes that are transformed from the USNO-B1.0\n photographic magnitudes for the majority stars, and not the $V$\n magnitudes transformed from $JHK$ that were used in applying the\n initial $V-K_{S}$ cut. The lines separate main sequence dwarfs from\n sub-dwarfs and giants. We reject the\n \\ensuremath{1225}{} stars with $H_{J} < 3.0$. The\n lines separating main sequence dwarfs from sub-dwarfs are taken from\n \\citet{Yong.03}.}\n\\label{fig:rpm}\n\\end{figure}\n\n\\ifthenelse{\\boolean{emulateapj}}{\\begin{deluxetable*}{ccrrrrrrr}}{\\begin{deluxetable}{ccrrrrrrr}}\n\\tabletypesize{\\scriptsize}\n\\tablewidth{0pc}\n\\tablecaption{Coefficients for transformations from USNO-B1.0 $B_U$ and $R_U$ magnitudes to $V$ (eq.~\\ref{eq:BURUVtrans}).}\n\\tablehead{\n\\colhead{$B_{U}$} &\n\\colhead{$R_{U}$} &\n\\colhead{$a$} &\n\\colhead{$b$} &\n\\colhead{$c$} &\n\\colhead{$d$} &\n\\colhead{$e$} &\n\\colhead{$f$} &\n\\colhead{RMS [mag]}\n}\n\\startdata\n1 & 1 & $-0.76 \\pm 0.19$ & $-0.02 \\pm 0.01$ & $1.71 \\pm 0.15$ & $-0.14 \\pm 0.01$ & $0.15 \\pm 0.02$ & $1.63 \\pm 0.52$ & 0.27 \\\\\n1 & 2 & $0.72 \\pm 0.08$ & $-0.061 \\pm 0.002$ & $0.38 \\pm 0.06$ & $-0.057 \\pm 0.006$ & $0.114 \\pm 0.007$ & $-0.89 \\pm 0.26$ & 0.19 \\\\\n2 & 1 & $0.85 \\pm 0.09$ & $-0.045 \\pm 0.006$ & $0.44 \\pm 0.08$ & $-0.051 \\pm 0.003$ & $0.079 \\pm 0.008$ & $-1.52 \\pm 0.24$ & 0.22 \\\\\n2 & 2 & $0.73 \\pm 0.14$ & $-0.164 \\pm 0.008$ & $0.46 \\pm 0.13$ & $-0.20 \\pm 0.01$ & $0.35 \\pm 0.02$ & $-0.75 \\pm 0.26$ & 0.19 \\\\\n\\enddata\n\\label{tab:fitparams}\n\\ifthenelse{\\boolean{emulateapj}}{\\end{deluxetable*}}{\\end{deluxetable}}\n\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f3.eps}\n\\caption{The absolute magnitude $M_{J}$ vs. the RPM for 239 stars in\n the sample which have a Hipparcos parallax. This confirms that the\n RPM provides a rough estimate of the absolute magnitude for this\n sample of stars. We reject stars with $H_{J} < 3.0$, as these appear to be\n predominately giants.}\n\\label{fig:MJRPMcomp}\n\\end{figure}\n\n\\section{Selection of Variable Stars}\\label{sec:selection}\n\nWe use a number of techniques to search for light curves that show\nsignificant variability. 
The techniques include the phase-binning and\nharmonic-fitting Analysis of Variance periodograms\n\\citep[AoV\/AoVHarm;][]{SchwarzenbergCzerny.89,SchwarzenbergCzerny.96},\nthe Discrete Auto-Correlation Function \\citep[DACF;][]{Edelson.88},\nand the Box-Least-Squares \\citep[BLS;][]{Kovacs.02} algorithms as\nimplemented in the VARTOOLS program\\footnote{The VARTOOLS program is\n available at http:\/\/www.cfa.harvard.edu\/\\~{}jhartman\/vartools\/}\n\\citep{Hartman.08}. We also conduct a search for flare-like events in\nthe light curves. We apply the algorithms to both the EPD and TFA\nlight curves of sources, when available. For the flare-search we use\nonly the EPD light curves because the $\\sigma$-clipping applied to the\nTFA light curves may remove real flares.\n\nBecause the light curves contain non-Gaussian, temporally-correlated\nnoise, formal estimates for the variability detection significance are\nunreliable. We therefore have conducted Monte Carlo simulations of\nlight curves with realistic noise properties to inform our choice of\nselection thresholds for several of the algorithms mentioned above,\nthese are described in appendix~\\ref{sec:mcsimulations}. In the\nfollowing subsections we discuss the use of each of the variability\nselection algorithms in turn. We finish with a comparison of the\ndifferent methods. The resulting catalog of variable stars is\npresented in appendix~\\ref{sec:cat}.\n\n\\subsection{AoV}\nThe AoV periodogram is a method for detecting continuous periodic\nvariations suggested by \\citet{SchwarzenbergCzerny.89} which typically\nyields higher S\/N detections than other periodograms. The original\nmethod suggested by \\citet{SchwarzenbergCzerny.89} uses phase-binning\nfor the model signal, so it is most comparable to the popular Phase\nDispersion Minimization technique\n\\citep[PDM;][]{Stellingwerf.78}. Following this\n\\citet{SchwarzenbergCzerny.96} introduced an efficient method for\nfitting a Fourier series to a non-uniformly sampled light curve using\na basis of orthogonal polynomials. When combined with an ANalysis Of\nVAriance (ANOVA) statistic, the result is the AoVHarm periodogram. 
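\nTo make the phase-binning statistic concrete, the following minimal\nPython sketch evaluates an analysis-of-variance figure of merit for a\nsingle trial period by comparing the between-bin and within-bin\nvariances of the phase-folded light curve. It is an illustration only\n(the function name \texttt{aov\_statistic} is ours); the periodograms\nused in this work are the VARTOOLS implementations cited above.\n\begin{verbatim}\nimport numpy as np\n\ndef aov_statistic(t, m, period, nbins=8):\n    # Fold the light curve and assign each point to a phase bin.\n    phase = np.mod(t, period) / period\n    b = np.minimum((phase * nbins).astype(int), nbins - 1)\n    gmean = m.mean()\n    s1 = s2 = 0.0   # between-bin and within-bin scatter\n    r = 0           # number of occupied bins\n    for k in range(nbins):\n        mk = m[b == k]\n        if mk.size == 0:\n            continue\n        r += 1\n        s1 += mk.size * (mk.mean() - gmean) ** 2\n        s2 += ((mk - mk.mean()) ** 2).sum()\n    # F-type statistic: large values indicate a good trial period.\n    return (s1 / (r - 1)) / (s2 / (m.size - r))\n\end{verbatim}\nScanning such a statistic over a grid of trial frequencies gives a\ncrude phase-binning periodogram analogous to (though far slower than)\nthe AoV search described below.\n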
We\napplied both methods to our light curves, and discuss each in turn.\n\n\\ifthenelse{\\boolean{emulateapj}}{\\begin{deluxetable*}{ccrrrrrrr}}{\\begin{deluxetable}{ccrrrrrrr}}\n\\tabletypesize{\\scriptsize}\n\\tablewidth{0pc}\n\\tablecaption{Selection Threshold Parameters for AoV and AoVHarm}\n\\tablehead{\n& & & \\multicolumn{3}{c}{AoV} & \\multicolumn{3}{c}{AoVHarm} \\\\\n\\colhead{Chip} &\n\\colhead{LC Type} &\n\\colhead{Mag.} &\n\\colhead{$S\/N_{0}$} &\n\\colhead{$P_{0}$ [days]} &\n\\colhead{$\\alpha$} &\n\\colhead{$S\/N_{0}$} &\n\\colhead{$P_{0}$ [days]} &\n\\colhead{$\\alpha$}\n}\n\\startdata\nEPD & 2K & 6.5 & 30.0 & 5 & 0.40 & 40.0 & 4 & 0.41\\\\\nEPD & 2K & 7.5 & 30.0 & 5 & 0.40 & 40.0 & 4 & 0.41\\\\\nEPD & 2K & 8.5 & 30.0 & 5 & 0.40 & 40.0 & 4 & 0.41\\\\\nEPD & 2K & 9.5 & 30.0 & 5 & 0.33 & 30.0 & 4 & 0.37\\\\\nEPD & 2K & 10.5 & 25.0 & 5 & 0.23 & 30.0 & 4 & 0.37\\\\\nEPD & 2K & 11.5 & 20.0 & 5 & 0.23 & 25.0 & 4 & 0.43\\\\\nEPD & 2K & 12.5 & 20.0 & 5 & 0.23 & 25.0 & 4 & 0.36\\\\\nEPD & 2K & 13.5 & 20.0 & 5 & 0.14 & 25.0 & 4 & 0.22\\\\\nTFA & 2K & 6.5 & 20.0 & 5 & 0.14 & 30.0 & 4 & 0.16\\\\\nTFA & 2K & 7.5 & 20.0 & 5 & 0.14 & 30.0 & 4 & 0.16\\\\\nTFA & 2K & 8.5 & 20.0 & 5 & 0.14 & 30.0 & 4 & 0.16\\\\\nTFA & 2K & 9.5 & 20.0 & 5 & 0.00 & 20.0 & 4 & 0.22\\\\\nTFA & 2K & 10.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.13\\\\\nTFA & 2K & 11.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\nTFA & 2K & 12.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\nTFA & 2K & 13.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\nEPD & 4K & 7.5 & 30.0 & 5 & 0.33 & 30.0 & 4 & 0.50\\\\\nEPD & 4K & 8.5 & 30.0 & 5 & 0.33 & 30.0 & 4 & 0.50\\\\\nEPD & 4K & 9.5 & 30.0 & 5 & 0.33 & 30.0 & 4 & 0.50\\\\\nEPD & 4K & 10.5 & 30.0 & 5 & 0.33 & 30.0 & 4 & 0.50\\\\\nEPD & 4K & 11.5 & 25.0 & 5 & 0.23 & 25.0 & 4 & 0.43\\\\\nEPD & 4K & 12.5 & 20.0 & 5 & 0.23 & 25.0 & 4 & 0.36\\\\\nEPD & 4K & 12.5 & 20.0 & 5 & 0.23 & 20.0 & 4 & 0.43\\\\\nEPD & 4K & 13.5 & 20.0 & 5 & 0.14 & 20.0 & 4 & 0.28\\\\\nEPD & 4K & 14.5 & 20.0 & 5 & 0.00 & 20.0 & 4 & 0.22\\\\\nTFA & 4K & 7.5 & 20.0 & 5 & 0.00 & 25.0 & 4 & 0.15\\\\\nTFA & 4K & 8.5 & 20.0 & 5 & 0.00 & 25.0 & 4 & 0.15\\\\\nTFA & 4K & 9.5 & 20.0 & 5 & 0.00 & 25.0 & 4 & 0.15\\\\\nTFA & 4K & 10.5 & 20.0 & 5 & 0.00 & 25.0 & 4 & 0.06\\\\\nTFA & 4K & 11.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\nTFA & 4K & 12.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\nTFA & 4K & 12.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\nTFA & 4K & 13.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\nTFA & 4K & 14.5 & 15.0 & 5 & 0.00 & 20.0 & 4 & 0.00\\\\\n\\enddata\n\\label{tab:aovcutoff}\n\\ifthenelse{\\boolean{emulateapj}}{\\end{deluxetable*}}{\\end{deluxetable}}\n\n\\subsubsection{Phase Binning AoV - Search for General Periodic Variability}\n\nWe run the AoV algorithm with phase-binning on the full sample of\nstars. We first apply a $5\\sigma$ iterative clipping to the light\ncurve before searching for periods between $0.1$ and $100~{\\rm\n days}$. We use 8 phase bins, and generate the periodogram at a\nfrequency resolution of $0.1 \/ T$ where $T$ is the total time-span\ncovered by a given light curve. We then determine the peak at 10 times\nhigher resolution. As our figure of merit we use the signal-to-noise\nratio (S\/N), with a $5\\sigma$ iterative clipping applied to the\nperiodogram in calculating the noise (the RMS of the\nperiodogram). Figure~\\ref{fig:AOV_SNvsP} shows the AoV S\/N as a\nfunction of the peak period for various light curve subsamples. For\ncomparison we also show the Monte Carlo simulation results for each\nsubset. 
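\nThe S\/N figure of merit used here can be sketched as follows. This is\na minimal illustration operating on an already-computed periodogram\n(the helper name \texttt{periodogram\_snr} is ours, and the exact\ndefinition used in the survey follows the VARTOOLS implementation):\n\begin{verbatim}\nimport numpy as np\n\ndef periodogram_snr(power, clip=5.0, maxiter=10):\n    # Iteratively sigma-clip the periodogram when estimating its\n    # mean level and RMS, then return the S/N of the highest peak.\n    good = np.ones(power.size, dtype=bool)\n    for _ in range(maxiter):\n        mean, rms = power[good].mean(), power[good].std()\n        keep = np.abs(power - mean) < clip * rms\n        if keep.sum() == good.sum():\n            break\n        good = keep\n    mean, rms = power[good].mean(), power[good].std()\n    return (power.max() - mean) / rms\n\end{verbatim}\nWhether the mean level is subtracted before forming the ratio is an\nimplementation detail; it is included above for definiteness.\n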
We adopt a separate selection threshold on S\/N for each\nsubsample. The thresholds have the form\n\\begin{equation}\nS\/N_{\\rm min} = \\left\\{ \\begin{array}{ll}\nS\/N_{0} & \\mbox{if $P < P_{0}$} \\\\\nS\/N_{0}\\left( P\/P_{0} \\right) ^{\\alpha} & \\mbox{if $P \\geq P_{0}$}\n\\end{array}\n\\right..\n\\end{equation}\nThe adopted values of $S\/N_{0}$, $P_{0}$ and $\\alpha$ are listed for\neach subsample in Table~\\ref{tab:aovcutoff}. In addition to this\nselection we also reject detections with periods near 1 sidereal day,\none of its harmonics, or other periods which appear as spikes in the\nhistogram of detected periods (the latter includes periods between\n5.71 and 5.80 days). For composite light curves that contain both 2K\nand 4K observations we take\n\\begin{equation}\nS\/N_{\\rm min} = f_{2K}S\/N_{\\rm min,2K} + f_{4K}S\/N_{\\rm min,4K}\n\\label{eqn:aovsnmin}\n\\end{equation}\nwhere $f_{2K}$ and $f_{4K}$ are the fraction of points in the light\ncurve that come from 2K and 4K observations, and $S\/N_{\\rm min,2K}$\nand $S\/N_{\\rm min,4K}$ are the 2K and 4K thresholds at the period of\nthe composite light curve.\n\nAs summarized in table~\\ref{tab:selectionstats}, our selection\nthreshold passes a total of \\ensuremath{1320}{} EPD light curves, and\n\\ensuremath{1729}{} TFA light curves. These are inspected by eye to\nreject obvious false alarms and to identify EBs. There are\n\\ensuremath{753}{} EPD light curves that we judge to show clear,\ncontinuous, periodic variability, \\ensuremath{47}{} that show eclipses,\n\\ensuremath{419}{} that we consider to be questionable (these are\nincluded in the catalog, but flagged as questionable), and\n\\ensuremath{101}{} that we reject. For the TFA light curves the\nrespective numbers are \\ensuremath{1210}{}, \\ensuremath{64}{},\n\\ensuremath{400}{}, and \\ensuremath{55}{}. Note that the distinction\nbetween ``clear'' variability and ``questionable'' cases is fairly\nsubjective. Generally we require the variations to be obvious to the\neye for periods $\\ga$ 30 days, for shorter periods we consider the\nselection to be questionable if it appears that the variability\nselection may be due to enhanced scatter on a few nights.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.7}}\n\\plotone{f4.eps}\n\\caption{Period vs. S\/N from AoV for 4 representative light curve\n subsamples. On the left column we plot the observed values showing\n the light curves that pass the selection (dark filled points) and\n the light curves that do not pass the selection (grey filled points)\n separately, on the right column we plot the results from the Monte\n Carlo simulation for the corresponding subsample. The lines show the\n adopted S\/N cut-off as a function of period. 
Note that in addition\n to the cut-off shown with the line, we also reject light curves for\n which the peak period is close to one sidereal day or a harmonic of\n one sidereal day.}\n\\label{fig:AOV_SNvsP}\n\\end{figure}\n\n\\ifthenelse{\\boolean{emulateapj}}{\\begin{deluxetable*}{llrrrrr}}{\\begin{deluxetable}{llrrrrr}}\n\\tabletypesize{\\scriptsize}\n\\tablewidth{0pc}\n\\tablecaption{Summary of variable star selection}\n\\tablehead{\n\\multicolumn{1}{c}{Variability} &\n\\multicolumn{1}{c}{LC Reduction} &\n\\multicolumn{1}{c}{Automatically} &\n\\multicolumn{1}{c}{Non-EB} &\n&\n& \\\\\n\\colhead{Selection Type} &\n\\colhead{Type} &\n\\colhead{Selected} &\n\\colhead{Var.} &\n\\colhead{EB} &\n\\colhead{Quest.} &\n\\colhead{Rej.}\n}\n\\startdata\nAoV & EPD & \\ensuremath{1320}{} & \\ensuremath{753}{} & \\ensuremath{47}{} & \\ensuremath{419}{} & \\ensuremath{101}{} \\\\\nAoV & TFA & \\ensuremath{1729}{} & \\ensuremath{1210}{} & \\ensuremath{64}{} & \\ensuremath{400}{} & \\ensuremath{55}{} \\\\\nAoV & Both\\tablenotemark{a} & \\ensuremath{781}{} & \\ensuremath{576}{} & \\ensuremath{41}{} & \\ensuremath{53}{} & \\ensuremath{5}{} \\\\\nAoV & Either\\tablenotemark{b} & \\ensuremath{2268}{} & \\ensuremath{1387}{} & \\ensuremath{70}{} & \\ensuremath{766}{} & \\ensuremath{151}{} \\\\\nAoV only\\tablenotemark{c} & Either\\tablenotemark{b} & \\ensuremath{450}{} & \\ensuremath{120}{} & \\ensuremath{4}{} & \\ensuremath{317}{} & \\ensuremath{79}{} \\\\\nAovHarm & EPD & \\ensuremath{1337}{} & \\ensuremath{1082}{} & \\ensuremath{34}{} & \\ensuremath{185}{} & \\ensuremath{36}{} \\\\\nAovHarm & TFA & \\ensuremath{1717}{} & \\ensuremath{1443}{} & \\ensuremath{43}{} & \\ensuremath{217}{} & \\ensuremath{14}{} \\\\\nAovHarm & Both\\tablenotemark{a} & \\ensuremath{936}{} & \\ensuremath{806}{} & \\ensuremath{24}{} & \\ensuremath{27}{} & \\ensuremath{5}{} \\\\\nAovHarm & Either\\tablenotemark{b} & \\ensuremath{2118}{} & \\ensuremath{1719}{} & \\ensuremath{53}{} & \\ensuremath{375}{} & \\ensuremath{45}{} \\\\\nAovHarm only\\tablenotemark{c} & Either\\tablenotemark{b} & \\ensuremath{450}{} & \\ensuremath{120}{} & \\ensuremath{4}{} & \\ensuremath{317}{} & \\ensuremath{79}{} \\\\\nBLS & EPD & \\ensuremath{463}{} & \\ensuremath{79}{} & \\ensuremath{60}{} & \\ensuremath{39}{} & \\ensuremath{281}{} \\\\\nBLS & TFA & \\ensuremath{444}{} & \\ensuremath{160}{} & \\ensuremath{81}{} & \\ensuremath{39}{} & \\ensuremath{160}{} \\\\\nBLS & Both\\tablenotemark{a} & \\ensuremath{155}{} & \\ensuremath{62}{} & \\ensuremath{52}{} & \\ensuremath{4}{} & \\ensuremath{13}{} \\\\\nBLS & Either\\tablenotemark{b} & \\ensuremath{752}{} & \\ensuremath{177}{} & \\ensuremath{89}{} & \\ensuremath{74}{} & \\ensuremath{428}{} \\\\\nBLS only\\tablenotemark{c} & Either\\tablenotemark{b} & \\ensuremath{439}{} & \\ensuremath{1}{}\\tablenotemark{d} & \\ensuremath{21}{} & \\ensuremath{64}{}\\tablenotemark{d} & \\ensuremath{428}{} \\\\\nDACF & EPD & \\ensuremath{1491}{} & \\ensuremath{620}{} & \\ensuremath{0}{} & \\ensuremath{534}{} & \\ensuremath{243}{} \\\\\nDACF & TFA & \\ensuremath{1190}{} & \\ensuremath{507}{} & \\ensuremath{0}{} & \\ensuremath{318}{} & \\ensuremath{95}{} \\\\\nDACF & Both\\tablenotemark{a} & \\ensuremath{353}{} & \\ensuremath{203}{} & \\ensuremath{0}{} & \\ensuremath{27}{} & \\ensuremath{21}{} \\\\\nDACF & Either\\tablenotemark{b} & \\ensuremath{2328}{} & \\ensuremath{924}{} & \\ensuremath{0}{} & \\ensuremath{825}{} & \\ensuremath{317}{} \\\\\nDACF only\\tablenotemark{c} & Either\\tablenotemark{b} & \\ensuremath{1344}{} & 
\\ensuremath{373}{} & \\ensuremath{0}{} & \\ensuremath{582}{} & \\ensuremath{528}{} \\\\\n\\hline\nTotals\\tablenotemark{e} & & \\ensuremath{4579}{} & \\ensuremath{2239}{} & \\ensuremath{95}{} & \\ensuremath{1176}{} & \\ensuremath{1105}{} \\\\\n\\enddata\n\\label{tab:selectionstats}\n\\tablenotetext{a}{Stars that are selected by this method for both the EPD and TFA light curves. Note that stars that are automatically selected by this method for both EPD and TFA light curves but for which the by eye classification differs between the two light curve types will be included in the number of automatically selected variables, but will not be included in any of the subsequent columns for this row.}\n\\tablenotetext{b}{Stars that are selected by this method for either the EPD or the TFA light curves.}\n\\tablenotetext{c}{Stars that are selected exclusively by this method for either the EPD or the TFA light curves. For the Non-EB Var. and EB types stars are included in this row if they were only classified as that type during the visual inspection for this method. The questionable column lists the total number of stars that were flagged as questionable by this method and were either rejected during the visual inspection, or not selected, by all other methods. The rejected column lists the total number of stars that were rejected during the visual inspection for this method and did not pass the automatic selection for any of the other methods.}\n\\tablenotetext{d}{These stars are not included in the catalog of variables.}\n\\tablenotetext{e}{For the EB and non-EB variable classes we list the total number of stars that were classified as this type during the visual inspection for at least one method. Note that \\ensuremath{36}{} stars are flagged as EBs during the visual inspection for one method and as non-EB variables during the visual inspection for another method, so a total of \\ensuremath{2298}{} stars are flagged as either an EB or as a robust non-EB variable for at least one method. This total does not include flare stars selected in \\S~\\ref{sec:flares}, unless they are also identified as a variable by AoV, AoVHarm, BLS or DACF. Including flare stars, the total number of stars with a robust variability detection is \\ensuremath{2321}{}. Of these, \\ensuremath{1928}{} are not flagged as a probable blend or as having a problematic amplitude. For the questionable detections we list the total number of stars that are classified as questionable for at least one method and are not classified as a robust detection for any of the methods. For the rejections we list the total number of stars that were rejected during the visual inspection for all methods by which they were automatically selected.}\n\\ifthenelse{\\boolean{emulateapj}}{\\end{deluxetable*}}{\\end{deluxetable}}\n\n\\subsubsection{Harmonic AoV - Search for Sinusoidal Periodic Variability}\n\nThe AoVHarm periodogram is generated for each star in a similar manner\nto the AoV periodogram. We run the algorithm using a sinusoid model\nwith no higher harmonics (it is thus comparable to DFT methods, or to\nthe popular Lomb-Scargle technique; \\citealp{Lomb.76,Scargle.82}). As\nfor the phase-binning AoV search we use eq.~\\ref{eqn:aovsnmin} for the\nselection threshold, with parameters given for each subsample in\ntable~\\ref{tab:aovcutoff}. 
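\nFor reference, the period-dependent cut and the 2K\/4K weighting of\neq.~\ref{eqn:aovsnmin} amount to the following small sketch (the\nfunction names are ours; the $S\/N_{0}$, $P_{0}$ and $\alpha$ values\nare those listed in Table~\ref{tab:aovcutoff}):\n\begin{verbatim}\ndef snmin(period, sn0, p0, alpha):\n    # Period-dependent S/N threshold for a single subsample.\n    if period < p0:\n        return sn0\n    return sn0 * (period / p0) ** alpha\n\ndef snmin_composite(period, f2k, f4k, pars2k, pars4k):\n    # Composite 2K+4K light curves: weight the two thresholds by\n    # the fraction of points coming from each camera type.\n    return f2k * snmin(period, *pars2k) + f4k * snmin(period, *pars4k)\n\n# e.g. an EPD 2K light curve in the 11.5 mag bin:\n# snmin(50.0, 20.0, 5.0, 0.23) -> ~34, compared to 20 for P < 5 days\n\end{verbatim}\n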
Figure~\\ref{fig:AOVHARM_SNvsP} shows the\nAoVHarm S\/N as a function of peak period for several light curve\nsubsamples.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.7}}\n\\plotone{f5.eps}\n\\caption{Same as Figure~\\ref{fig:AOV_SNvsP}, here we show the results for the AoVHarm period search.}\n\\label{fig:AOVHARM_SNvsP}\n\\end{figure}\n\nOur selection threshold passes a total of \\ensuremath{1337}{} EPD\nlight curves and \\ensuremath{1717}{} TFA light curves. Inspecting\nthese by eye we find \\ensuremath{1082}{} EPD light curves that show\nclear, continuous, periodic variability, \\ensuremath{34}{} that show\neclipses, \\ensuremath{185}{} that we consider to be questionable,\nand \\ensuremath{36}{} that we reject. For the TFA light curves the\nnumbers are \\ensuremath{1443}{}, \\ensuremath{43}{},\n\\ensuremath{217}{}, and \\ensuremath{14}{} respectively (see also\ntable~\\ref{tab:selectionstats}).\n\n\\subsection{DACF - Search for Quasi-Periodic Variability}\nThe DACF is a technique that has frequently been used to determine the\nvariation time-scale of quasi-periodic signals. In particular this\ntechnique is commonly applied in measuring the rotation period of a\nstar from a light curve that exhibits variations due to the rotational\nmodulation of starspots which may be varying in size, intensity,\nposition, or number over time \\citep[e.g.][]{Aigrain.08}. It is not\nobvious, however, that the DACF method provides better period\ndeterminations than Fourier methods such as AoVHarm even in\nquasi-periodic cases. Since even in cases where the coherence\ntime-scale is short compared to the period, the Fourier power spectrum\nshould still have a predominant peak near the period of the star that\ncan be determined in a straightforward fashion, whereas the automatic\nidentification of the predominant variation time-scale from an\nautocorrelation function is non-trivial, as seen below. It is,\ntherefore, with some skepticism that we attempt to use the DACF method\nto identify periods.\n\nFor each light curve we calculate the DACF at time lags ranging from 0\nto 100 days with a step-size of 1 day. The binning time-scale of 1 day\nmeans that we will only be sensitive to variation time-scales $\\ga 2$\ndays. In practice, our peak finding algorithm limits us to periods\n$\\ga 10$ days. A light curve with a significant periodic or\nquasi-periodic signal with a timescale $T$ will have a DACF that peaks\nat $T$, as well as at $2T$, $3T$, $\\ldots$ depending on the coherence\nof the signal.\n\nTo automate the selection of peaks in the DACF we use the following\nroutine (below, $y_{i}$ is the DACF value for time-lag $t_{i}$):\n\\begin{enumerate}\n\\item Identify connected sets of points $(t_{i},y_{i})$ with $y_{i} >\n 0$.\n\\item Extend the left and right boundaries of each set until a local\n minimum is found in both directions.\n\\item Reject sets with 3 or fewer points, with 0 for the left\n boundary, or with the maximum time-lag computed for the right\n boundary. 
This leaves $N_{\\rm peak}$ sets of points (peaks) to\n consider.\n\\item Fit a quadratic function $y(t)$ to each of the $N_{\\rm peak}$\n sets.\n\\item Letting $\\chi^{2}_{N - 3}$ and $\\chi^{2}_{N-1}$ be the\n $\\chi^{2}$ values from fitting a quadratic function and a constant\n function respectively to the set, we perform an F-test on the\n statistic\n\\begin{equation}\nf=\\frac{(\\chi^{2}_{N-1} - \\chi^{2}_{N-3})\/2}{\\chi^{2}_{N-3}\/(N-3)}\n\\label{eqn:ftest}\n\\end{equation}\nto determine the significance of the quadratic fit relative to the\nconstant function fit \\citep[see][]{Lupton.93}. For each of the\n$N_{\\rm peak}$ sets we record the time-lag of the peak and its error\nfrom the quadratic fit ($t_{p}$ and $\\sigma t_{p}$), the peak DACF\nvalue and its error ($y_{p}$ and $\\sigma y_{p}$), and the false alarm\nprobability of the fit from the F-test ($Pr_{p}$). Note that this is\nnot the false alarm probability of finding any connected set of points\nin the DACF that is well-fit by a quadratic in a random signal. In\ngeneral, that false alarm probability will be higher.\n\\item Starting from the peak with the shortest time lag $t_{p}$,\n identify the first peak with $Pr_{p} < Pr_{\\rm lim1}$ and $y_{p} >\n y_{\\rm lim}$ or with $Pr_{p} < Pr_{\\rm lim2}$, choose this peak as\n the period for the star. If there is no such peak in the DACF, then\n the star is not selected as a variable by this method. We adopt\n $Pr_{\\rm lim1} = 10^{-4}$; $Pr_{\\rm lim2}$ and $y_{\\rm lim}$ are\n determined independently for each subsample from the\n simulations. For a given subsample we take $Pr_{\\rm lim2}$ and\n $y_{\\rm lim}$ to be the fifth smallest and largest values\n respectively from the simulations (i.e. the 99.5 percentile\n values).\\label{step:DACFlimits}\n\\end{enumerate}\n\nIn Figure~\\ref{fig:AutoCorrpeaks_ymaxvsFAP_vssim} we plot the minimum\n$Pr_{p}$ found in each DACF against the maximum $y_{p}$ found and show\nthe adopted cutoffs. We plot the results from the observed light\ncurves and the simulations for four representative\nsubsamples. Figure~\\ref{fig:ExampleDACF} shows an example of the DACF\nand selected peak for a periodic variable, and for one of the least\nsignificant detections that pass our selection. The un-phased light\ncurves for each of these cases are also presented.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.6}}\n\\plotone{f6.eps}\n\\caption{Minimum false alarm probability $Pr_{p}$ vs. maximum peak\n value $y_{p}$ found in each DACF for 4 representative light curve\n subsamples. On the left column we plot the observed values showing\n the light curves that pass the selection (dark filled points) and\n the light curves that do not pass the selection (grey filled points)\n separately, on the right column we plot the results from the Monte\n Carlo simulation for the corresponding subsample. The lines show the\n adopted $Pr_{p}$\/$y_{p}$ cut-off. Note that the plotted $Pr_{p}$ and\n $y_{p}$ values for a given DACF may not come from the same peak. In\n selecting peaks from the DACF we require that both the $Pr_{p}$ and\n the $y_{p}$ values for a given peak pass the selection. 
This plot is\n meant only to provide an approximate visualization of the\n selection.}\n\label{fig:AutoCorrpeaks_ymaxvsFAP_vssim}\n\end{figure}\n\n\ifthenelse{\boolean{emulateapj}}{\begin{figure*}[!ht]}{\begin{figure}[t]}\n\epsscale{1.0}\n\plotone{f7.eps}\n\caption{Top: Examples of the DACF for one of the highest significance\n detections of variability (left) and for one of the lowest\n significance detections (right). In each case the dark line shows\n the quadratic fit to the DACF used to determine the period of\n variation. Bottom: Un-phased light curves for the two stars. Note\n that the period for the star on the right is likely half the value\n determined by the peak identification algorithm. In this case the\n peak at $P \sim 10~{\rm days}$ did not pass the cut on $Pr_{p}$,\n while the second peak at $P \sim 20~{\rm days}$ did. The $P \sim\n 10~{\rm days}$ signal is the top peak in the AoV and AoVHarm\n periodograms for this star; however, the S\/N for both\n periodograms is below our selection threshold.}\n\label{fig:ExampleDACF}\n\ifthenelse{\boolean{emulateapj}}{\end{figure*}}{\end{figure}}\n\nWe find significant peaks in the DACF for a total of\n\ensuremath{1491}{} EPD light curves and \ensuremath{1190}{} TFA light\ncurves. We inspect these by eye to eliminate obvious false alarms\n(typically cases where a light curve shows significant scatter on a\nfew nights, often of several magnitudes or more); we also note whether\nor not the detection appears to be robust and whether or not the\nperiod determination is likely to be accurate. For the EPD light\ncurves we consider \ensuremath{465}{} of the detections to be\nrobust and with a correct period, \ensuremath{155}{} to be\nrobust but at the wrong period (typically the detected period is an\ninteger multiple of the likely true period; see for example\nfigure~\ref{fig:ExampleDACF}), \ensuremath{534}{} to be non-robust,\nand we reject \ensuremath{243}{} of the detections as clear false\nalarms. For the TFA light curves the respective numbers are\n\ensuremath{353}{}, \ensuremath{154}{},\n\ensuremath{318}{}, and \ensuremath{95}{} (see also\ntable~\ref{tab:selectionstats}). The distinction between a robust and\na non-robust detection is subjective; typically we consider a\ndetection to be robust if the variability in the phased or unphased\nlight curve is obvious to the eye and\/or the DACF shows a set of clear\nregularly spaced peaks. There are some cases where the DACF shows\nclear regularly spaced peaks; however, the scatter in the light curve\nappears to correlate with the phase. We consider many of these cases\nto be non-robust (most are light curves from 4K fields with periods\nnear the lunar cycle). We include all non-rejected detections in the\nfinal catalog together with flags indicating the reliability of the\ndetection and the reliability of the period.\n\n\subsection{BLS - Search for Eclipses}\nThe BLS algorithm, primarily used in searches for transiting planets,\ndetects periodic box-like dips in a light curve. This algorithm may be\nmore sensitive to detached binaries with sharp-featured light curves\nthan the other methods used. For our implementation of the BLS\nalgorithm we search 10,000 frequency points within a period range of\n0.1 to 20.0 days. To search for long period events where the eclipse\nduration may be only a very short fraction of the orbital period, we\nrepeat the search at a higher frequency resolution using 100,000\npoints within a period range of 1.0 to 20.0 days. 
At each trial\nfrequency we bin the phased light curve into 200 bins, and search over\nfractional eclipse durations ranging from 0.01 to 0.1 in\nphase. Figure~\\ref{fig:BLS_SNvsP} shows the $S\/N$ vs the period for\nthe EPD and TFA light curves. We select \\ensuremath{752}{} stars\nwith $S\/N > 10.0$ and with a period not close to 1 sidereal day or a\nharmonic of a sidereal day as potentially eclipsing systems. We do not\nuse the Monte Carlo simulations to set the selection thresholds for\nthis method because distinguishing between red noise and an eclipse\nsignal by eye is less ambiguous than distinguishing between red noise\nand general periodic or quasi-periodic variability. The selected light\ncurves are inspected by eye to identify eclipsing systems. A total of\n\\ensuremath{89}{} candidate EB systems are found in this manner,\n\\ensuremath{8}{} are found in the EPD light curves only,\n\\ensuremath{29}{} are found in the TFA light curves only, and\n\\ensuremath{52}{} are found in both the EPD and TFA light curves (see\nalso table~\\ref{tab:selectionstats}).\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.7}}\n\\plotone{f8_small.eps}\n\\caption{Period vs. S\/N from BLS for EPD and TFA light curves. The\n dark points show light curves that pass the $S\/N > 10.$ cut and do\n not have a period near a sidereal day or one of its harmonics. The\n grey points show light curves that do not pass this cut.}\n\\label{fig:BLS_SNvsP}\n\\end{figure}\n\n\\subsection{Search for Flares}\\label{sec:findflares}\n\nAs noted in the introduction, flaring is a common phenomenon among K\nand M dwarfs. While we were inspecting the light curves of candidate\nvariable stars we noticed a number of stars showing significant\nflares. We therefore decided to conduct a systematic search for flare\nevents in the light curves. Most optical stellar flares show a very\nsteep rise typically lasting from a few seconds to several\nminutes. \\citet{Krautter.96} notes that flares can be divided into two\nclasses based on their decay times: ``impulsive'' flares have decay\ntimes of a few minutes, to a few tens of minutes, while ``long-decay''\nflares have decay times of up to a few hours. Due to the 5-minute\nsampling of the HATNet light curves, flares of the former type will\nonly affect one or two observations in a light curve, while flares of\nthe latter type might affect tens of observations. In general it is\nvery difficult to determine whether a given outlier in a light curve\nis due to a flare or bad photometry without inspecting the images from\nwhich an observation was obtained. This is impractical to do for tens\nof thousands of light curves when each light curve may contain tens to\nhundreds of outliers. While observations that are potentially\ncorrupted are flagged, in practice the automated routines that\ngenerate these flags do not catch all cases of bad photometry. We\ntherefore do not attempt to identify individual ``impulsive'' flares\nin the light curves, and instead conduct a statistical study of the\nfrequency of these flares (\\S~\\ref{sec:flares}). Long-decay flares, on\nthe other hand, may be searched for in an automated fashion if a\nfunctional form for the decay is assumed \\citep[this is similar to\n searching for microlensing events, see for example][]{Nataf.09}. 
To\nsearch for long-decay flares we used the following algorithm:\n\\begin{enumerate}\n\\item Compute $m_{0}$, the median magnitude of the light curve, and\n $\\delta_{0}$ the median deviation from the median.\n\\item Identify all sets of consecutive points with $m - m_{0} <\n -3\\delta_{0}$. Let $t_{0}$ be the time of the brightest observation\n in a given set, and let $N$ be the number of consecutive points\n following and including $t_{0}$ with $m - m_{0} < -2\\delta_{0}$. We\n proceed with the set if $N > 3$.\n\\item\\label{step:fitflare} Use the Levenberg-Marquardt algorithm\n \\citep{Marquardt.63} to fit to the $N$ points a function of the\n form:\n\\begin{equation}\nm(t) = -2.5\\log_{10}\\left( A e^{-(t-t_{0})\/\\tau} + 1\\right) + m_{1}\n\\label{eqn:flare}\n\\end{equation}\nwhere $A$, $\\tau$ and $m_{1}$ are the free parameters. Here $A$ is the\npeak intensity of the flare relative to the non-flaring intensity,\n$\\tau$ is the decay timescale, and $m_{1}$ is the magnitude of the\nstar before the flare. For the initial values we take $m_{1} = m_{0}$,\n$\\tau = 0.02~{\\rm days}$, and $A = 10^{-0.4(m_{p} - m_{0})} - 1$,\nwhere $m_{p}$ is the magnitude at the peak.\n\\item\\label{step:Ftestflare} Perform an F-test on the statistic given\n in eq.~\\ref{eqn:ftest} where $\\chi^{2}_{N-3}$ in this case is the\n $\\chi^{2}$ value from fitting eq.~\\ref{eqn:flare}. If the false\n alarm probability is greater than 1\\%, reject the candidate. If not,\n increase the number of points by one and repeat\n step~\\ref{step:fitflare}. Continue as long as the false alarm\n probability decreases.\n\\item We reject any flare candidate for which there are at least two\n other candidate flares from light curves in the same field that\n occur within 0.1 days of the flare candidate.\n\\item Let the number of points with $t< t_{0}$ and $t_{0} - t <\n 0.05~{\\rm days}$ be $N_{\\rm before}$ and the number of points with\n $t > t_{1}$ and $t - t_{1} < 0.05~{\\rm days}$ be $N_{\\rm\n after}$. Here $t_{1}$ is the time of the last observation included\n in the fit. Reject the candidate flare if $N_{\\rm before} < 2$ or\n $N_{\\rm after} < 2$. Also reject the candidate flare if $A < 0.$, $A\n > 10.0$, $\\tau < 0.001~{\\rm days}$, $\\tau > 0.5~{\\rm days}$, $A <\n \\sigma_{A}$, $\\tau < \\sigma_{\\tau}$, or if the false alarm\n probability from step~\\ref{step:Ftestflare} is greater than\n 0.1\\%. Here $\\sigma_{A}$ and $\\sigma_{\\tau}$ are the formal\n uncertainties on $A$ and $\\tau$ respectively. The selection on $A$\n is used to reject numerous light curves with significant outliers\n which appear to be due to artifacts in the data rather than flares.\n\\end{enumerate}\n\nWe apply the above algorithm to the non-composite EPD light curves\n(i.e.~light curves from each field are processed independently for\nstars with light curves from multiple fields). The algorithm is\napplied both on the raw EPD light curves, and on EPD light curves that\nare high-pass filtered by subtracting from each point the median of\nall points that are within 0.1 day of that point. There are a total of\n\\ensuremath{23,589}{} stars with EPD light curves that are analyzed, we\nexclude from the analysis \\ensuremath{4}{} stars from the full\nsample for which only $\\sigma$-clipped TFA light curves are\navailable. A total of \\ensuremath{320}{} candidate flare\nevents from \\ensuremath{281}{} stars are selected. These are\ninspected by eye to yield the final sample of \\ensuremath{64}{} flare\nevents from \\ensuremath{60}{} stars. 
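\nTo illustrate step~\ref{step:fitflare} of the algorithm above, a\nminimal Levenberg-Marquardt fit of eq.~\ref{eqn:flare}, using the\ninitial guesses quoted in the text, might look like the following\nsketch (it uses SciPy rather than the actual pipeline code, and the\nfunction names are ours):\n\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef flare_mag(t, A, tau, m1, t0=0.0):\n    # m(t) = -2.5 log10( A exp(-(t - t0)/tau) + 1 ) + m1\n    return -2.5 * np.log10(A * np.exp(-(t - t0) / tau) + 1.0) + m1\n\ndef fit_flare_decay(t, m, t0, m0):\n    # t0: time of the brightest point; m0: median magnitude.\n    mp = m.min()\n    p0 = [10.0 ** (-0.4 * (mp - m0)) - 1.0,  # A\n          0.02,                              # tau [days]\n          m0]                                # m1\n    model = lambda tt, A, tau, m1: flare_mag(tt, A, tau, m1, t0=t0)\n    return curve_fit(model, t, m, p0=p0, method='lm')\n\end{verbatim}\nThe F-test of step~\ref{step:Ftestflare} then compares the $\chi^{2}$\nof this fit against that of a constant-magnitude fit.\n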
Figure~\\ref{fig:exampleflares}\nshows two examples of these large-amplitude, long-decay flares. The\nidentified flares have peak intensities that range from $A = 0.09$ to\n$A = 4.21$ and decay time-scales that range from $\\tau = 4$~minutes to\n$\\tau = 1.7$~hours.\n\n\\begin{figure}[]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f9.eps}\n\\caption{Two examples of large-amplitude, long-decay flares seen in\n the HATNet light curves. In each case the solid line shows the fit\n of eq.~\\ref{eqn:flare} to the light curve that was done in\n automatically identifying candidate flares.}\n\\label{fig:exampleflares}\n\\end{figure}\n\n\\subsection{Comparison of Selection Methods}\\label{sec:comp}\n\nAs seen in Table~\\ref{tab:selectionstats} more stars pass the\nautomatic selections for AoV or for DACF than pass the automatic\nselections for AoVHarm. On the other hand, there are more robust\ndetections found by AoVHarm than by the other methods. The latter\nresult may be due, in part, to a bias toward sinusoidal signals in the\nby-eye selection. However, taking it at face value, it appears that\nthe AoVHarm method is a more robust period-finder for this sample of\nlight curves than either the AoV or the DACF methods. For the\n\\ensuremath{1075}{} TFA light curves that are classified as\nrobust detections during the visual inspections for both the AoV and\nAoVHarm methods, the S\/N for the AoVHarm detection is greater than the\nS\/N for the AoV detection in all but\n\\ensuremath{12}{} cases. This is in line with the\nlong-known fact that Fourier-based period-finding methods generally\nyield higher S\/N detections than phase-binning methods, even in cases\nwhere the signal is non-sinusoidal \\citep{Kovacs.80}.\n\nA direct comparison between the DACF and the AoVHarm or AoV methods is\ndifficult since the selection for DACF is quite different from the\nselections for AoV or AoVHarm. However, it is apparent from\ntable~\\ref{tab:selectionstats} that, at least for our selection\nthresholds, the DACF detections are generally found to be less\nreliable than the AoVHarm or AoV detections. Again there may be a bias\nin the by-eye selection against the types of variables that pass DACF,\nhowever it is also likely that the increased complexity in identifying\na period from the auto-correlation function results in more false\nalarms than are generated by the periodogram-based methods.\n\nAs expected, BLS appears to be the best technique for identifying\neclipsing binaries; \\ensuremath{89}{} out of the \\ensuremath{95}{} potential\neclipsing binaries identified in our survey are selected by BLS,\nincluding \\ensuremath{21}{} that are identified exlusively by BLS. The\nsecond most successful method for identifying EBs is the AoV method\nwhich identified \\ensuremath{70}{} of the \\ensuremath{95}{} potential eclipsing\nbinaries, and exclusively identified \\ensuremath{4}{} of them.\n\n\\section{Variability Blending}\\label{sec:blend}\n\nWhile the wide FOV of the HATNet telescopes allows a significant\nnumber of bright stars to be simultaneously observed, the downside to\nthis design is that the pixel scale is necessarily large, so a given\nlight curve often includes flux contributions from many\nstars. Blending is a particularly significant issue in high stellar\ndensity fields near the Galactic plane. A star blended with a nearby\nvariable star may be incorrectly identified as a variable based on its\nlight curve. 
If the stars are separated by more than a pixel or two,\nit may be possible to distinguish the real variable from the blend by\ncomparing the amplitudes of their light curves. However, because\nphotometry is only obtained for stars down to a limiting magnitude\n(the value used varies from field to field), in many cases we do not\nhave light curves for all the faint neighbors near a given candidate\nvariable star, so we cannot easily determine which star is the true\nvariable. In these cases we can still give an indication of whether or\nnot a candidate is likely to be a blend by determining the expected\nflux contribution from all neighboring stars to the candidate's light\ncurve.\n\nTo determine whether or not a candidate is blended with a nearby\nvariable star that has a light curve, we measure the peak-to-peak\nlight curve amplitude (in flux) of all stars within 2$\\arcmin$ of the\ncandidate. If any neighbor has an amplitude that is greater than twice\nthe flux amplitude of the candidate, the candidate is flagged as a\nprobable blend. If any neighbor has an amplitude that is between half\nand twice the flux amplitude of the candidate, the candidate is\nflagged as a potential blend. If any neighbor has an amplitude that is\nbetween 10\\% and half the flux amplitude of the candidate, the\ncandidate is flagged as an unlikely blend. And finally we flag the\ncandidate as a non-blend if all neighbors have amplitudes that are\nless than 10\\% that of the candidate. We determine the amplitude of a\nlight curve by fitting to it 10 different Fourier series of the form:\n\\begin{equation}\nm(t) = m_{0} + \\sum_{i=1}^{N}a_{i}\\sin \\left( 2\\pi it\/P + \\phi_{i}\\right)\n\\end{equation}\nwith $N$ ranging from 1 to 10. Here $P$ is the period of the light\ncurve. We perform an F-test to determine the significance of each fit\nrelative to fitting a constant function to the light curve, and choose\nthe amplitude of the Fourier series with the lowest false alarm\nprobability. If the lowest false alarm probability is greater than\n10\\% we set the amplitude to zero. We try all periods identified for\neach candidate by the variability searches described in\n\\S~\\ref{sec:selection}, and adopt the largest amplitude found. For\ncandidates that have light curves from multiple fields, or that have\nboth ISM and AP reductions, we do the amplitude test on each separate\nfield\/reduction and adopt the most significant blending flag found for\nthe candidate. If the amplitude of the candidate variable star is set\nto zero for a given field\/reduction we do not use that field\/reduction\nin determining the blending flag. If this is true for all\nfields\/reductions we flag the candidate as problematic. We use the EPD\nlight curves in doing this test.\n\nTo determine whether or not a candidate is potentially blended with a\nnearby faint variable star that does not have a light curve, we\ncompare the observed amplitude of the candidate to its expected\namplitude if a neighboring star were variable with an intrinsic\namplitude of $1.0~{\\rm mag}$. We assume that an amplitude of $1.0~{\\rm\n mag}$ is roughly the maximum value that one might expect for a short\nperiod variable star. If the measured amplitude is less than the\nexpected amplitude then we flag the candidate as a potential blend, if\nit is greater than the expected amplitude and less than twice the\nexpected amplitude we flag the candidate as an unlikely blend, and if\nit is greater than twice the expected amplitude, we flag it as a\nnon-blend. 
The test is done for all stars within 2$\\arcmin$ of the\ncandidate that do not have a light curve, and we adopt the most\nsignificant blending flag found for the candidate. To determine the\nexpected amplitude of the candidate star induced by the neighbor, we\nnote that a star with magnitude $m_{1}$ located near a variable star\nwith magnitude $m_{2}$ and amplitude $\\Delta m_{2} > 0$, has an\nexpected light curve amplitude that is given by\n\\begin{eqnarray}\n\\lefteqn{\\Delta m_{1,AP} = 2.5\\log_{10} \\left[ f_{1}10^{-0.4m_{1}} + f_{2}10^{-0.4(m_{2} - \\Delta m_{2})} \\right]} \\nonumber\\\\\n&&\\mbox{} - 2.5\\log_{10} \\left[ f_{1}10^{-0.4m_{1}} + f_{2}10^{-0.4m_{2}} \\right] \\hspace{0.9in}\n\\label{eqn:blendAP}\n\\end{eqnarray}\nfor the case of aperture photometry, and by\n\\begin{eqnarray}\n\\lefteqn{\\Delta m_{1,ISM} = } \\nonumber \\\\\n&&2.5\\log_{10} \\left[ f_{1}10^{-0.4m_{1}} + f_{2}10^{-0.4m_{2}}\\left( 10^{0.4\\Delta m_{2}} - 1\\right) \\right] \\nonumber \\\\\n&&\\mbox{} - 2.5\\log_{10} \\left[ f_{1}10^{-0.4m_{1}} \\right] \\hspace{2.0in}\n\\label{eqn:blendISM}\n\\end{eqnarray}\nfor the case of image subtraction photometry. The two expressions\ndiffer because in the ISM pipeline photometry is done on difference\nimages (only differential flux is summed in the aperture), whereas in\nthe AP pipeline photometry is done directly on the science images (all\nstellar flux is summed in the aperture). Here $f_{1,2}$ is the\nfraction of the flux from star 1(2) that falls within the aperture, and we use the catalog values (transformed from 2MASS) for $m_{1}$ and $m_{2}$. To\ndetermine $f_{1}$ and $f_{2}$ we integrate the intersection between\nthe circular aperture and a Gaussian PSF which we assume to have a\nFWHM of 2 pixels (this is a typical effective ``seeing'' for both\nthe 2K and 4K images), we do not consider pixelation effects in making\nthis estimate. \n\nWe compare the results from the two blending tests for each candidate,\nand adopt the most significant blending flag from among the two tests\nfor the catalog. We do not run the test on the candidate flare stars,\nunless the star was selected as a variable by another method as\nwell. Out of the \\ensuremath{3474}{} stars that are in either the\nfirst or the second catalog, \\ensuremath{936}{} are\nflagged as unblended, \\ensuremath{451}{} are flagged\nas unlikely blends, \\ensuremath{1397}{} are flagged\nas potential blends, \\ensuremath{399}{} are flagged\nas probable blends, and \\ensuremath{291}{} are found to\nhave problematic amplitudes (cases where the amplitude measuring\nalgorithm failed for the star in question).\n\n\\section{Match to Other Catalogs}\\label{sec:match}\n\n\\subsection{Match to Other Variable Star Surveys}\\label{sec:varmatch}\n\nWe match all \\ensuremath{3496}{} stars selected as potential\nvariables to the combined General Catalogue of Variable Stars\n\\citep[GCVS;][]{Samus.06}, the New Catalogue of Suspected Variable\nStars \\citep[NSV;][]{Kholopov.82} and its supplement\n\\citep[NSVS;][]{Kazarovets.98}\\footnote{The GCVS, NSV and NSVS were\n obtained from http:\/\/www.sai.msu.su\/groups\/cluster\/gcvs\/gcvs\/ on\n 2009 April 7}. We also match to the ROTSE catalog of variable stars\n\\citep{Akerlof.00}, to the ALL Sky Automated Survey Catalogue of\nVariable Stars \\citep[ACVS;][]{Pojmanski.02}\\footnote{Version 1.1\n obtained from http:\/\/www.astrouw.edu.pl\/asas\/?page=catalogues}, and\nto the Super-WASP catalogue of periodic variables coincident with\nROSAT X-ray sources \\citep{Norton.07}. 
In all cases we use a\n$2\arcmin$ matching radius. We use a large matching radius to include\nmatches to known variables that may be blended with stars in our\nsample. In total \ensuremath{77}{} of our candidate variables lie within\n$2\arcmin$ of a source in one of these catalogs, meaning that\n\ensuremath{3419}{} are new identifications. This includes \ensuremath{36}{}\nthat match to a source in the GCVS, \ensuremath{4}{} that match to a\nsource in the NSV, \ensuremath{7}{} that match to a source in the NSVS,\n\ensuremath{4}{} that match to a source in the ACVS, \ensuremath{8}{}\nthat match to a source in the ROTSE catalog (\ensuremath{4}{} of\nwhich are in the GCVS as well), and \ensuremath{23}{} that match to a\nSuper-WASP source (\ensuremath{2}{} of these are in their catalogue\nof previously identified variables). Two of the \ensuremath{36}{}\ncandidate variables that match to a source in the GCVS\n(HAT-215-0001451 and HAT-215-0001491) actually match to the same\nsource, V1097~Tau, a weak emission-line T~Tauri star. Both stars are\nflagged as probable blends in our catalog; in this case\nHAT-215-0001491 is the correct variable while HAT-215-0001451 is the\nblend.\n\nWe inspect each of the \ensuremath{77}{} candidates with a potential match\nand find that the match is incorrect for \ensuremath{39}{} of them and\ncorrect for \ensuremath{38}{}. For \ensuremath{23}{} of\nthe \ensuremath{39}{} incorrect matches the candidate variable is\nflagged as a probable blend in our catalog. In\n\ensuremath{8}{} cases the candidate variable is flagged\nas a potential blend, in \ensuremath{4}{} cases it is\nflagged as an unlikely blend, in \ensuremath{3}{} cases it is\nflagged as unblended, and in \ensuremath{1}{} case the\namplitude is considered problematic. The match appears to be correct\nfor four of the candidate variables flagged as probable blends. In\naddition to HAT-215-0001491, the stars HAT-239-0000221 and\nHAT-239-0000513 both match correctly to sources in the GCVS. These\nstars form a common proper motion, low mass binary system. Both stars\nare flagged as probable blends in our catalog. Each matches\nseparately, and correctly, to a flare star in the GCVS (V0647~Her and\nV0639~Her respectively). For the NSV-matching probable blend candidate\nHAT-121-0003519 the match may be correct, though the positional\nuncertainty of the NSV source is high. A variable star classification\nis not available for this source.\n\nThe \ensuremath{16}{} variables that match correctly to a source\nin the GCVS include 4 BY Draconis-type rotational variables, 6 UV\nCeti-type flare stars, 1 INT class Orion variable of the T Tauri\ntype, and 5 eclipsing systems. The EBs include the two W UMa-type\ncontact systems DY CVn and V1104 Her, the two Algol-type systems DK\nCVn and V1001 Cas, and the M3V\/white dwarf EB DE CVn\n\citep[][]{VanDenBesselaar.07}.\n\nTable~\ref{tab:cross}, at the end\nof the paper, lists the first ten cross-identifications; the full\ntable is available electronically with the rest of the catalog.\n\n\subsection{Match to ROSAT}\label{sec:xray}\n\nWe match all \ensuremath{3496}{} stars selected as potential\nvariables to the ROSAT All-sky survey source catalog \citep{Voges.99}\nusing the US National Virtual Observatory catalog matching\nutilities. We use a 3.5$\sigma$ positional matching criterion. 
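\nA positional cross-match of this kind can be sketched with astropy\n(an illustration only; the ROSAT match described here was performed\nwith the NVO utilities using a 3.5$\sigma$ criterion based on the\nper-source positional uncertainties rather than a fixed radius):\n\begin{verbatim}\nimport astropy.units as u\nfrom astropy.coordinates import SkyCoord\n\ndef crossmatch(ra1, dec1, ra2, dec2, radius=2.0 * u.arcmin):\n    # For each star in catalog 1, find the nearest source in\n    # catalog 2 and flag pairs separated by less than `radius`.\n    c1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)\n    c2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)\n    idx, sep2d, _ = c1.match_to_catalog_sky(c2)\n    return idx, sep2d < radius\n\end{verbatim}\n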
A total\nof \\ensuremath{248}{} of the variables match to an X-ray source,\nincluding \\ensuremath{237}{} stars in our primary catalog of\nperiodic variables, \\ensuremath{14}{} of the EBs, and\n\\ensuremath{24}{} of the \\ensuremath{60}{} flare stars. A few of\nthe variable stars are close neighbors where one is likely to be a\nblend of the other, so there are \\ensuremath{243}{} distinct\nX-ray sources that are matched to. Table~\\ref{tab:xray} at the end of\nthe paper gives the cross-matches. The full table is available\nelectronically with the rest of the catalog. In Section~\\ref{sec:rot}\nwe discuss the X-ray properties of the rotational variables.\n\n\\section{Discussion}\\label{sec:discussion}\n\n\\subsection{Eclipsing Binaries}\\label{sec:eb}\n\nThe \\ensuremath{95}{} stars that we identify as potential EBs have periods\nranging from $P = \\ensuremath{ 0.193}{}~{\\rm days}$ to\n$P=\\ensuremath{24.381}{}~{\\rm days}$. We flag \\ensuremath{11}{} of\nthe candidate EBs as probable blends, \\ensuremath{23}{} as\npotential blends, \\ensuremath{21}{} as unlikely blends,\n\\ensuremath{35}{} as unblended, and \\ensuremath{5}{} as having\nproblematic amplitudes. Figure~\\ref{fig:exampleEBs} shows phased light\ncurves for 12 of the EBs.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f10_small.eps}\n\\caption{Example phased light curves for 12 of the \\ensuremath{95}{} potential EBs found in the survey. The period listed is in days.}\n\\label{fig:exampleEBs}\n\\end{figure}\n\nIn addition to matching the candidate EBs to other variable star\ncatalogs (\\S~\\ref{sec:varmatch}) and to ROSAT (\\S~\\ref{sec:xray}) we\nalso checked for matches to previously studied objects using\nSIMBAD. The following objects had noteworthy matches:\n\\begin{enumerate}\n\\item \\emph{HAT-148-0000574}: matches to the X-ray source 1RXS\n J154727.5+450803, and was previously discovered to be an SB2 system\n by \\citet{Mochnacki.02}, we discuss this system in detail in\n \\S~\\ref{sec:RXJ1547}.\n\\item \\emph{HAT-216-0002918} and \\emph{HAT-216-0003316}: match to CCDM\n J04404+3127A and CCDM J04404+3127B, respectively, which form a\n common proper-motion 15$\\arcsec$ binary system. The two stars are\n both selected as candidate EBs with the same period, and are both\n flagged as potential blends in the catalog; based on a visual\n inspection of the light curves we conclude that the fainter\n component (HAT-216-0003316) is most likely the true $P=2.048~{\\rm\n day}$ EB. The fainter component, which has spectral type M3 on\n SIMBAD, also matches to the X-ray source RX J0440.3+3126.\n\\item \\emph{HAT-127-0008153}: matches to CCDM J03041+4203B, which is\n the fainter component in a common proper-motion $20\\arcsec$ binary\n system. This star is flagged as a potential blend, the brighter\n component appears to match to the X-ray source 1RXS\n J030403.8+420319. Based on a visual inspection of the light curves\n we conclude that HAT-127-0008153 is likely the true variable.\n\\item \\emph{HAT-169-0003847}: is $24\\arcsec$ from the Super-WASP\n variable 1SWASP~J034433.95+395948.0, the two stars are blended in\n the HATNet images, however from a visual inspection of the light\n curves we conclude that HAT-169-0003847 is likely the true variable.\n\\item \\emph{HAT-192-0001841}: is $46\\arcsec$ from a high\n proper-motion, K0 star BD+41~2679. 
The two stars may be members of a\n common proper-motion binary system (the former has a proper motion\n of $65.79$, $-151.77~{\rm mas\/yr}$ in RA and DEC respectively, while\n the latter has $89.76$, $-117.14~{\rm mas\/yr}$). HAT-192-0001841 is\n flagged as an unlikely blend, and we confirm that BD+41~2679 is not\n the true variable.\n\item \emph{HAT-193-0008020}: matches to GSC~03063-02208 and has a\n spectral type of M0e listed on SIMBAD \citep[see also][]{Mason.00}.\n\item \emph{HAT-216-0007033}: matches to the X-ray source\n RX~J0436.1+2733 and has spectral type M4 listed on SIMBAD.\n\item \emph{HAT-341-0019185}: is $43\arcsec$ from TYC~1097-291-1; the\n two stars appear to be members of a common proper motion binary\n system. HAT-341-0019185 is flagged as a probable blend in our\n catalog, though it does not appear that TYC~1097-291-1 is the real\n variable.\n\end{enumerate}\n\n\subsubsection{The Low-mass EB 1RXS~J154727.5+450803}\label{sec:RXJ1547}\n\nThe EB HAT-148-0000574 matches to 1RXS J154727.5+450803. Using RV\nobservations obtained with the Cassegrain spectrograph on the David\nDunlap Observatory (DDO) 1.88~m telescope\footnote{Based on data\n obtained at the David Dunlap Observatory, University of Toronto},\n\citet{Mochnacki.02} found that this object is a $P=3.54997 \pm\n0.00005~{\rm day}$ double-lined spectroscopic binary system with\ncomponent masses $\ga 0.26~M_{\odot}$. This system, however, was not\npreviously known to be eclipsing. Here we combine the published RV\ncurves from \citet{Mochnacki.02} with the HATNet I-band light curve to\nprovide preliminary estimates for the masses and radii of the\ncomponent stars.\n\nFigure~\ref{fig:RXJ1547lc} shows the EPD HATNet light curve phased at\na period of $P = 3.550018$ days together with a model fit, while\nfigure~\ref{fig:RXJ1547RV} shows a fit to the radial velocity\nobservations taken from \citet{Mochnacki.02}. Note the out of eclipse\nvariations in the light curve, presumably due to spots on one or both\nof the components, which indicates that the rotation period of one or\nboth of the stars is tidally locked to the orbital period. Since the\nHATNet light curve is not of high enough quality to measure the radii\nto better than a few percent precision, we do not fit a detailed spot\nmodel to the light curve, and instead simply fit a harmonic series to\nthe out of eclipse observations and then subtract it from the full\nlight curve. We model the light curve using the JKTEBOP program\n\citep{Southworth.04a,Southworth.04b}, which is based on the Eclipsing\nBinary Orbit Program \citep[EBOP;][]{Popper.81,Etzel.81,Nelson.72},\nbut includes more sophisticated minimization and error analysis\nroutines. We used the DEBiL program \citep{Devor.05} to measure the\neclipse minimum times from the light curve, which in turn were used\nwith the RV curves to determine the ephemeris. In modeling the RV\ncurves we fix $e = 0$ \citep[][found $e = 0.008 \pm\n 0.007$]{Mochnacki.02}, and we fix $k = R_{2}\/R_{1} = 1.0$ in\nmodeling the light curve given $q = 1.00 \pm 0.02$ from the fit to\nthe RV curves (the light curve is not precise enough to provide a\nmeaningful constraint on $k$). For completeness we note that we\nassumed quadratic limb darkening coefficients of $a = 0.257$, $b =\n0.586$ for both stars \citep{Claret.00}, which are appropriate for a\n$T_{\rm eff} = 3000~{\rm K}$, $\log g = 4.5$, solar metallicity\nstar. 
The results are insensitive to the adopted limb darkening\ncoefficients; we also performed the fit using the coefficients\nappropriate for a $T_{\\rm eff} = 4000~{\\rm K}$, $\\log g = 4.5$ star\nand found negligible differences in the resulting parameters and\nuncertainties. The parameters for the system are given in\ntable~\\ref{tab:RXJ1547param}. Note that the $1\\sigma$ errors given on\nthe masses and radii are determined from a Monte Carlo simulation\n\\citep{Southworth.05}. These are likely to be overly optimistic given\nthe inaccurate treatment of the spots, and our assumption that the\ncomponent radii are equal.\n\n\\begin{deluxetable}{lr}\n\\tabletypesize{\\scriptsize}\n\\tablewidth{0pc}\n\\tablecaption{Parameters for the EB 1RXS J154727.5+450803}\n\\ifthenelse{\\boolean{emulateapj}}{}{\\tablehead{\n\\colhead{Parameter} & \\colhead{Value}\n}}\n\\startdata\n\\cutinhead{Coordinates and Photometry}\nRA (J2000) & 15:47:27.42\\tablenotemark{a} \\\\\nDEC (J2000) & +45:07:51.39\\tablenotemark{a} \\\\\nProper Motions [mas\/yr] & -259, 200\\tablenotemark{b} \\\\\nJ & $9.082~{\\rm mag}$\\tablenotemark{c} \\\\\nH & $8.463~{\\rm mag}$\\tablenotemark{c} \\\\\nK & $8.215~{\\rm mag}$\\tablenotemark{c} \\\\\n\\cutinhead{Ephemerides}\nP & $3.5500184 \\pm 0.0000018~{\\rm day}$ \\\\\nHJD & $2451232.89534 \\pm 0.00094$ \\\\\n\\cutinhead{Physical Parameters}\n$M_{1}$ & $0.2576 \\pm 0.0085~M_{\\odot}$\\\\\n$M_{2}$ & $0.2585 \\pm 0.0080~M_{\\odot}$\\\\\n$R_{1}=R_{2}$ & $0.2895 \\pm 0.0068~R_{\\odot}$\\\\\n\\cutinhead{RV Fit Parameters}\n$\\gamma$ & $-21.21 \\pm 0.41~{\\rm km\/s}$ \\\\\n$K_{1}$ & $55.98 \\pm 0.76~{\\rm km\/s}$ \\\\\n$K_{2}$ & $55.78 \\pm 0.83~{\\rm km\/s}$ \\\\\n$e$ & 0.0\\tablenotemark{d}\\\\\n\\cutinhead{LC Fit Parameters}\n$J_{2}\/J_{1}$ & $1.0734 \\pm 0.030$ \\\\\n$(R_{1} + R_{2})\/a$ & $0.0737 \\pm 0.0014$ \\\\\n$R_{2}\/R_{1}$ & 1.0\\tablenotemark{e}\\\\\n$i$ & $86.673^{\\circ} \\pm 0.068^{\\circ}$ \\\\\n\\enddata\n\\tablenotetext{a}{SIMBAD}\n\\tablenotetext{b}{PPMX}\n\\tablenotetext{c}{2MASS}\n\\tablenotetext{d}{fixed}\n\\tablenotetext{e}{fixed based on $q = 1.004 \\pm 0.020$ from fitting the RV curves.}\n\\label{tab:RXJ1547param}\n\\end{deluxetable}\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.8}}\n\\plotone{f11.eps}\n\\caption{Phased HATNet I-band light curve for the low-mass EB 1RXS J154727.5+450803. The top panel shows the full EPD light curve, the bottom two panels show a model fit to the two eclipses after subtracting a harmonic series fit to the out of eclipse portion of the light curve.}\n\\label{fig:RXJ1547lc}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f12.eps}\n\\caption{Circular orbit fit to the RV curves for the low-mass EB 1RXS J154727.5+450803. The observations are taken from \\citet{Mochnacki.02}.}\n\\label{fig:RXJ1547RV}\n\\end{figure}\n\nTable~\\ref{tab:otherEBS} lists the masses and radii of the 4 other\nknown double-lined detached EBs with at least one main sequence\ncomponent that has a mass less than $0.3 M_{\\odot}$. We do not include\nRR Cae which is a white-dwarf\/M-dwarf EB that has presumably undergone\nmass transfer \\citep{Maxted.07}. In figure~\\ref{fig:EBmodelcomp} we\nplot the mass-radius relation for stars in the range $0.15 M_{\\odot} <\nM < 0.3 M_{\\odot}$. 
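\nAs a consistency check on the physical parameters in\nTable~\ref{tab:RXJ1547param}, the masses and radii follow from the\nfitted $P$, $K_{1}$, $K_{2}$, $i$ and $(R_{1}+R_{2})\/a$ through the\nstandard spectroscopic-binary relations. A short sketch (reproducing\nthe central values only, not the Monte Carlo uncertainties):\n\begin{verbatim}\nimport numpy as np\n\nP = 3.5500184            # period [days]\nK1, K2 = 55.98, 55.78    # RV semi-amplitudes [km/s]\ni = np.radians(86.673)   # inclination\nrsum = 0.0737            # (R1 + R2)/a from the light curve fit\ne = 0.0                  # circular orbit (fixed)\n\n# M sin^3 i = 1.036149e-7 (1 - e^2)^(3/2) (K1 + K2)^2 K_other P [Msun]\nf = 1.036149e-7 * (1.0 - e ** 2) ** 1.5 * (K1 + K2) ** 2 * P\nM1 = f * K2 / np.sin(i) ** 3     # ~0.258 Msun\nM2 = f * K1 / np.sin(i) ** 3     # ~0.259 Msun\n\n# a sin i = (K1 + K2) P sqrt(1 - e^2) / (2 pi), converted to Rsun\na = (K1 + K2) * P * 86400.0 * np.sqrt(1.0 - e ** 2) / (2.0 * np.pi)\na /= 6.957e5 * np.sin(i)         # semimajor axis [Rsun]\nR1 = R2 = 0.5 * rsum * a         # ~0.29 Rsun (equal radii assumed)\n\end{verbatim}\nThese reproduce the central values in Table~\ref{tab:RXJ1547param} to\nwithin rounding.\n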
Like the components of CM Dra and the secondary of\nGJ~3236, and unlike the components of SDSS-MEB-1 and the secondary of\n2MASSJ04463285+190432, the components of 1RXS J154727.5+450803 have\nradii that are larger than predicted from the \citet{Baraffe.98}\nisochrones (if the age is $\ga 200~{\rm Myr}$). The radii are $\sim\n10$\% larger than the predicted radius in the $1.0~{\rm Gyr}$, solar\nmetallicity isochrone. High precision photometric and spectroscopic\nfollow-up observations and a more sophisticated analysis of the data\nare needed to confirm this.\n\n\ifthenelse{\boolean{emulateapj}}{\begin{deluxetable*}{lrrrrr}}{\begin{deluxetable}{lrrrrr}}\n\tabletypesize{\scriptsize}\n\tablewidth{0pc}\n\tablecaption{Other Double-Lined EBs with a very late M dwarf component}\n\tablehead{\n\colhead{Name} &\n\colhead{Period [days]} &\n\colhead{$M_{1}$ [$M_{\odot}$]} &\n\colhead{$M_{2}$ [$M_{\odot}$]} &\n\colhead{$R_{1}$ [$R_{\odot}$]} &\n\colhead{$R_{2}$ [$R_{\odot}$]}\n}\n\startdata\nCM Dra & $1.27$ & $0.2310 \pm 0.0009$ & $0.2141 \pm 0.0010$ & $0.2534 \pm 0.0019$ & $0.2396 \pm 0.0015$ \\\nSDSS-MEB-1 & $0.407$ & $0.272 \pm 0.020$ & $0.240 \pm 0.022$ & $0.268 \pm 0.010$ & $0.248 \pm 0.009$ \\\n2MASSJ04463285+190432 & $0.619$ & $0.47 \pm 0.05$ & $0.19 \pm 0.02$ & $0.57 \pm 0.02$ & $0.21 \pm 0.01$ \\\nGJ~3236 & $0.771$ & $0.376 \pm 0.016$ & $0.281 \pm 0.015$ & $0.3795 \pm 0.0084$ & $0.300 \pm 0.015$ \\\n\enddata\n\tablerefs{CM Dra: \citet{Morales.09}; \citet{Lacy.77}; \citet{Metcalfe.96}; SDSS-MEB-1: \citet{Blake.08}; 2MASSJ04463285+190432: \citet{Hebb.06}; GJ~3236: the parameters listed for this system are determined by giving equal weight to the three models in \citet{Irwin.09b}}\n\label{tab:otherEBS}\n\ifthenelse{\boolean{emulateapj}}{\end{deluxetable*}}{\end{deluxetable}}\n\n\begin{figure}[!ht]\n\ifthenelse{\boolean{emulateapj}}{\epsscale{1.2}}{\epsscale{1.0}}\n\plotone{f13.eps}\n\caption{Mass-radius relation for 7 main sequence stars in\n double-lined DEBs with $M < 0.3 M_{\odot}$. The two points with the\n smallest error bars are the components of CM Dra, the open circles\n are the components of SDSS-MEB-1, the open square is the secondary\n component of 2MASSJ04463285+190432, the X is the secondary component\n of GJ~3236, and the filled square marks the two components of 1RXS\n J154727.5+450803. The solid line shows the $1.0~{\rm Gyr}$, solar\n metallicity isochrone from \citet{Baraffe.98}. Note that the error\n bars for 1RXS~J154727.5+450803 do not incorporate systematic errors\n that may result from not properly modeling the spots or allowing\n the stars to have unequal radii.}\n\label{fig:EBmodelcomp}\n\end{figure}\n\n\subsection{Rotational Variables}\label{sec:rot}\n\nFigure~\ref{fig:exampleROTlcs} shows example phased light curves for\n12 randomly selected rotational variables found in our survey. For the\nfollowing analysis we only consider stars in our variables catalog\nthat are identified as reliable detections for at least one search\nalgorithm and that are not flagged as probable blends or as having\nproblematic amplitudes. 
In order of preference, we adopt the AoVHarm\nTFA, AoVHarm EPD, AoV TFA, AoV EPD, DACF TFA or DACF EPD period for\nthe star, choosing the period found by the first method for which the\ndetection is considered secure.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.9}}\n\\plotone{f14_small.eps}\n\\caption{Phased EPD light curves for 12 randomly selected rotational\n variables found in the survey. The period listed is in days. The\n zero-point phase is arbitrary. For this figure we use the period\n found by applying AoVHarm to the TFA light curves.}\n\\label{fig:exampleROTlcs}\n\\end{figure}\n\n\\subsubsection{Period-Amplitude Relation}\\label{sec:periodamp}\n\nFor FGK stars there is a well-known anti-correlation between stellar\nactivity measured from emission in the H and K line cores, or from\nH$\\alpha$ emission, and the Rossby number \\citep[$R_{O}$, the ratio of\n the rotation period to the characteristic time scale of convection,\n see][]{Noyes.84}, which saturates for short periods. Similar\nanti-correlations with saturation have been seen between $R_{O}$ and\nthe X-ray to bolometric luminosity ratio \\citep[e.g.][]{Pizzolato.03},\nand between $R_{O}$ and the amplitude of photometric variability\n\\citep[e.g.][]{Messina.01, Hartman.09}. Main sequence stars with $M\n\\la 0.35 M_{\\odot}$ are fully convective \\citep{Chabrier.97}, so one\nmight expect that the rotation-activity relation breaks down, or\nsignificantly changes, at this mass. Despite this expectation, several\nstudies have indicated that the rotation-activity relation (measured\nusing $v \\sin i$ and H$\\alpha$ respectively) continues for late\nM-dwarfs \\citep{Delfosse.98, Mohanty.03, Reiners.07}. Recently,\nhowever, \\citet{West.09} have found that rotation and activity may not\nalways be linked for these stars.\n\nIn figure~\\ref{fig:PeriodvsAmp} we plot the rotation period against\nthe peak-to-peak amplitude for stars in several color bins. The\npeak-to-peak amplitude is calculated for the EPD light curves as\ndescribed in section~\\ref{sec:blend}; for stars observed in multiple\nfields we take the amplitude of the combined light curve. Stars\nwithout an available EPD light curve, or for which the amplitude\nmeasuring algorithm failed, are not included in the plot. A total of\n\\ensuremath{1525}{} stars are included in the plot. For stars\nwith $V-K_{S} < 5.0$ (corresponding roughly to $M \\ga 0.25 M_{\\odot}$)\nthe photometric amplitude and the period are anti-correlated at high\nsignificance. There appears to be a cut-off period, such that the\nperiod and amplitude are uncorrelated for stars with periods shorter\nthan the cut-off, and are anti-correlated for stars with periods\nlonger than the cut-off. In other words, the relations are saturated\nbelow a critical rotation period. \\citet{Hartman.09} find a saturation\nthreshold of $R_{O} = 0.31$, which for a $0.6~M_{\\odot}$ star\ncorresponds to a period of $\\sim 8~{\\rm days}$ \\citep[assuming\n $(B-V)_{0} = 1.32$ for stars of this mass, and using the empirical\n relation between the convective time scale and $(B-V)_{0}$\n from][]{Noyes.84}. This is consistent with what we find for the\nbluest stars in our sample. For stars with $V-K_{S} > 5.0$ ($M \\la\n0.25~M_{\\odot}$), the period and amplitude are not significantly\ncorrelated, at least for periods $\\la 30~{\\rm days}$. 
This result\nsuggests that the distribution of starspots on late M dwarfs is\nuncorrelated with rotation period over a large period range, and is\nperhaps at odds with H$\\alpha$\/$v \\sin i$ studies which indicate a\ndrop in activity for very late M-dwarf stars with $v \\sin i \\la\n10~{\\rm km\/s}$ \\citep[e.g.][]{Mohanty.03}.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.8}}\n\\plotone{f15.eps}\n\\caption{Rotation period vs. peak-to-peak photometric amplitude for\n \\ensuremath{1525}{} stars. We divide the sample into 4 bins\n by $V-K_{S}$ color, corresponding roughly to $M \\ga 0.6 M_{\\odot}$,\n $0.5 M_{\\odot} \\la M \\la 0.6 M_{\\odot}$, $0.25 M_{\\odot} \\la M \\la\n 0.5 M_{\\odot}$, and $M \\la 0.25 M_{\\odot}$, from blue to red. We\n also list the Spearman rank-order correlation coefficient and the\n statistical significance of the correlation for each sample (note\n that negative values of $r_{S}$ imply that the period and amplitude\n are anti-correlated). For stars with $V-K_{S} < 5.0$ the period and\n peak-to-peak amplitude are anti-correlated with high\n significance. The relation appears to be saturated for periods $\\la\n 5~{\\rm days}$, with hints that the saturation period increases for\n decreasing stellar mass. For stars with $V-K_{S} > 5.0$ ($M \\la 0.25\n M_{\\odot}$), the period and amplitude are not significantly\n correlated for $P \\la 30~{\\rm days}$.}\n\\label{fig:PeriodvsAmp}\n\\end{figure}\n\n\\subsubsection{Period-Color\/Period-Mass Relation}\n\nIn figure~\\ref{fig:VarColorHist} we compare the distribution of\n$V-K_{S}$ colors for periodic variables to the distribution for all\nstars in the sample. We also show the fraction of stars that are\nvariable with peak-to-peak amplitudes greater than 0.01 mag as a\nfunction of $V-K_{S}$.\n\nThe plotted relation has been corrected for completeness by conducting\nsinusoid injection\/recovery simulations to estimate our detection\nefficiency. In conducting these tests we divide the sample into 90\nperiod\/amplitude\/color bins. We use color bins of $2.0 < V-K_{S} <\n3.5$, $3.5 < V-K_{S} < 4.0$, $4.0 < V-K_{S} < 4.5$, $4.5 < V-K_{S} <\n5.0$ and $5.0 < V-K_{S} < 6.0$, three period bins of 0.1-1~day,\n1-10~days, and 10-100~days, and 5 amplitude bins logarithmically\nspanning 0.01 to 1.0 mag. While ideally a much finer grid would be\nused for these simulations, we were limited by computational resources\nto this fairly coarse sampling. For each bin we randomly select 1000\nstars with the appropriate color (for color bins with fewer than 1000\nstars we select with replacement). For each selected star we then\nchoose a random period and amplitude drawn from uniform-log\ndistributions over the bin and inject a sine curve with that\nperiod\/amplitude and a random phase into the light curve of the\nstar. If both EPD and TFA light curves are available for the star we\ninject the same signal into both light curves. We do not reduce the\namplitude for the injection into the TFA light curve; this may cause\nus to slightly overestimate our detection efficiency. If the star was\nidentified as a variable or a potential variable by our survey, we\nfirst remove the true variable signal from the light curve by fitting\na harmonic series to the phased light curve before injecting the\nsimulated signal. We then process the simulated signals through the\nAoV, AoVHarm and DACF algorithms using the same selection parameters\nas used for selecting the real variables. 
We do not apply by-eye\nselections on the simulated light curves, so our detection efficiency\nmay be overestimated (particularly for longer period stars where the\nby-eye selection tended to be stricter than the automatic cuts). To\nget the completeness corrected variability fraction we weight each\nreal detected variable by $1\/f$ where $f$ is the fraction of simulated\nsignals that are recovered for the period\/amplitude\/color bin that the\nreal variable falls in. We find that we are roughly $\\sim 80\\%$\ncomplete over our sample of stars for peak-to-peak amplitudes greater\nthan 0.01 mag and periods between 0.1 and 100 days. Considering the\nrecovery fraction separately for each variability method, we find that\n$\\sim 91\\%$ of the simulations are recovered by AoVHarm, and in $\\sim\n97\\%$ of the recovered simulations the recovered frequency is within\n$0.001~{\\rm day}^{-1}$ of the injected frequency. For AoV we again\nfind that $\\sim 91\\%$ of the simulations are recovered, but that\nfraction of recovered simulations where the recovered frequency is\nwithin $0.001~{\\rm day}^{-1}$ of the injected frequency is $\\sim 93\\%$\nin this case. For the DACF method we find that only $\\sim 34\\%$ of the\nsimulations are recovered, although if we consider only simulations\nwhere the injected period is greater than 10 days the fraction of\nsimulations that are recovered by DACF is then $\\sim 81\\%$. The\nfraction of these latter recoveries for which the recovered frequency\nis within $0.001~{\\rm day}^{-1}$ of the injected frequency is only\n$\\sim 51\\%$ in this case, however. For the simulations the recovery\nfrequency is relatively insensitive to the period and color and\ndepends most significantly on the amplitude. For amplitudes between\n0.01 mag and 0.022 mag the recovery fraction is $\\sim 65\\%$, whereas\nfor amplitudes above 0.05 mag the recovery fraction is $\\sim\n90\\%$. Above 0.05 mag the recovery fraction is independent of\namplitude. As noted above, the estimated completeness of $\\sim 80\\%$\nis likely to be overly optimistic since we do not include the by-eye\nselection, do not account for the reduction in signal amplitude by TFA\nand use relatively ``easy-to-find'' sinusoid signals. Nonetheless we\nexpect that the recovery frequency is $\\ga 70\\%$.\n\nAs seen in figure~\\ref{fig:VarColorHist} the fraction of stars that\nare detected as variables increases steeply with decreasing stellar\nmass. While only $\\sim 3\\%$ of stars with $M \\ga 0.7~M_{\\odot}$ are\nfound to be variable, approximately half of the stars with $M \\la\n0.2~M_{\\odot}$ are detected as variables with peak-to-peak amplitudes\n$\\ga 0.01~{\\rm mag}$ (fig.~\\ref{fig:PeriodvsAmp}). We find that an\nexponential relation of the form\n\\begin{equation}\\label{eq:fracvar}\n{\\rm Var.~Frac.} = (0.0034 \\pm 0.0008) e^{(0.84 \\pm 0.06)(V-K_{S})}\n\\end{equation}\nfits the observed relation over the color range $2.0 < V-K_{S} < 6.0$.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.4}}\n\\plotone{f16.eps}\n\\caption{Top: The distribution of $V-K_{S}$ colors for the\n \\ensuremath{1849}{} stars in our catalog of periodic variables\n that are flagged as being secure detections for at least one\n detection method and are not flagged as probable blends or as having\n problematic amplitudes, compared to the distribution of $V-K_{S}$\n colors for all \\ensuremath{27,560}{} stars in the sample. 
On the top axis\n we show the corresponding main sequence stellar masses determined by\n combining the empirical $V-K_{S}$ vs. $M_{K}$ main-sequence for\n stars in the Solar neighborhood given by \\citet{Johnson.09} with the\n mass-$M_{K}$ relation from the \\citet{Baraffe.98} 4.5~Gyr,\n solar-metallicity isochrone with $L_{\\rm mix} = 1.0$. We used the\n relations from \\citet{Carpenter.01} to convert the CIT $K$\n magnitudes from the isochrones into the 2MASS system. The\n distribution for variable stars is biased toward redder $V-K_{S}$\n colors relative to the distribution for all stars. The decrease in\n the total number of stars in the sample red-ward of $V-K_{S} \\sim\n 3.5$ is due to the $V$-band magnitude limit of the PPMX\n survey. Bottom: The completeness corrected fraction of stars that\n are variable with peak-to-peak $R$ or $I_{C}$ amplitude $>0.01$~mag\n as a function of $V-K_{S}$; this fraction increases exponentially\n with color (solid line, eq.~\\ref{eq:fracvar}). While only $\\sim 3\\%$\n of stars with $M \\ga 0.7~M_{\\odot}$ are found to be variable at the\n $\\ga 1\\%$ level, approximately half of the stars with $M \\la\n 0.2~M_{\\odot}$ are variable at this level.}\n\\label{fig:VarColorHist}\n\\end{figure}\n\nIn figure~\\ref{fig:PeriodDist} we show the relation between period and\ncolor. For stars with $V-K_{S} \\la 4.5$ the measured distribution of\n$\\log P$ is peaked at $\\sim 20~{\\rm days}$ with a broad tail toward\nshorter periods and a more rapid drop-off for longer periods. Note\nthat the cut-off for longer periods may be due to the correlation\nbetween period and amplitude for these stars; stars with periods\nlonger than $\\sim 20~{\\rm days}$ may be harder to detect and not\nintrinsically rare. The peak of the distribution appears to be\ncorrelated with color such that redder stars are found more commonly\nat longer periods than bluer stars. For stars with $V-K_{S} \\ga 4.5$\nthe distribution changes significantly such that the $\\log P$\ndistribution appears to be more or less flat between $\\sim 0.3$ and\n$\\sim 10~{\\rm days}$, while red stars with $P \\ga 10~{\\rm days}$ are\nuncommon. The morphology is broadly consistent with what has been seen\nfrom other surveys.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.5}}\n\\plotone{f17.eps}\n\\caption{Top: Period vs. $V-K_{S}$ for\n \\ensuremath{1785}{} stars in our catalog of periodic\n variables that are flagged as being secure detections for at least\n one detection method, are not flagged as having incorrect period\n determinations for that method, and are not flagged as probable\n blends or as having problematic amplitudes. There appears to be a\n paucity of stars with $P > 10~{\\rm days}$ and $V-K_{S} > 4.5$ or $V\n - K_{S} < 3$. Bottom: The distributions of periods are shown for\n four color bins. For the three bluest color bins there appears to be\n a correlation between period and color, such that the mode period is\n longer for redder stars. For stars with $V-K_{S} \\ga 4.5$, on the\n other hand, the period distribution appears to be biased to shorter\n periods. Using a K-S test, we find that the probability that the\n stars in any two of the different color bins are drawn from the same\n distribution is less than $0.01\\%$ for all combinations except the\n for the combination of the two intermediate color bins. 
For that\n combination the probability is $\\sim 0.7\\%$.}\n\\label{fig:PeriodDist}\n\\end{figure}\n\nTo demonstrate this, in Figure~\\ref{fig:PeriodMassComp} we compare the\nmass-period distribution for stars in our survey to the results from\nother surveys of field stars and open clusters. We choose to use mass\nfor the comparison rather than observed colors because a consistent\nset of colors is not available for all surveys. The masses for stars\nin our survey are estimated from their $V-K_{S}$ colors (see\nFig.~\\ref{fig:VarColorHist}). We take data from the Mount Wilson and\nVienna-KPNO \\citep{Strassmeier.00} samples of field stars with\nrotation periods. For the Mount Wilson sample we use the compilation\nby \\citet{Barnes.07}, the original data comes from \\citet{Baliunas.96}\nand from \\citet{Noyes.84}. For the Vienna-KPNO sample we only consider\nstars which are listed as luminosity class V. We estimate the masses\nfor stars in these samples using the same $V-K_{S}$ to mass conversion\nthat we use for our own sample. The $V$ and $K_{S}$ magnitudes for\nthese field stars are taken from SIMBAD where available. We also\ncompare our sample to four open clusters with ages between $100 -\n200$~Myr, including M35 \\citep[$\\sim 180~{\\rm Myr}$;][]{Meibom.09},\nand three clusters observed by the MONITOR project: M50 \\citep[$\\sim\n 130~{\\rm Myr}$;][]{Irwin.09a}, NGC~2516 \\citep[$\\sim 150~{\\rm\n Myr}$;][]{Irwin.07}, and M34 \\citep[$\\sim 200~{\\rm\n Myr}$;][]{Irwin.06}. Finally, we compare our sample to the two\noldest open clusters with significant samples of rotation periods: M37\n\\citep[$\\sim 550~{\\rm Myr}$;][]{Hartman.09}, and the Hyades\n\\citep[$\\sim 625~{\\rm Myr}$;][]{Radick.87,Radick.95,Prosser.95}. For\nthe MONITOR clusters we use the mass estimates given in their papers,\nthese are based on the $I_{C}$-mass relation from the appropriate\n\\citet{Baraffe.98} isochrone for the age\/metallicity of each\ncluster. For M35, M37 and the Hyades we use the mass estimates derived\nfrom the $V,I_{C}$-mass relations from the appropriate YREC\n\\citep{An.07} isochrones for each cluster. For M35 we only include 214\nstars from the \\citet{Meibom.09} catalog that lie near the main\nsequence in $V$, $B$ and $I_{C}$, and we exclude any stars which have\na proper motion membership probability less than $80\\%$, or an RV\nmembership probability less than $80\\%$ as determined by\n\\citet{Meibom.09}. We expect that the stars in our sample, and in the\nother field star samples, have a range of ages, but on average will be\nolder than the stars in the open clusters.\n\nThe sample of stars with rotation periods presented here is\nsubstantially richer than is available for other surveys of field\nstars. This is especially the case for later spectral types. The\nMt.~Wilson and Vienna-KPNO surveys primarily targeted G and early K\nstars, so there is not much overlap in stellar mass between those\nsamples and our sample. The few Mt.~Wilson stars with estimated masses\n$\\la 0.8~M_{\\odot}$ do show an anti-correlation between mass and\nperiod, and have periods that are longer than the majority of stars in\nour sample. The Vienna-KPNO stars, on the other hand, have periods\nthat cluster around $\\sim 10~{\\rm days}$, which is closer to the mode\nof the period distribution for stars of comparable mass in our\nsample. 
Since the Vienna-KPNO stars were selected as showing\nspectroscopic evidence for chromospheric activity before their periods\nwere measured photometrically, this sample is presumably more biased\nto shorter period active stars than the Mt.~Wilson sample. Given the\ncorrelation between photometric amplitude and rotation period, we\nwould also expect our sample to be biased toward shorter period stars\nrelative to the Mt.~Wilson sample. When compared to the open cluster\nsamples we see clear evidence for evolution in the rotation periods of\nlow-mass stars. Stars in the younger clusters have shorter periods at\na given mass, on average, than stars in our sample. The discrepancy\nbecomes more apparent for stars with $M \\la 0.5~M_{\\odot}$ for which\nthe period and mass appear to be positively correlated in the young\ncluster samples while they are anti-correlated in our sample. Looking\nat the $\\sim 600~{\\rm Myr}$ clusters, again the periods of stars in\nour sample are longer at a given mass, on average, than the periods of\nthe cluster stars, however in this case the mode of the period\ndistribution for the cluster stars appears to be closer to the mode of\nthe period distribution for our sample than it is for the younger\nclusters. The lowest mass stars in the older clusters also do not show\nas significant a correlation between mass and period as do the lowest\nmass stars in the younger clusters. For stars with $M \\la\n0.3~M_{\\odot}$ the available field star and older open cluster samples\nare too sparse to draw any conclusions from when comparing to our\nsample. For the younger clusters, we note that the distribution of\nperiods for the lowest mass stars is even more strongly peaked toward\nshort periods than it is in our sample. This suggests that these stars\ndo lose angular momentum over time, despite not having a tachocline. A\nmore detailed comparison of these data to models of stellar angular\nmomentum evolution is beyond the scope of this paper.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.4}}\n\\plotone{f18.eps}\n\\caption{A comparison of the mass-period distribution for stars in our\n survey to the results from other surveys. See the text for a\n description of the data sources. For clarity we make the comparison\n separately for field stars, open clusters with $100~{\\rm Myr} < t <\n 500~{\\rm Myr}$ and for two open clusters with $t \\sim 600~{\\rm\n Myr}$. The rotation periods of stars in our sample at a given mass\n appear to be longer, on average, than the periods of stars in the\n open clusters, this is true across all mass ranges covered by our\n survey. The rotation periods from the Vienna-KPNO survey appear to\n be comparable, at a given mass, to the periods of stars in our\n sample, while the periods from the Mt.~Wilson survey appear to be\n generally longer than the periods from our survey. It is likely that\n our survey and the Vienna-KPNO surveys are biased toward\n high-activity, shorter period stars than the Mt.~Wilson survey is.}\n\\label{fig:PeriodMassComp}\n\\end{figure}\n\n\\subsubsection{Period-X-ray Relation}\n\nFigure~\\ref{fig:XrayFractionvsPeriod} shows the fraction of variables\nthat match to an X-ray source as a function of period. This fraction\nis constant at $\\sim 22\\%$ for periods less than $\\sim$4 days, for\nlonger periods the fraction that matches to an X-ray source decreases\nas $\\sim P^{-0.8}$. 
Following \\citet{Agueros.09} we calculate the\nratio of X-ray to J-band flux via\n\\begin{equation}\n\\log_{10} (f_{X} \/ f_{J}) = \\log_{10}f_{X} + 0.4J + 6.30\n\\end{equation}\nwhere 1 count s$^{-1}$ in the 0.1-2.4 keV energy range is assumed to\ncorrespond on average to $f_{X} = 10^{-11}$ erg cm$^{-2}$ s$^{-1}$. In\nfigure~\\ref{fig:logfxfjvsperiod} we plot the flux ratio as a function\nof rotation period for samples of variables separated by their $V -\nK_{S}$ color. The X-ray flux is anti-correlated with the rotation\nperiod for stars with $M \\ga 0.25 M_{\\odot}$, for stars with $M \\la\n0.25 M_{\\odot}$ there is still a hint of an anti-correlation, though\nit is of low statistical significance (the false alarm probability is\n$\\sim 20\\%$). This result is similar to what we found for the\nphotometric amplitude-period relation.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f19.eps}\n\\caption{The fraction of non-EB periodic variable stars that match to\n a ROSAT source as a function of rotation period. The errorbars for\n the 3 longest period bins show 1$\\sigma$ upper-limits. For stars\n with rotation periods less than $\\sim$4 days the fraction that\n matches to an X-ray source is constant at $\\sim 22\\%$, for longer\n periods the fraction decreases as $\\sim P^{-0.8}$.}\n\\label{fig:XrayFractionvsPeriod}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{1.0}}\n\\plotone{f20.eps}\n\\caption{The ratio of 0.1-2.4 keV X-ray flux to $J$-band near infrared\n flux vs. the rotation period for non-EB periodic variable stars that\n match to a ROSAT source. We divide the sample into the same 4 color\n bins used in fig.~\\ref{fig:PeriodvsAmp}. We also list the Spearman\n rank-order correlation coefficient and the statistical significance\n of the correlation for each sample. The X-ray flux is\n anti-correlated with the rotation period at high significance for\n stars with $V - K_{S} < 5.0$. For stars with $V - K_{S} > 5.0$ ($M\n \\la 0.25 M_{\\odot}$) there is still a hint of an anti-correlation,\n though it is of low statistical significance.}\n\\label{fig:logfxfjvsperiod}\n\\end{figure}\n\n\\subsection{Flares}\\label{sec:flares}\n\nIn Section~\\ref{sec:findflares} we conducted a search for\nlarge-amplitude long-duration flares, finding only \\ensuremath{64}{}\nevents in \\ensuremath{60}{} out of \\ensuremath{23,589}{} stars with EPD\nlight curves that were analyzed. There are likely to be many more\nflare events that have been observed but which cannot be easily\ndistinguished from non-Gaussian noise in an automated fashion. The\npresence of these flares may, however, be identified statistically by\nlooking for an excess of bright outliers relative to faint outliers in\nthe light curves that correlates with other observables, such as the\nrotation period. For each light curve we determine the excess fraction\nof bright $n$-$\\delta$ outliers via\n\\begin{equation}\nf_{n} = \\frac{N_{n,-} - N_{n,+}}{N_{\\rm tot}}\n\\label{eqn:excessbright}\n\\end{equation}\nwhere $N_{n,-}$ is the number of points in the light curve with $m -\nm_{0} < -n\\delta$, $N_{n,+}$ is the number of points with $m - m_{0} >\nn\\delta$, $m_{0}$ is the median of the light curve, $\\delta$ is the\nmedian value of $|m - m_{0}|$, and there are $N_{\\rm tot}$ points in\nthe light curve. We do this for $n = 3$, 5 and 10. 
Before calculating\n$f_{n}$ we high-pass filter the light curve by subtracting from\neach point the median of all points that are within 0.1 day of that\npoint.\n\nIt is a well known fact that the distribution of magnitudes in the\nlight curve of a faint star is skewed about the median toward faint\nvalues. As such we expect $f_{n}$ to be less than zero and to decrease\nfor fainter stars. Figure~\\ref{fig:RMSvsexcess} shows the excess\nfraction of bright outliers as a function of the light curve scatter\nfor the high-pass filtered light curves. We compare the observed\nrelation to the relation obtained for a simulated set of light curves\ngenerated using Poisson noise for the flux from the star, the sky and\nthe sky annulus. We simulate one light curve for each observed light\ncurve, using the observed sky fluxes and extinctions to determine the\nexpected sky flux and star flux for each point in each light curve. We\nfind that the median relation between the excess fraction of bright\noutliers and the light curve scatter is consistent with the expected\nrelation; however there is greater scatter about this relation than\nexpected from our idealized noise model. It is unclear whether other\nsources of noise, such as systematic variations due to blending,\nflat-fielding errors, pixelation effects, etc., will yield a skewed\ndistribution of magnitude values, and if so, in which direction the\nmagnitude distribution will be skewed. It is unlikely, however, that\nthis would be correlated with parameters such as the rotation\nperiod. Therefore, if a correlation is observed between rotation\nperiod and the excess fraction of bright outliers, then it is likely\nto be physical.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.8}}\n\\plotone{f21_small.eps}\n\\caption{The excess fraction of bright $5\\delta$ outliers vs. the\n light curve scatter $\\delta$, shown for the $\\sim$27,000 observed\n light curves (top) and for a simulated sample of light curves\n (bottom). The dark points with the error bars show the median values\n of $f_{5}$ for the observed light curves, the solid line shows a\n power-law fit to the median values for the observed light curve,\n while the dashed line shows a power-law fit to the median values for\n the simulations.}\n\\label{fig:RMSvsexcess}\n\\end{figure}\n\nIn figure~\\ref{fig:ExcessvsPeriod} we plot the median excess fraction\nof bright 5-$\\delta$ and 10-$\\delta$ outliers as a function of period\nfor stars in our variables catalog that are identified as reliable\ndetections for at least one search algorithm, and that are not flagged\nas probable blends or as having problematic amplitudes. In order of\npreference, we adopt the AoVHarm TFA, AoVHarm EPD, AoV TFA, AoV EPD,\nDACF TFA or DACF EPD period for the star. There appears to be a slight\nanti-correlation between period and excess fraction of bright\n5-$\\delta$ outliers such that stars with short periods ($< 10~{\\rm\n days})$ have slightly more bright 5-$\\delta$ outliers than faint\n5-$\\delta$ outliers compared to the median, while stars with long\nperiods ($> 10~{\\rm days}$) have slightly fewer compared to the\nmedian. A Spearman rank-order correlation test rejects the null\nhypothesis that the excess fraction is not anti-correlated with the\nperiod at $99.92\\%$ confidence. 
For 10-$\\delta$ outliers, the excess\nfraction of bright outliers also appears to be anti-correlated with\nthe period, though at lower significance ($99.5\\%$).\n\nIn figure~\\ref{fig:ExcessvsPeriod} we also compare the distribution of\nperiods for stars with detected large-amplitude long-duration flares\nto the distribution for stars without such flare detections. A total\nof \\ensuremath{31}{} of the\n\\ensuremath{60}{} flare stars have a robust period determination and\nare not flagged as a probable blend. The period detection frequency of\n$\\sim 50\\%$ for flare stars is significantly higher than that for all\nother stars ($\\la 7\\%$). This result is expected if stellar flaring is\nassociated with the large starspots that give rise to continuous\nphotometric variations. As seen in figure~\\ref{fig:ExcessvsPeriod},\nthe distribution of periods for flare stars is concentrated toward\nshorter periods than the distribution for non-flare stars. The longest\nperiod found for a flare star is 18.2 days whereas 31\\% of the\nnon-flare stars with period determinations have periods greater than\n18.2 days. Conducting a K-S test we find that the probability that the\ntwo samples are drawn from the same distribution is less than\n$10^{-6}$.\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.4}}\n\\plotone{f22.eps}\n\\caption{Top: The median excess fraction of bright $5\\delta$ outliers\n and $10\\delta$ outliers vs. period for stars in our variables\n catalog that are identified as reliable detections for at least one\n search algorithm, and that are not flagged as probable blends or as\n having problematic amplitudes. In computing the excess fraction for\n each light curve we subtract the median excess fraction for the\n light curve scatter (fig.~\\ref{fig:RMSvsexcess}). Center: The median\n rank of the excess fraction of bright $5\\delta$ and $10\\delta$\n outliers vs. the rank period. The anti-correlation between period\n and the excess fraction of bright outliers is more apparent when the\n ranks, rather than the values, are plotted against each\n other. Bottom: Comparison between the period distributions for stars\n with detected large-amplitude long-duration flares, and stars\n without such a flare detected. Note the absence of long-period stars\n with flares. For clarity we have multiplied the flare star period\n distribution by a factor of 10.}\n\\label{fig:ExcessvsPeriod}\n\\end{figure}\n\n\\begin{figure}[!ht]\n\\ifthenelse{\\boolean{emulateapj}}{\\epsscale{1.2}}{\\epsscale{0.6}}\n\\plotone{f23.eps}\n\\caption{Top: The distribution of $V-K_{S}$ colors for the \\ensuremath{60}{} stars\n with high amplitude flares detected compared to the distribution of\n $V-K_{S}$ colors for all \\ensuremath{27,560}{} stars in the sample. The\n distribution for flares stars is biased toward redder $V-K_{S}$\n colors relative to the distribution for all stars. Note that we use\n a 5 times higher binning resolution for the full sample. Bottom:\n The fraction of stars with a high amplitude flare detection is\n plotted against $V-K_{S}$. For stars with $V-K_{S} < 4.0$ less than\n $\\sim 0.1\\%$ of stars had a flare detected, for stars with $V-K_{S}\n > 4.5$ the fraction is $\\ga 1\\%$.}\n\\label{fig:FlareColor}\n\\end{figure}\n\n\\section{Conclusion}\\label{sec:conclusion}\n\nIn this paper we have presented the results of a variability survey\nconducted with HATNet of field K and M dwarfs selected by color and\nproper motion. 
We used a variety of variability selection techniques\nto identify periodic and quasi-periodic variables, and have also\nconducted a search for large amplitude, long-duration flare events. We\nconducted Monte Carlo simulations of light curves with realistic noise\nproperties to aid in setting the selection thresholds. Out of a total\nsample of \\ensuremath{27,560}{} stars we selected \\ensuremath{3496}{} that\nshow potential variability, including \\ensuremath{95}{} that show eclipses in\ntheir light curves, and \\ensuremath{60}{} that show flares. We\ninspected all automatically selected light curves by eye, and flagged\n\\ensuremath{2321}{} stars (including those with flares) as being\nsecure variability detections. Because the HATNet images have poor\nspatial resolution, variability blending is a significant problem. We\ntherefore implemented an automated routine to classify selected\nnon-flare variables as probable blends, potential blends, unlikely\nblends, unblended or as having problematic amplitudes. Altogether we\nfound \\ensuremath{1928}{} variables that are classified as\nsecure detections and are not classified as probable blends or as\nhaving problematic amplitudes (cases where the best-fit Fourier series\nto the light curve has a false alarm probability greater than\n10\\%). This includes \\ensuremath{79}{} stars that show eclipses in\ntheir light curves. We identified \\ensuremath{64}{} flare events in\n\\ensuremath{60}{} stars; \\ensuremath{38}{} of these stars are\nalso selected as potential periodic or quasi-periodic variables\n(\\ensuremath{37}{} are considered reliable detections, of\nwhich \\ensuremath{36}{} have reliable period\ndeterminations and \\ensuremath{31}{} have\nreliable period determinations and are not flagged as probable blends\nor as having problematic amplitude determinations). We matched the\nsample of potential variables to other catalogs, and found that\n\\ensuremath{77}{} lie within $2\\arcmin$ of a previously identified variable,\nwhile \\ensuremath{3419}{} do not. Including only flare stars and variables\nthat are classified as secure detections and are not classified as\nprobable blends or as having problematic amplitudes,\n\\ensuremath{43}{} (including\n\\ensuremath{7}{} EBs) lie within $2\\arcmin$ of a previously\nidentified variable, so that \\ensuremath{1885}{}\nare new identifications.\n\nOne of the eclipsing binaries that we identified is the previously\nknown SB2 system 1RXS~J154727.5+450803. By combining the published RV\ncurves for the component stars with the HATNet $I$~band light curve,\nwe obtained initial estimates for the masses and radii of the\ncomponent stars (Tab.~\\ref{tab:RXJ1547param}). The system is one of\nonly a handful of known double-lined eclipsing binaries with component\nmasses less than $0.3~M_{\\odot}$. While we caution that the errors on\nthe component radii are likely to be underestimated due to systematic\nerrors that have not been considered in this preliminary analysis, it\nis interesting that the radii do appear to be larger than predicted if\nthe system is older than $\\sim 200~{\\rm Myr}$. With a magnitude $V\n\\sim 13.4$, this system is only slightly fainter than the well-studied\nbinary CM Dra ($V \\sim 12.90$), which has been the anchor of the\nempirical mass-radius relation for very late M\ndwarfs. 1RXS~J154727.5+450803 is thus a promising target for more\ndetailed follow-up to obtain high precision measurements of the\nfundamental parameters of the component stars. 
With additional\nfollow-up, the large sample of \\ensuremath{79}{} probable late-type eclipsing\nbinaries presented in this paper should prove fruitful for further\ninvestigations of the fundamental parameters of low-mass stars.\n\nThe majority of the variable stars that we have identified are likely\nto be BY~Dra type variables, with the measured period corresponding to\nthe rotation period of the star. This is the largest sample of\nrotation periods presented to date for late-type field stars. We\ndiscussed a number of broad trends seen in the data, including an\nanti-correlation between the rotation period and the photometric\namplitude of variability, an exponential relation between $V-K_{S}$\ncolor and the fraction of stars that are variable, a positive\ncorrelation between period and the $V-K_{S}$ color for stars with $V -\nK_{S} \\la 4.5$, a relative absence of stars with $P \\ga 10.0~{\\rm\n days}$ and $V - K_{S} \\ga 4.5$, and an anti-correlation between the\nrotation period and the ratio of X-ray to $J$~band flux. The\ncorrelations between period and activity indicators including the\namplitude of photometric variability and the X-ray emission are\nconsistent with the well-known rotation-age-activity-mass relations\nfor F, G, K and early M dwarfs. The data presented here may help in\nfurther refining these relations. Our data hint at a change in the\nrotation-activity connection for the least massive stars in the sample\n($M \\la 0.25~M_{\\odot}$). The anti-correlation between period and\namplitude appears to break down for these stars, and similarly the\nperiod-X-ray anti-correlation is less significant for these stars than\nfor more massive stars. This is potentially at odds with previous\nstudies which used H$\\alpha$ to trace activity and $v \\sin i$ to infer\nrotation period, and found that the period-activity anti-correlation\nextends to very late-type M dwarfs. Comparing our sample to other\nfield and open cluster samples, we find that the rotation periods of\nstars in our sample are generally longer than the periods found in\nopen clusters with $t \\la 620~{\\rm Myr}$, which implies that K and M\ndwarf stars continue to lose angular momentum past the age of the\nHyades. This appears to be true as well for stars with $M \\la\n0.25~M_{\\odot}$, though these stars generally have shorter periods\nthan more massive stars in our sample.\n\nFinally, we have conducted a search for flare events in our light\ncurves, identifying \\ensuremath{64}{} events in \\ensuremath{60}{}\nstars. Due to the difficulty of distinguishing a flare from bad\nphotometry in an automated way, there are likely to be many flare\nevents in the light curves that we do not identify. We therefore do\nnot attempt to draw conclusions about the total occurrence rate of\nflaring \\citep[for a recent determination of this frequency using data\n from the Sloan Digital Sky Survey, see][]{Kowalski.09}. We find that\nthe distribution of $V-K_{S}$ colors for flare stars is biased toward\nred colors, implying that the flare frequency increases with\ndecreasing stellar mass, which has been known for a long time\n(\\citealp{Ambartsumyan.70}, see also \\citealp{Kowalski.09}). We find\nthat roughly half the flare stars are detected as periodic variables,\nwhich is a significantly higher fraction than for the full sample of\nstars. 
This is in line with the expectation that stellar flaring is\nassociated with the presence of significant starspots, and is\nconsistent with the finding by \\citet{Kowalski.09} that the flaring\nfrequency of active M dwarfs showing H$\\alpha$ emission is $\\sim 30$\ntimes higher than the flaring frequency of inactive M dwarfs. We also\nfind that the distribution of periods for flare stars is biased toward\nshorter periods, again as expected from the rotation-activity\nconnection. Finally we attempt to statistically identify flares by\nsearching for excess bright outliers relative to faint outliers in the\nlight curves. This excess appears to be anti-correlated with rotation\nperiod and provides further evidence that flares are more common among\nrapidly rotating K and M dwarfs than among slower rotators.\n\n\\acknowledgements\n\nHATNet operations have been funded by NASA grants NNG04GN74G,\nNNX08AF23G and SAO IR\\&D grants. G.\\'{A}.B. acknowledges support from\nthe Postdoctoral Fellowship of the NSF Astronomy and Astrophysics\nProgram (AST-0702843). T.M. acknowledges support from the ISRAEL\nSCIENCE FOUNDATION (grant No. 655\/07). The Digitized Sky Surveys were\nproduced at the Space Telescope Science Institute under\nU.S. Government grant NAG W-2166. The images of these surveys are\nbased on photographic data obtained using the Oschin Schmidt Telescope\non Palomar Mountain and the UK Schmidt Telescope. The plates were\nprocessed into the present compressed digital form with the permission\nof these institutions. This research has made use of data obtained\nfrom or software provided by the US National Virtual Observatory,\nwhich is sponsored by the National Science Foundation. This research\nhas made use of the SIMBAD database, operated at CDS, Strasbourg,\nFrance.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\subsubsection*{\\bibname}}\n\n\n\\begin{document}\n\\runningauthor{Dina Mardaoui and Damien Garreau}\n\n\\twocolumn[\n\n\\aistatstitle{An Analysis of LIME for Text Data}\n\n\\aistatsauthor{Dina Mardaoui \\And Damien Garreau}\n\\aistatsaddress{Polytech Nice, France \\And Universit\\'e C\\^ote d'Azur, Inria, CNRS, LJAD, France}\n]\n\n\\begin{abstract}\nText data are increasingly handled in an automated fashion by machine learning algorithms. \nBut the models handling these data are not always well-understood due to their complexity and are more and more often referred to as ``black-boxes.''\nInterpretability methods aim to explain how these models operate. \nAmong them, LIME has become one of the most popular in recent years. \nHowever, it comes without theoretical guarantees: even for simple models, we are not sure that LIME behaves accurately. \nIn this paper, we provide a first theoretical analysis of LIME for text data. \nAs a consequence of our theoretical findings, we show that LIME indeed provides meaningful explanations for simple models, namely decision trees and linear models. \n\\end{abstract}\n\n\\section{Introduction}\n\nNatural language processing has progressed at an accelerated pace in the last decade. \nThis time period saw the second coming of artificial neural networks, embodied by the apparition of recurrent neural networks (RNNs) and more particularly long short-term memory networks (LSTMs). \nThese new architectures, in conjunction with large, publicly available datasets and efficient optimization techniques, have allowed computers to compete with and sometime even beat humans on specific tasks. 
\n\nMore recently, the paradigm has shifted from recurrent neural networks to \\emph{transformer networks} \\citep{vaswani_et_al_2017}. \nInstead of training models specifically for a task, large \\emph{language models} are trained on supersized datasets. \nFor instance, \\texttt{Webtext2} contains the text data associated with $45$ million links \\citep{radford_et_al_2019}. \nThe growth in complexity of these models seems to know no limit, especially with regard to their number of parameters. \nFor instance, BERT \\citep{devlin_et_al_2018} has roughly $340$ million parameters, a meager number compared to more recent models such as GPT-2 \\citep[$1.5$ billion]{radford_et_al_2019} and GPT-3 \\citep[$175$ billion]{brown_et_al_2020}.\n\nFaced with such giants, it is becoming more and more challenging to understand how particular predictions are made. \nYet, \\emph{interpretability} of these algorithms is an urgent need. \nThis is especially true in some applications such as healthcare, where natural language processing is used for instance to obtain summaries of patient records \\citep{spyns_1996}. \nIn such cases, we do not want to deploy in the wild an algorithm making near perfect predictions on the test set but for the wrong reasons: the consequences could be tragic. \n\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.21]{general_explanation.pdf}\n \\vspace{-0.3in}\n \\caption{\\label{fig:general-explanation}Explaining the prediction of a random forest classifier on a Yelp review. \\emph{Left panel:} the document to explain. The words deemed important for the prediction are highlighted, in orange (positive influence) and blue (negative influence). \\emph{Right panel:} values of the largest $6$ interpretable coefficients, ranked by absolute value. }\n\\end{figure}\n\nIn this context, a flourishing literature proposing \\emph{interpretability methods} emerged. \nWe refer to the survey papers of \\citet{Guidotti_et_al_2018} and \\citet{Adadi_Berrada_2018} for an overview, and to \\citet{danilevsky_et_al_2020} for a focus on natural language processing. \nWith the notable exception of SHAP \\citep{Lundberg_Lee_2017}, these methods do not come with any guarantees. \nNamely, given a simple model already interpretable to some extent, we cannot be sure that these methods provide meaningful explanations. \nFor instance, explaining a model that is based on the presence of a given word should return an explanation that gives high weight to this word. \nWithout such guarantees, using these methods on the tremendously more complex models aforementioned seems like a risky bet. \n\nIn this paper, we focus on one of the most popular interpretability methods: \\emph{Local Interpretable Model-agnostic Explanations} \\citep[LIME]{ribeiro_et_al_2016}, and more precisely its implementation for text data. 
\nLIME's process to explain the prediction of a model~$f$ for an example~$\\xi$ can be summarized as follows: \n\\begin{enumerate}[(i).,itemsep=1pt,topsep=0pt]\n \\item from a corpus of documents $\\corp$, create a TF-IDF transformer $\\Normtfidf$ embedding documents into $\\Reals^D$;\n \\item create $n$ perturbed documents $x_1,\\ldots,x_n$ by deleting words at random in $\\xi$; \n \\item for each new example, get the prediction of the model $y_i\\defeq f(\\normtfidf{x_i})$;\n \\item train a (weighted) linear surrogate model with inputs the absence \/ presence of words and responses the $y_i$s.\n\\end{enumerate}\nThe user is then given the coefficients of the surrogate model (or rather a subset of the coefficients, corresponding to the largest ones) as depicted in Figure~\\ref{fig:general-explanation}. \nWe call these coefficients the \\emph{interpretable coefficients}. \n\nThe model-agnostic approach of LIME has contributed greatly to its popularity: one does not need to know the precise architecture of~$f$ in order to get explanations, it is sufficient to be able to query~$f$ a large number of times. \nThe explanations provided by the user are also very intuitive, making it easy to check that a model is behaving in the appropriate way (or not!) on a particular example. \n\n\\paragraph{Contributions. }\nIn this paper, we present the first theoretical analysis of LIME for text data. \nIn detail,\n\n\\begin{itemize}[itemsep=1pt,topsep=0pt]\n \\item we show that, when the number of perturbed samples is large, \\textbf{the interpretable coefficients concentrate with high probability around a fixed vector $\\beta$} that depends only on the model, the example to explain, and hyperparameters of the method;\n \\item we provide an \\textbf{explicit expression of $\\beta$}, from which we gain interesting insights on LIME. In particular, \\textbf{the explanations provided are linear in $f$};\n \\item for simple decision trees, we go further into the computations. We show that \\textbf{LIME provably provides meaningful explanations}, giving large coefficients to words that are pivotal for the prediction;\n \\item for linear models, we come to the same conclusion by showing that the interpretable coefficient associate to a given word is approximately equal to \\textbf{the product of the coefficient in the linear model and the TF-IDF transform of the word} in the example. \n\\end{itemize}\n\nWe want to emphasize that all our results apply to the default implementation of LIME for text data\\footnote{\\url{https:\/\/github.com\/marcotcr\/lime}} (as of October 12, 2020), with the only caveat that we do not consider any feature selection procedure in our analysis. \nAll our theoretical claims are supported by numerical experiments, the code thereof can be found at \\url{https:\/\/github.com\/dmardaoui\/lime_text_theory}.\n\n\\paragraph{Related work. }\nThe closest related work to the present paper is \\citet{garreau_luxburg_2020_aistats}, in which the authors provided a theoretical analysis of a variant of LIME in the case of tabular data (that is, unstructured data belonging to $\\Reals^N$) when $f$ is linear. \nThis line of work was later extended by the same authors \\citep{garreau_luxburg_2020_arxiv}, this time in a setting very close to the default implementation and for other classes of models (in particular partition-based classifiers such as CART trees and kernel regressors built on the Gaussian kernel). 
\nWhile uncovering a number of good properties of LIME, these analyses also exposed some weaknesses of LIME, notably cancellation of interpretable features for some choices of hyperparameters. \n\nThe present work is quite similar in spirit, however we are concerned with \\emph{text data}. \nThe LIME algorithm operates quite differently in this case. \nIn particular, the input data goes first through a TF-IDF transform (a non-linear transformation) and there is no discretization step since interpretable features are readily available (the words of the document). \nTherefore both the analysis and our conclusions are quite different, as it will become clear in the rest of the paper. \n\n\\section{LIME for text data}\n\\label{sec:lime}\n\nIn this section, we lay out the general operation of LIME for text data and introduce our notation in the process. \nFrom now on, we consider a model~$f$ and look at its prediction for a fixed example~$\\xi$ belonging to a corpus~$\\corp$ of size~$N$, which is built on a dictionary~$\\dg$ of size~$D$. \nWe let $\\norm{\\cdot}$ denote the Euclidean norm, and $\\sphere{D-1}$ the unit sphere of $\\Reals^D$. \n\nBefore getting started, let us note that LIME is usually used in the \\emph{classification} setting: $f$ takes values in $\\{0,1\\}$ (say), and $f(\\normtfidf{\\xi})$ represents the class attributed to~$\\xi$ by~$f$. \nHowever, behind the scenes, LIME requires~$f$ to be a real-valued function. \nIn the case of classification, this function is the probability of belonging to a certain class according to the model. \nIn other words, the \\emph{regression} version of LIME is used, and this is the setting that we consider in this paper. \nWe now detail each step of the algorithm. \n\n\n\\subsection{TF-IDF transform}\n\\label{sec:tfidf}\n\nLIME works with a vector representation of the documents. \nThe TF-IDF transform \\citep{luhn_1957,jones_1972} is a popular way to obtain such a representation. \nThe idea underlying the TF-IDF is quite simple: to any document, associate a vector of size~$D$. \nIf we set $\\word_1,\\ldots,\\word_D$ to be our dictionary, the $j$th component of this vector represents the importance of word~$\\word_j$. \nIt is given by the product of two terms: the term frequency (TF, how frequent the word is in the document), and the inverse term frequency (IDF, how rare the word is in our corpus). \nIntuitively, the TF-IDF of a document has a high value for a given word if this word is frequent in the document and, at the same time, not so frequent in the corpus. \nIn this way, common words such as ``the'' do not receive high weight. \n\nFormally, let us fix $\\delta \\in\\corp$. \nFor each word $\\word_j\\in\\dg$, we set $m_j$ the number of times $\\word_j$ appears in $\\delta$. \nWe also set $v_j\\defeq \\log \\frac{N+1}{N_j+1}+1$, where~$N_j$ is the number of documents in~$\\corp$ containing~$\\word_j$. \nWhen presented with~$\\corp$, we can pre-compute all the $v_j$s and at run time we only need to count the number of occurrences of~$\\word_j$ in~$\\delta$. 
\nWe can now define the normalized TF-IDF:\n\n\\begin{definition}[Normalized TF-IDF]\n\\label{def:tf-idf}\nWe define the \\emph{normalized TF-IDF} of $\\delta$ as the vector $\\normtfidf{\\delta}\\in\\Reals^D$ defined coordinate-wise by \n\\begin{equation}\n\\label{eq:def-norm-tf-idf}\n\\forall 1\\leq j\\leq D,\\quad \\normtfidf{\\delta}_j \\defeq \\frac{m_j v_j}{\\sqrt{\\sum_{j=1}^D m_j^2v_j^2}}\n\\, .\n\\end{equation}\nIn particular, $\\norm{\\phi(\\delta)}=1$, where $\\norm{\\cdot}$ is the Euclidean norm. \n\\end{definition}\n\nNote that there are many different ways to define the TF and IDF terms, as well as normalization choices. \nWe restrict ourselves to the version used in the default implementation of LIME, with the understanding that different implementation choices would not change drastically our analysis. \nFor instance, normalizing by the $\\ell_1$ norm instead of the $\\ell_2$ norm would lead to slightly different computations in Proposition~\\ref{prop:beta-computation-linear-main}. \n\nFinally, note that this transformation step does not take place for tabular data, since the data already belong to $\\Reals^D$ in this case. \n\n\\subsection{Sampling}\n\\label{sec:sampling}\n\nLet us now fix a given document $\\xi$ and describe the sampling procedure of LIME. \nEssentially, the idea is to sample new documents similar to $\\xi$ in order to see how~$f$ varies in a neighborhood of $\\xi$. \n\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.18]{sampling.pdf}\n \\caption{\\label{fig:sampling}The sampling scheme of LIME for text data. To the left, the document to explain $\\xi$, which contains $d=15$ distinct words. The new samples $x_1,\\ldots,x_n$ are obtained by removing $s_i$ random words from $\\xi$ (in blue). In the $n$th sample, one word is removed, yielding two deletions in the original document.}\n\\end{figure}\n\nMore precisely, let us denote by $d$ the number of distinct words in $\\xi$ and set $\\dl\\defeq \\{\\word_1,\\ldots,\\word_d\\}$ the \\emph{local dictionary}. \nFor each new sample, LIME first draws uniformly at random in $\\{1,\\ldots,d\\}$ a number~$s_i$ of words to remove from~$\\xi$. \nSubsequently, a subset $S_i\\subseteq \\{1,\\ldots,d\\}$ of size~$s_i$ is drawn uniformly at random: all the words with indices contained in $S_i$ are \\emph{removed} from~$\\xi$. \nNote that the multiplicity of removals is independent from $s_i$: if the word ``good'' appears $10$ times in $\\xi$ and its index belongs to $S$, then all the instances of ``good'' are removed from $\\xi$ (see Figure~\\ref{fig:sampling}). \nThis process is repeated~$n$ times, yielding~$n$ new samples $x_1,\\ldots,x_n$. \nWith these new documents come~$n$ new binary vectors $z_1,\\ldots,z_n\\in\\{0,1\\}^d$, marking the absence or presence of a word in $x_i$. \nNamely, $z_{i,j}=1$ if $\\word_j$ belongs to $x_i$ and $0$ otherwise. \nWe call the $z_i$s the \\emph{interpretable features}. \nNote that we will write $\\Indic\\defeq (1,\\ldots,1)^\\top$ for the binary feature associated to~$\\xi$: all the words are present. \n\nAlready we see a difficulty appearing in our analysis: when removing words from $\\xi$ at random, $\\normtfidf{\\xi}$ is modified in a non-trivial manner. \nIn particular, the denominator of Eq.~\\eqref{eq:def-norm-tf-idf} can change drastically if many words are removed. \n\nIn the case of tabular data, the interpretable features are obtained in a completely different fashion, by discretizing the dataset. 
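To fix ideas, the following minimal Python sketch mimics Definition~\\ref{def:tf-idf} and the sampling scheme of Section~\\ref{sec:sampling} on a toy corpus (the helper names, the corpus, and the whitespace tokenization are ours and are only meant as an illustration, not as LIME's actual implementation):

\\begin{verbatim}
import numpy as np

def normalized_tfidf(doc, corpus, dictionary):
    # phi(doc): m_j * v_j for every dictionary word, then l2-normalized
    words = doc.split()
    N = len(corpus)
    phi = np.zeros(len(dictionary))
    for j, w in enumerate(dictionary):
        m_j = words.count(w)                       # term frequency
        N_j = sum(w in c.split() for c in corpus)  # documents containing w
        v_j = np.log((N + 1) / (N_j + 1)) + 1      # inverse document frequency
        phi[j] = m_j * v_j
    return phi / np.linalg.norm(phi)

def perturb(xi, rng):
    # one perturbed sample: remove all occurrences of s random distinct words
    local_dict = sorted(set(xi.split()))
    d = len(local_dict)
    s = rng.integers(1, d + 1)                     # s uniform in {1,...,d}
    removed = set(rng.choice(d, size=s, replace=False).tolist())
    z = np.array([int(j not in removed) for j in range(d)])
    x = " ".join(w for w in xi.split()
                 if local_dict.index(w) not in removed)
    return x, z

rng = np.random.default_rng(0)
corpus = ["the food was good", "the service was not good",
          "good food and a good mood"]
xi = corpus[2]
x, z = perturb(xi, rng)
print(x, z, normalized_tfidf(xi, corpus, sorted(set(" ".join(corpus).split()))))
\\end{verbatim}

Repeating the call to the sampling routine $n$ times yields the perturbed documents $x_1,\\ldots,x_n$ and their interpretable features $z_1,\\ldots,z_n$.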
\n\n\\subsection{Weights}\n\nLet us start by defining the \\emph{cosine distance}: \n\n\\begin{definition}[Cosine distance]\nFor any $u,v\\in\\Reals^d$, we define\n\\begin{equation}\n\\label{eq:def-cos-distance}\n\\distcos{u}{v} \\defeq 1 - \\frac{u\\cdot v}{\\norm{u}\\cdot \\norm{v}}\n\\, .\n\\end{equation}\n\\end{definition}\n\nIntuitively, the cosine distance between~$u$ and~$v$ is small if the \\emph{angle} between~$u$ and~$v$ is small. \nEach new sample~$x_i$ receives a positive weight~$\\pi_i$, defined~by\n\\begin{equation}\n\\label{eq:def-weights}\n\\pi_i \\defeq \\exp{\\frac{-\\distcos{\\Indic}{z_i}^2}{2\\nu^2}}\n\\, ,\n\\end{equation}\nwhere $\\nu$ is a positive \\emph{bandwidth parameter}. \nThe intuition behind these weights is that $x_i$ can be far away from $\\xi$ if many words are removed (in the most extreme case, $s=d$, all the words from $\\xi$ are removed). \nIn that case, $z_i$ has mostly $0$ components, and is far away from $\\Indic$.\n\nNote that the cosine distance in Eq.~\\eqref{eq:def-weights} is actually multiplied by $100$ in the current implementation of LIME. \nThus there is the following correspondence between our notation and the code convention: $\\nu_{\\text{LIME}}=100\\nu$. \nFor instance, the default choice of bandwidth, $\\nu_{\\text{LIME}}=25$, corresponds to $\\nu=0.25$. \n\nWe now make the following important remark: \\textbf{the weights only depend on the number of deletions.} \nIndeed, conditionally on $S_i$ having exactly $s$ elements, we have $z_i\\cdot \\Indic = d-s$ and $\\norm{z_i}=\\sqrt{d-s}$. \nSince $\\norm{\\Indic}=\\sqrt{d}$, using Eq.~\\eqref{eq:def-weights}, we deduce that $\\pi_i=\\psi(s\/d)$, where we defined the mapping \n\\begin{align}\n\\psi\\colon [0,1]& \\longrightarrow \\Reals \\label{eq:def-psi-main} \\\\\nt &\\longmapsto \\exp{\\frac{-(1-\\sqrt{1-t})^2}{2\\nu^2}} \\notag \n\\, .\n\\end{align}\nWe can see in Figure~\\ref{fig:psi} how the weights are given to observations: when $s$ is small, $\\psi(s\/d)\\approx 1$, whereas when $s\\approx d$, $\\psi(s\/d)$ is a small quantity depending on $\\nu$. \nNote that the complicated dependency of the weights on $s$ brings additional difficulty in our analysis, and that we will sometimes restrict ourselves to the large bandwidth regime (that is, $\\nu\\to +\\infty$). \nIn that case, $\\pi_i \\approx 1$ for any $1\\leq i\\leq n$. \n\nThe Euclidean distance between the interpretable features is used instead of the cosine distance in the tabular data version of the algorithm. \n\n\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.18]{psi_sd.pdf}\n \\caption{\\label{fig:psi}Weights as a function of the number of deletions for different bandwidth parameters ($\\nu=0.25$ is default). LIME gives more weight to documents with few deletions ($s\/d\\approx 0$ means that $\\psi(s\/d)\\approx 1$ regardless of the bandwidth).}\n\\end{figure}\n\n\n\\subsection{Surrogate model}\n\nThe next step is to train a surrogate model on the interpretable features $z_1,\\ldots,z_n$, trying to approximate the responses $y_i\\defeq f(\\normtfidf{x_i})$. \nIn the default implementation of LIME, this model is linear and is obtained by weighted ridge regression \\citep{hoerl_1970}. \nFormally, LIME outputs \n\\begin{equation}\n\\label{eq:main-problem}\n\\betahat_n^{\\lambda} \\in\\argmin{\\beta\\in\\Reals^{d+1}} \\biggl\\{ \\sum_{i=1}^n \\pi_i(y_i - \\beta^\\top z_i)^2 + \\lambda \\norm{\\beta}^2\\biggr\\}\n\\, ,\n\\end{equation}\nwhere $\\lambda>0$ is a regularization parameter. 
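To illustrate the last two steps, here is a minimal Python sketch (with hypothetical function names) that computes the weights through the mapping $\\psi$ of Eq.~\\eqref{eq:def-psi-main} and solves the un-regularized version of Eq.~\\eqref{eq:main-problem}, anticipating the remark below that $\\lambda$ can be taken equal to $0$ in our setting:

\\begin{verbatim}
import numpy as np

def psi(t, nu=0.25):
    # weight as a function of the fraction of deleted words
    return np.exp(-(1.0 - np.sqrt(1.0 - t)) ** 2 / (2.0 * nu ** 2))

def surrogate_coefficients(Z, y, nu=0.25):
    # weighted least squares, i.e., lambda = 0 in the ridge problem
    n, d = Z.shape
    s = d - Z.sum(axis=1)                 # number of deleted words per sample
    w = np.sqrt(psi(s / d, nu))           # weights only depend on s
    X = np.hstack([np.ones((n, 1)), Z])   # prepend the intercept column
    beta, *_ = np.linalg.lstsq(w[:, None] * X, w * y, rcond=None)
    return beta                           # (intercept, beta_1, ..., beta_d)

rng = np.random.default_rng(0)
Z = rng.integers(0, 2, size=(5000, 4))    # toy binary features (not LIME's sampling)
y = 2.0 * Z[:, 0] + 0.5                   # toy model relying on the first word only
print(surrogate_coefficients(Z, y))       # approximately [0.5, 2.0, 0.0, 0.0, 0.0]
\\end{verbatim}

On this toy example the surrogate recovers the intercept and the coefficient of the only word the model relies on, and assigns (near) zero weight to the other words.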
\nWe call the components of $\\betahat_n^\\lambda$ the \\emph{interpretable coefficients}; by convention, the $0$th coordinate is the intercept. \nNote that some feature selection mechanism is often used in practice, limiting the number of interpretable features in output from LIME. \nWe do not consider such a mechanism in our analysis. \n\nWe now make a fundamental observation. \nIn its default implementation, LIME uses the default setting of \\texttt{sklearn} for the regularization parameter, that is, $\\lambda=1$. \nHence the first term in Eq.~\\eqref{eq:main-problem} is roughly of order $n$ and the second term of order $d$. \nSince we experiment in the large~$n$ regime ($n=5000$ is default) and with documents that have a few dozen distinct words, $n\\gg d$. \nTo put it plainly, we can consider that $\\lambda=0$ in our analysis and still recover meaningful results. \nWe will denote by $\\betahat_n$ the solution of Eq.~\\eqref{eq:main-problem} with $\\lambda=0$, that is, ordinary least-squares. \n\nWe conclude this presentation of LIME by noting that the main free parameter of the method is the bandwidth $\\nu$. \nAs far as we know, there is no principled way of choosing $\\nu$. \nThe default choice, $\\nu=0.25$, does not seem satisfactory in many respects. \nIn particular, other choices of bandwidth can lead to different values for the interpretable coefficients. \nIn the most extreme cases, they can even change sign, see Figure~\\ref{fig:cancellation}. \nThis phenomenon was also noted for tabular data in \\citet{garreau_luxburg_2020_arxiv}. \n\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.2]{cancellation.pdf}\n \\vspace{-0.1in}\n \\caption{\\label{fig:cancellation}In this experiment, we plot the interpretable coefficient associated to the word ``came\" as a function of the bandwidth parameter. The red vertical line marks the default bandwidth choice ($\\nu=0.25$). We can see that LIME gives a negative influence for $\\nu \\approx 0.1$ and a positive one for $\\nu > 0.2$. }\n\\end{figure}\n\n\\section{Main results}\n\nWithout further ado, let us present our main result. \nFor clarity's sake, we split it into two parts: Section~\\ref{sec:main:concentration} contains the concentration of $\\betahat_n$ around $\\beta^f$ whereas Section~\\ref{sec:main:computation} presents the exact expression of $\\beta^f$. \n\n\\subsection{Concentration of $\\betahat_n$}\n\\label{sec:main:concentration}\n\nWhen the number of new samples~$n$ is large, we expect LIME to stabilize and the explanations not to vary too much. \nThe next result supports this intuition. \n\n\\begin{theorem}[Concentration of $\\betahat_n$]\n\\label{th:concentration-of-betahat}\nSuppose that the model $f$ is bounded by a positive constant $M$ on $\\sphere{D-1}$. \nRecall that we let $d$ denote the number of distinct words of $\\xi$, the example to explain. \nLet $0<\\epsilon < M$ and $\\eta\\in (0,1)$. \nThen, there exists a vector $\\beta^f\\in\\Reals^d$ such that, for every \n\\[\nn\\gtrsim \\max \\left\\{M^2d^{9} \\exps{\\frac{10}{\\nu^2}}, Md^5\\exps{\\frac{5}{\\nu^2}}\\right\\} \\frac{\\log \\frac{8d}{\\eta}}{\\epsilon^2}\n\\, ,\n\\]\nwe have $\\proba{\\smallnorm{\\betahat_n - \\beta^f} \\geq \\epsilon} \\leq \\eta$. \n\\end{theorem}\n\nWe refer to the supplementary material for a complete statement (we omitted numerical constants here for clarity) and a detailed proof. 
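\n\nTo give a feeling for the stabilization described by Theorem~\\ref{th:concentration-of-betahat}, one can simulate the whole pipeline for a toy example and increasing values of $n$. The following self-contained Python sketch does so; the term frequencies, IDF terms, and the model are arbitrary choices made for illustration only, and we take $\\lambda=0$ as discussed above:\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng(0)\n\nd, nu = 15, 0.25\ncounts = rng.integers(1, 5, size=d).astype(float)   # toy term frequencies m_j\nidf = rng.random(d) + 1.0                            # toy inverse document frequencies v_j\n\ndef tfidf(z_row):\n    """Normalized TF-IDF of a perturbed document encoded by its binary features."""\n    v = counts * idf * z_row\n    norm = np.linalg.norm(v)\n    return v / norm if norm > 0 else v\n\ndef f(phi):\n    return phi[0]                                    # toy model: TF-IDF of the first word\n\ndef lime_beta(n):\n    z = np.ones((n, d))\n    for i in range(n):\n        s = rng.integers(1, d)                       # degenerate sample s = d left out\n        z[i, rng.choice(d, size=s, replace=False)] = 0\n    pi = np.exp(-(1 - np.sqrt(z.sum(axis=1) / d)) ** 2 / (2 * nu ** 2))\n    y = np.array([f(tfidf(row)) for row in z])\n    Z = np.hstack([np.ones((n, 1)), z])\n    G = Z.T @ (pi[:, None] * Z)\n    return np.linalg.solve(G, Z.T @ (pi * y))        # lambda = 0, ordinary weighted least squares\n\nfor n in (200, 2000, 20000):\n    runs = np.array([lime_beta(n)[1] for _ in range(10)])\n    print(n, runs.mean(), runs.std())                # the spread shrinks as n grows\n\\end{verbatim}\nThe empirical spread of the interpretable coefficient shrinks as $n$ grows, in accordance with the theorem.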
\nIn essence, Theorem~\\ref{th:concentration-of-betahat} tells us that we can focus on $\\beta^f$ in order to understand how LIME operates, provided that~$n$ is large enough. \nThe main limitation of Theorem~\\ref{th:concentration-of-betahat} is the dependency of~$n$ on~$d$ and~$\\nu$. \nThe control that we achieve on $\\smallnorm{\\betahat_n-\\beta^f}$ becomes quite poor for large~$d$ or small~$\\nu$: we would then need~$n$ to be unreasonably large in order to witness concentration. \n\nWe notice that Theorem~\\ref{th:concentration-of-betahat} is very similar in its form to Theorem~1 in \\citet{garreau_luxburg_2020_arxiv} except that (i) the dimension is replaced by the number of distinct words in the document to explain, and (ii) there is no discretization parameter in our case. \nThe differences with the analysis in the tabular data framework will be more visible in the next section. \n\n\\subsection{Expression of $\\beta^f$}\n\\label{sec:main:computation}\n\nOur next result shows that we can derive an explicit expression for $\\beta^f$. \nBefore stating our result, we need to introduce more notation. \nFrom now on, we let $x$ be a random variable such that $x_1,\\ldots,x_n$ are i.i.d. copies of $x$. \nSimilarly,~$\\pi$ corresponds to the draw of the $\\pi_i$s and $z$ to that of the~$z_i$s. \n\n\\begin{definition}[$\\alpha$ coefficients]\n\\label{def:alphas}\nDefine $\\alpha_0\\defeq \\expec{\\pi}$ and, for any $1\\leq p\\leq d$, \n\\begin{equation}\n\\label{eq:def-alphas-main}\n\\alpha_p \\defeq \\expec{\\pi \\cdot z_1 \\cdots z_p }\n\\, .\n\\end{equation}\n\\end{definition}\n\nIntuitively, when $\\nu$ is large, $\\alpha_p$ corresponds to the probability that $p$ given distinct words are all present in $x$. \nThe sampling process of LIME is such that $\\alpha_p$ does not depend on the exact set of indices considered. \nIn fact,~$\\alpha_p$ only depends on~$d$ and~$\\nu$. \nWe show in the supplementary material that it is possible to compute the $\\alpha$ coefficients in closed-form as a function of~$d$ and~$\\nu$:\n\n\\begin{proposition}[Computation of the $\\alpha$ coefficients]\n\\label{prop:alphas-computation-main}\nLet $0\\leq p\\leq d$. \nFor any $d\\geq 1$ and $\\nu >0$, it holds that \n\\[\n\\alpha_p = \\frac{1}{d} \\sum_{s=1}^d \\prod_{k=0}^{p-1} \\frac{d-s-k}{d-k} \\psi\\left(\\frac{s}{d}\\right)\n\\, .\n\\]\n\\end{proposition}\n\nFrom these coefficients, we form the normalization constant\n\\begin{equation}\n\\label{eq:def-densct}\nc_d \\defeq (d-1)\\alpha_0\\alpha_2 -d\\alpha_1^2 + \\alpha_0\\alpha_1\n\\, .\n\\end{equation}\n\nWe will also need the following. 
\n\n\\begin{definition}[$\\sigma$ coefficients]\n\\label{def:sigmas}\nFor any $d\\geq 1$ and $\\nu >0$, define\n\\begin{equation}\n\\label{eq:def-sigmas}\n\\begin{cases}\n\\sigma_1 &\\defeq -\\alpha_1\n\\, , \\\\\n\\sigma_2 &\\defeq \\frac{(d-2)\\alpha_0 \\alpha_2 - (d-1)\\alpha_1^2 + \\alpha_0\\alpha_1}{\\alpha_1-\\alpha_2}\\, , \\\\\n\\sigma_3 &\\defeq \\frac{\\alpha_1^2-\\alpha_0\\alpha_2}{\\alpha_1-\\alpha_2 }\n\\, .\n\\end{cases}\n\\end{equation}\n\\end{definition}\n\nWith this notation in hand, we have:\n\n\\begin{proposition}[Expression of $\\beta^f$]\n\\label{prop:expression-of-beta}\nUnder the assumptions of Theorem~\\ref{th:concentration-of-betahat}, we have $c_d >0$ and, for any $1\\leq j\\leq d$,\n\\begin{align}\n\\label{eq:def-beta}\n\\beta_j^f = c_d^{-1}\\biggl\\{\\sigma_1 \\expec{\\pi f(\\normtfidf{x})} & + \\sigma_2 \\expec{\\pi z_j f(\\normtfidf{x})} \\\\\n&+ \\sigma_3 \\sum_{\\substack{k=1 \\\\ k\\neq j}}^d \\expec{\\pi z_k f(\\normtfidf{x})}\\biggr\\} \\notag \n\\, .\n\\end{align}\n\\end{proposition}\n\nWe also have an expression for the intercept which can be found in the supplementary material, as well as the proof of Proposition~\\ref{prop:expression-of-beta}. \nAt first glance, Eq.~\\eqref{eq:def-beta} is quite similar to Eq.~(6) in \\citet{garreau_luxburg_2020_arxiv}, which gives the expression of $\\beta_j^f$ in the tabular data case. \nThe main difference is the TF-IDF transform in the expectation, embodied by $\\Normtfidf$, and the additional terms (there is no $\\sigma_3$ factor in the tabular data case). \nIn addition, the expression of the $\\sigma$ coefficients is much more complicated than in the tabular data case. \nWe now present some immediate consequences of Proposition~\\ref{prop:expression-of-beta}. \n\n\\paragraph{Linearity of explanations. }\nPerhaps the most striking feature of Eq.~\\eqref{eq:def-beta} is that it is \\textbf{linear in $f$}.\nMore precisely, the mapping $f\\mapsto \\beta^f$ is linear: for any two functions $f$ and $g$, we have\n\\[\n\\beta^{f+g} = \\beta^f + \\beta^g\n\\, .\n\\]\nTherefore, because of Theorem~\\ref{th:concentration-of-betahat}, the explanations $\\betahat_n$ obtained for a finite sample of new examples are also approximately linear in the model to explain. \nWe illustrate this phenomenon in Figure~\\ref{fig:linearity}. \nThis is remarkable: many models used in machine learning can be written as a linear combination of smaller models (\\emph{e.g.}, generalized linear models, kernel regressors, decision trees and random forests). \nIn order to understand the explanations provided by these complicated models, one can try and understand the explanations for the elementary components of the model first. \n\n\\paragraph{Large bandwidth. }\nIt can be difficult to get a good sense of the values taken by the $\\sigma$ coefficients, and therefore of $\\beta^f$. \nLet us see how Proposition~\\ref{prop:expression-of-beta} simplifies in the large bandwidth regime and what insights we can gain. \nWe denote by $\\betainf$ the limit of $\\beta^f$ when $\\nu\\to +\\infty$. \nWhen $\\nu\\to +\\infty$, we prove in the supplementary material that, for any $1\\leq j\\leq d$, up to $\\bigo{1\/d}$ terms and a numerical constant, the $j$-th coordinate of $\\betainf$ is then approximately equal to \n\\begin{equation*}\n\\left(\\betainf^f\\right)_j\\! \\approx\\! 
\\condexpec{f(\\normtfidf{x})}{\\word_j\\in x} - \\frac{1}{d}\\sum_{k\\neq j} \\condexpec{f(\\normtfidf{x})}{\\word_k\\in x}\n.\n\\end{equation*}\nIntuitively, the interpretable coefficient associated to the word $\\word_j$ is high if \\textbf{the expected value of the model when word $\\word_j$ is present is significantly higher than the typical expected value when other words are present}. \nWe think that this is reasonable: if the model predicts much higher values when $\\word_j$ belongs to the example, it surely means that~$\\word_j$ being present is important for the prediction. \nOf course, this is far from the full picture, since (i) this reasoning is only valid for large bandwidth, and (ii) in practice, we are concerned with $\\betahat_n$ which may be not so close to $\\beta^f$ for small $n$. \n\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.25]{linsum.pdf}\n \\vspace{-0.1in}\n \\caption{\\label{fig:linearity}The explanations given by LIME for the sum of two models (here two random forests regressors) are the sum of the explanations for each model, up to noise coming from the sampling procedure.}\n\\end{figure}\n\n\n\\subsection{Sketch of the proof}\n\nWe conclude this section with a brief sketch of the proof of Theorem~\\ref{th:concentration-of-betahat}, the full proof can be found in the supplementary material. \n\nSince we set $\\lambda=0$ in Eq.~\\eqref{eq:main-problem}, $\\betahat_n$ is the solution of a weighted least-squares problem. \nDenote by $W\\in\\Reals^{n\\times n}$ the diagonal matrix such that $W_{i,i}=\\pi_i$, and set $Z\\in\\{0,1\\}^{n\\times (d+1)}$ the matrix such that its $i$th line is $(1,z_i^\\top)$. \nThen the solution of Eq.~\\eqref{eq:main-problem} is given by \n\\[\n\\betahat_n = \\left(Z^\\top WZ\\right)^{-1}Z^\\top Wy\n\\, ,\n\\]\nwhere we defined $y\\in\\Reals^n$ such that $y_i=f(\\normtfidf{x_i})$ for all $1\\leq i\\leq n$. \nLet us set $\\Sigmahat_n\\defeq \\frac{1}{n}Z^\\top WZ$ and $\\Gammahat_n^f\\defeq \\frac{1}{n}Z^\\top Wy$. \nBy the law of large numbers, we know that both $\\Sigmahat_n$ and $\\Gammahat_n^f$ converge in probability towards their population counterparts $\\Sigma\\defeq \\smallexpec{\\Sigmahat_n}$ and $\\Gamma^f\\defeq \\smallexpec{\\Gammahat_n}$. \nTherefore, provided that $\\Sigma$ is invertible, $\\betahat_n$ is close to $\\beta^f\\defeq \\Sigma^{-1}\\Gamma^f$ with high probability. \n\nAs we have seen in Section~\\ref{sec:lime}, the main differences with respect to the tabular data implementation are (i) the interpretable features, and (ii) the TF-IDF transform. \nThe first point lead to a completely different $\\Sigma$ than the one obtained in \\citet{garreau_luxburg_2020_arxiv}. \nIn particular, it has no zero coefficients, leading to more complicated expression for $\\beta^f$ and additional challenges when controlling $\\opnorm{\\Sigma^{-1}}$. \nThe second point is quite challenging since, as noted in Section~\\ref{sec:tfidf}, \\textbf{the TF-IDF transform of a document changes radically when deleting words at random in the document.} \nThis is the main reason why we have to resort to approximations when dealing with linear models. \n\n\n\n\n\\section{Expression of $\\beta^f$ for simple models}\n\\label{sec:discussion}\n\nIn this section, we see how to specialize Proposition~\\ref{prop:expression-of-beta} to simple models $f$. \nRecall that our main goal in doing so is to investigate whether it makes sense or not to use LIME in these cases. 
\nWe will focus on two classes of models: decision trees (Section~\\ref{sec:decision-trees}) and linear models (Section~\\ref{sec:linear-models}). \n\n\\subsection{Decision trees}\n\\label{sec:decision-trees}\n\nIn this section we focus on simple decision trees built on the presence or absence of given words. \nFor instance, let us look at the model returning $1$ if the word ``food'' is present, or if ``about'' and ``everything'' are present in the document. \nIdeally, LIME would give high positive weights to ``food,'' ``about,'' and ``everything,'' if they are present in the document to explain, and small weight to all other words. \n\nWe first notice that such simple decision trees can be written as sums of products of the binary features. \nIndeed, recall that we defined $z_j=\\indic{\\word_j\\in x}$. \nFor instance, suppose that the first three words of our dictionary are ``food,'' ``about,'' and ``everything.''\nThen the model from the previous paragraph can be written \n\\begin{equation}\n\\label{eq:def-g}\ng(x) = z_1 + (1-z_1)\\cdot z_2 \\cdot z_3\n\\, .\n\\end{equation}\n\nNow it is clear that the $z_j$s can be written as a function of the TF-IDF transform of a word, since $\\word_j\\in x$ if, and only if, $\\normtfidf{x}_j > 0$. \nTherefore this class of models falls into our framework and we can use Theorem~\\ref{th:concentration-of-betahat} and Proposition~\\ref{prop:expression-of-beta} in order to gain insight on the explanations provided by LIME. \nFor instance, Eq.~\\eqref{eq:def-g} can be written as $f(\\normtfidf{x})$ with, for any $\\zeta\\in\\Reals^D$, \n\\[\nf(\\zeta) \\defeq \\indic{\\zeta_1 > 0} + (1-\\indic{\\zeta_1>0}) \\cdot \\indic{\\zeta_2 > 0} \\cdot \\indic{\\zeta_3 > 0}\n\\, .\n\\]\nBy linearity, it is sufficient to know how to compute~$\\beta^f$ when~$f$ is a product of indicator functions. \n\nWe now make an important remark: since the new examples $x_1,\\ldots,x_n$ are created by deleting words at random from the text $\\xi$, \\textbf{$x$ only contains words that are already present in $\\xi$}. \nTherefore, without loss of generality, we can restrict ourselves to the local dictionary (the distinct words of $\\xi$). \nIndeed, for any word $\\word$ not already in $\\xi$, $\\indic{\\word \\in x}=0$ almost surely. \nAs before, we denote by $\\dl$ the local dictionary associated to $\\xi$, and we denote its elements by $\\word_1,\\ldots,\\word_d$. \nWe can compute in closed-form the interpretable coefficients for a product of indicator functions:\n\n\\begin{proposition}[Computation of $\\beta^f$, product of indicator functions]\n\\label{prop:beta-computation-indicator-product-general-main}\nLet $J\\subseteq \\{1,\\ldots,d\\}$ be a set of~$p$ distinct indices and set $f(x) = \\prod_{j\\in J}\\indic{x_j>0}$. \nThen, for any $j\\in J$, \n\\begin{align*}\n\\beta_j^f \\!&=\\! c_d^{-1}\\!\\bigl[\\sigma_1\\alpha_p + \\sigma_2\\alpha_p + (d\\!-\\!p)\\sigma_3\\alpha_{p+1} + (p\\!-\\!1)\\sigma_3\\alpha_p\\bigr]\n\\end{align*}\nand, for any $j\\in\\{1,\\ldots,d\\}\\setminus J$, \n\\begin{align*}\n\\beta_j^f \\!&=\\! c_d^{-1}\\!\\bigl[\\sigma_1\\alpha_p+\\sigma_2\\alpha_{p+1}+(d\\!-\\!p\\!-\\!1)\\sigma_3\\alpha_{p+1} + p\\sigma_3\\alpha_p \\bigr]\n.\n\\end{align*}\n\\end{proposition}\n\nIn particular, when $p=1$ (that is, $J=\\{j\\}$ for a single index~$j$), Proposition~\\ref{prop:beta-computation-indicator-product-general-main} simplifies greatly and we find that, for any $1\\leq k\\leq d$, $\\beta_k^f=\\indic{k=j}$. 
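\n\nThe expressions of Proposition~\\ref{prop:beta-computation-indicator-product-general-main} are easy to evaluate numerically from the closed-form $\\alpha$ coefficients of Proposition~\\ref{prop:alphas-computation-main}. As a sanity check, the following Python snippet (a simple illustration, not part of the proofs) computes $c_d$, the $\\sigma$ coefficients, and the two expressions above for $p=1$, and recovers $\\beta_j^f=1$ and $\\beta_k^f=0$ up to rounding errors:\n\\begin{verbatim}\nimport numpy as np\n\ndef psi(t, nu):\n    return np.exp(-(1 - np.sqrt(1 - t)) ** 2 / (2 * nu ** 2))\n\ndef alpha(p, d, nu):\n    """alpha_p = (1/d) sum_s prod_{k<p} (d-s-k)/(d-k) psi(s/d)."""\n    s = np.arange(1, d + 1, dtype=float)\n    prod = np.ones_like(s)\n    for k in range(p):\n        prod *= (d - s - k) / (d - k)\n    return np.mean(prod * psi(s / d, nu))\n\ndef sigmas_and_cd(d, nu):\n    a0, a1, a2 = (alpha(q, d, nu) for q in range(3))\n    cd = (d - 1) * a0 * a2 - d * a1 ** 2 + a0 * a1\n    s1 = -a1\n    s2 = ((d - 2) * a0 * a2 - (d - 1) * a1 ** 2 + a0 * a1) / (a1 - a2)\n    s3 = (a1 ** 2 - a0 * a2) / (a1 - a2)\n    return s1, s2, s3, cd\n\nd, nu, p = 35, 0.25, 1\ns1, s2, s3, cd = sigmas_and_cd(d, nu)\nap, ap1 = alpha(p, d, nu), alpha(p + 1, d, nu)\nbeta_in = (s1 * ap + s2 * ap + (d - p) * s3 * ap1 + (p - 1) * s3 * ap) / cd\nbeta_out = (s1 * ap + s2 * ap1 + (d - p - 1) * s3 * ap1 + p * s3 * ap) / cd\nprint(beta_in, beta_out)   # approximately 1 and 0\n\\end{verbatim}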
\nThis is already a reassuring result: when the model is just indicating if a given word is present, \\textbf{the explanation given by LIME is one for this word and zero for all the other words}. \n\nIt is slightly more complicated to see what happens when $p\\geq 2$. \nTo this extent, let us set $j\\in J$ and $k\\notin J$. \nThen it follows readily from Proposition~\\ref{prop:beta-computation-indicator-product-general-main} that\n\\[\n\\beta^f_j - \\beta_k^f = c_d^{-1}(\\sigma_2-\\sigma_3)(\\alpha_p-\\alpha_{p+1})\n\\, .\n\\]\nSince $\\alpha_p-\\alpha_{p+1}\\approx \\frac{1}{(p+1)(p+2)}$ and $c_d^{-1}(\\sigma_2-\\sigma_3)\\approx 6$ when $d$ and $\\nu$ are large, we deduce that $\\beta_j^f \\gg \\beta_k^f$. \nMoreover, from Definitions~\\ref{def:alphas} and~\\ref{def:sigmas} one can show that $\\beta_k^f = \\bigo{1\/d}$ when $\\nu$ is large. \nThus Proposition~\\ref{prop:beta-computation-indicator-product-general-main} tells us that \\textbf{LIME gives large positive coefficients to words that are in the support of~$f$ and small coefficients to all the other words}. \nThis is a satisfying property. \n\nTogether with the linearity property, Proposition~\\ref{prop:beta-computation-indicator-product-general-main} allows us to compute $\\beta^f$ for any decision tree that can be written as in Eq.~\\eqref{eq:def-g}. \nWe give an example of our theoretical predictions in Figure~\\ref{fig:decision-tree-result}. \nAs predicted, \\textbf{the words that are pivotal in the prediction have high interpretable coefficients, whereas the other words receive near-zero coefficients}. \nIt is interesting to notice that words that are near the root of the tree receive a greater weight. \nWe present additional experiments in the supplementary material.\n\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.25]{decision_tree.pdf}\n \\vspace{-0.1in}\n \\caption{\\label{fig:decision-tree-result}Theory \\emph{vs} practice for the tree defined by Eq.~\\eqref{eq:def-g}. The black whisker boxes correspond to $100$ runs of LIME with default settings ($n=5000$ new examples and $\\nu=0.25$) whereas the red crosses correspond to the theoretical predictions given by our analysis. The example to explain is a Yelp review with $d=35$ distinct words.}\n\\end{figure}\n\n\\subsection{Linear models}\n\\label{sec:linear-models}\n\nWe now focus on linear models, that is, for any document $x$,\n\\begin{equation}\n\\label{eq:def-linear-model-main}\nf(\\normtfidf{x}) \\defeq \\sum_{j=1}^d \\lambda_j \\normtfidf{x}_j\n\\, ,\n\\end{equation}\nwhere $\\lambda_1,\\ldots,\\lambda_d$ are arbitrary fixed coefficients. \nWe have to resort to approximate computations in this case: from now on, we assume that $\\nu = +\\infty$. \nWe start with the simplest linear function: all coefficients are zero except one, that is, $\\lambda_k=1$ if $k=j$ and $0$ otherwise in Eq.~\\eqref{eq:def-linear-model-main}, for a fixed index~$j$. \nWe need to introduce additional notation before stating our result. \nFor any $1\\leq j\\leq d$, define\n\\[\n\\omega_j \\defeq \\frac{m_j^2v_j^2}{\\sum_{\\ell=1}^d m_\\ell^2v_\\ell^2}\n\\, ,\n\\]\nwhere the $m_\\ell$s and $v_\\ell$s were defined in Section~\\ref{sec:tfidf}. \nFor any strict subset $S$ of $\\{1,\\ldots,d\\}$, define $H_S\\defeq \\sum_{j\\in S}\\omega_j$. \nRecall that $S$ denotes the random subset of indices chosen by LIME in the sampling step (see Section~\\ref{sec:sampling}). \nDefine $E_j= \\condexpec{(1-H_S)^{-1\/2}}{S\\not\\ni j}$ and for any $k\\neq j$, $E_{j,k} = \\condexpec{(1-H_S)^{-1\/2}}{S\\not\\ni j,k}$. 
\nThen we have the following:\n\n\\begin{proposition}[Computation of $\\beta^f$, linear case]\n\\label{prop:beta-computation-linear-main}\nLet $1\\leq j\\leq d$ and assume that $f(\\normtfidf{x})=\\normtfidf{x}_j$. \nThen, for any $1\\leq k\\leq d$ such that $k\\neq j$, \n\\begin{align*}\n\\left(\\betainf^f\\right)_k &= \\biggl[2 E_{j,k} - \\frac{2}{d}\\sum_{\\ell \\neq k,j}E_{j,\\ell}\\biggr] \\normtfidf{\\xi}_j + \\bigo{\\frac{1}{d}}\n\\, ,\n\\end{align*}\nand\n\\begin{align*}\n\\left(\\betainf^f\\right)_j &= \\biggl[3E_j - \\frac{2}{d} \\sum_{k \\neq j}E_{j,k}\\biggr] \\normtfidf{\\xi}_j + \\bigo{\\frac{1}{d}}\n\\, .\n\\end{align*}\n\\end{proposition}\n\nProposition~\\ref{prop:beta-computation-linear-main} is proved in the supplementary material. \nThe main difficulty is to compute the expected value of $\\normtfidf{x}_j$: this is the reason for the $E_j$ terms, for which we find an approximate expression as a function of the $\\omega_k$s. \nAssuming that the $\\omega_k$ are small, we can push this approximation further and show that $E_j \\approx 1.22$ and $E_{j,k}\\approx 1.15$. \nIn particular, \\textbf{these expressions do not depend on~$j$ and~$k$}. \nThus we can drastically simplify the statement of Proposition~\\ref{prop:beta-computation-linear-main}: for any $k\\neq j$, $\\left(\\beta_\\infty^f\\right)_k \\approx 0$ and $\\left(\\beta_\\infty^f\\right)_j \\approx 1.36 \\normtfidf{\\xi}_j$. \nWe can now go back to our original goal, Eq.~\\eqref{eq:def-linear-model-main}. \nBy linearity, we deduce that \n\\begin{equation}\n\\label{eq:simplified-betainf-linear-main}\n\\forall 1\\leq j\\leq d, \\quad \\left(\\beta_\\infty^f\\right)_j \\approx 1.36 \\cdot \\lambda_j \\cdot \\normtfidf{\\xi}_j\n\\, .\n\\end{equation}\nIn other words, up to a numerical constant and small error terms depending on $d$, \\textbf{the explanation for a linear~$f$ is the TF-IDF value of the word multiplied by the coefficient of the linear model. }\nWe believe that this behavior is desirable for an interpretability method: large coefficients in the linear model should intuitively be associated to large interpretable coefficients. \nBut at the same time the TF-IDF of the term is taken into account. \n\nWe observe a very good match between theory and practice (see Figure~\\ref{fig:linear}). \nSurprisingly, this is the case even though we assume that~$\\nu$ is large in our derivations, whereas~$\\nu$ is set to its default value in all our experiments.\nWe present experiments with other bandwidths in the supplementary material. \n\n\\begin{figure}\n \\centering\n\\includegraphics[scale=0.24]{linear.pdf}\n \n \\caption{\\label{fig:linear}Theory \\emph{vs} practice for an arbitrary linear model. The black whisker boxes correspond to $100$ runs of LIME with default settings ($n=5000$ and $\\nu=0.25$). The red crosses correspond to our theoretical predictions: $\\beta_j\\approx 1.36\\lambda_j\\normtfidf{\\xi}_j$. Here $d=29$. }\n\\end{figure}\n\n\\section{Conclusion}\n\nIn this work we proposed the first theoretical analysis of LIME for text data. \nIn particular, we provided a closed-form expression for the interpretable coefficients when the number of perturbed samples is large. \nLeveraging this expression, we exhibited some desirable behavior of LIME such as the linearity with respect to the model. \nIn specific cases (simple decision trees and linear models), we derived more precise expressions, showing that LIME outputs meaningful explanations in these cases. \n\nAs future work, we want to tackle more complex models. 
\nMore precisely, we think that it is possible to obtain approximate statements in the spirit of Eq.~\\eqref{eq:simplified-betainf-linear-main} for models that are not linear. \n\n\\subsubsection*{Acknowledgments}\n\nThis work was partly funded by the UCA DEP grant. \nThe authors want to thank Andr\\'e Galligo for getting them to know each other. \n\n\n\\section*{Organization of the supplementary material}\n\nIn this supplementary material, we collect the proofs of all our theoretical results and additional experiments. \nWe study the covariance matrix in Section~\\ref{sec:study-of-sigma} and the responses in Section~\\ref{sec:study-of-gamma}. \nThe proof of our main results can be found in Section~\\ref{sec:study-of-beta}. \nCombinatorial results needed for the approximation formulas obtained in the linear case are collected in Section~\\ref{sec:subsets-sums}, while other technical results can be found in Section~\\ref{sec:technical}. \nFinally, we present some additional experiments in Section~\\ref{sec:experiments}. \n\n\\paragraph{Notation.}\nFirst, let us quickly recall our notation. \nWe consider $x,z,\\pi$, the generic random variables associated with the sampling of new examples by LIME. \nTo put it plainly, the new examples $x_1,\\ldots,x_n$ are i.i.d. samples from the random variable $x$. \nAlso remember that we denote by $S\\subseteq \\{1,\\ldots,d\\}$ the random subset of indices removed by LIME when creating new samples for a text with $d$ distinct words. \nFor any finite set $R$, we write $\\card{R}$ for the cardinality of $R$. \nWe write $\\Expec_s$ for the expectation conditionally to $\\card{S}=s$. \nSince we consider vectors belonging to $\\Reals^{d+1}$ with the zero-th coordinate corresponding to an intercept, we will often start the numbering at $0$ instead of $1$. \nFor any matrix $M$, we write $\\frobnorm{M}$ for the Frobenius norm of $M$ and $\\opnorm{M}$ for the operator norm of $M$. \n\n\n\\section{The study of $\\Sigma$}\n\\label{sec:study-of-sigma}\n\nWe begin with the study of the covariance matrix. \nWe show in Section~\\ref{sec:computation-of-sigma} how to compute $\\Sigma$. \nWe will see how the $\\alpha$ coefficients defined in the main paper appear. \nIn Section~\\ref{sec:computation-of-sigma-inverse}, we show that it is possible to invert $\\Sigma$ in closed-form: the inverse can be written as a function of $c_d$ and the $\\sigma$ coefficients. \nWe show how $\\Sigmahat_n$ concentrates around $\\Sigma$ in Section~\\ref{sec:sigmahat-concentration}. \nFinally, Section~\\ref{sec:control-opnorm} is dedicated to the control of $\\opnorm{\\Sigma^{-1}}$. \n\n\n\\subsection{Computation of $\\Sigma$}\n\\label{sec:computation-of-sigma}\n\nIn this section, we derive a closed-form expression for $\\Sigma\\defeq \\smallexpec{\\Sigmahat_n}$ as a function of $d$ and $\\nu$. \nRecall that we defined $\\Sigmahat = \\frac{1}{n}Z^\\top WZ$. 
\nBy definition of $Z$ and $W$, we have\n\\[\n\\Sigmahat =\n\\begin{pmatrix}\n\\frac{1}{n}\\sum_{i=1}^n \\pi_i & \\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,1} & \\cdots & \\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,d} \\\\ \n\\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,1} & \\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,1} & \\cdots & \\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,1}z_{i,d} \\\\ \n\\vdots & \\vdots & \\ddots & \\vdots \\\\ \n\\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,d} & \\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,1}z_{i,d} & \\cdots & \\frac{1}{n}\\sum_{i=1}^n \\pi_i z_{i,d}\n\\end{pmatrix}\n\\in\\Reals^{(d+1)\\times (d+1)}\n\\, .\n\\]\nTaking the expectation in the last display with respect to the sampling of new examples yields\n\\begin{equation}\n\\label{eq:def-sigma}\n\\Sigma =\\begin{pmatrix}\n\\expec{\\pi} & \\expec{\\pi z_1} & \\cdots & \\expec{\\pi z_d} \\\\ \n \\expec{\\pi z_1} & \\expec{\\pi z_1 } & \\cdots & \\expec{\\pi z_1 z_d} \\\\ \n\\vdots & \\vdots & \\ddots & \\vdots \\\\ \n\\expec{\\pi z_d} & \\expec{\\pi z_1z_d } & \\cdots & \\expec{\\pi z_d}\n\\end{pmatrix}\n\\in\\Reals^{(d+1)\\times (d+1)}\n\\, .\n\\end{equation}\n\nAn important remark is that $\\expec{\\pi z_j}$ does not depend on $j$. \nIndeed, there is no privileged index in the sampling of $S$ (the subset of removed indices). \nThus we only have to look into $\\expec{\\pi z_1}$ (say). \nFor the same reason, $\\expec{\\pi z_jz_k}$ does not depend on the $2$-uple $(j,k)$, and we can limit our investigations to $\\expec{\\pi z_1z_2}$. \nThis is the reason why we defined $\\alpha_0 = \\expec{\\pi}$ and, for any $1\\leq p\\leq d$, \n\\begin{equation}\n\\label{eq:def-alphas}\n\\alpha_p = \\expec{\\pi \\cdot z_1 \\cdots z_p}\n\\end{equation}\nin the main paper. \nWe recognize the definition of the $\\alpha_p$s in Eq.~\\eqref{eq:def-sigma} and we write\n\\[\n\\Sigma_{j,k} = \n\\begin{cases}\n\\alpha_0 &\\text{ if } j=k=0, \\\\\n\\alpha_1 &\\text{ if } j=0 \\text{ and } k> 0 \\text{ or } j> 0 \\text{ and } k=0 \\text{ or } j=k> 0, \\\\\n\\alpha_2 &\\text{ otherwise. }\n\\end{cases}\n\\]\nAs promised, we can be more explicit regarding the $\\alpha$ coefficients. \nRecall that we defined the mapping \n\\begin{align}\n\\psi\\colon [0,1]& \\longrightarrow \\Reals \\label{eq:def-psi} \\\\\nt &\\longmapsto \\exp{-(1-\\sqrt{1-t})^2\/(2\\nu^2)} \\notag \n\\, .\n\\end{align}\nIt is a decreasing mapping (see Figure~\\ref{fig:psi-t}). \nWith this notation in hand, we have the following expression for the $\\alpha$ coefficients (this is Proposition~1 in the paper):\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.16]{psi_sup_compressed.pdf}\n \\caption{\\label{fig:psi-t}The function $\\psi$ defined by Eq.~\\eqref{eq:def-psi} with bandwidth parameter $\\nu=0.25$. In orange (resp. blue), one can see the upper (resp. lower) bound given by Eq.~\\eqref{eq:psi-precise-bound}. 
}\n\\end{figure}\n\n\\begin{proposition}[Computation of the $\\alpha$ coefficients]\n\\label{prop:alphas-computation}\nFor any $d\\geq 1$, $\\nu >0$, and $p\\geq 0$, it holds that\n\\[\n\\alpha_p = \\frac{1}{d} \\sum_{s=1}^d \\prod_{k=0}^{p-1} \\frac{d-s-k}{d-k} \\psi\\left(\\frac{s}{d}\\right)\n\\, .\n\\]\n\\end{proposition}\n\nIn particular, the first three $\\alpha$ coefficients can be written\n\\[\n\\alpha_0 = \\frac{1}{d} \\sum_{s=1}^d \\psi\\left(\\frac{s}{d}\\right) \\, ,\n\\quad \n\\alpha_1 = \\frac{1}{d} \\sum_{s=1}^d \\left(1-\\frac{s}{d}\\right)\\psi\\left(\\frac{s}{d}\\right)\n\\, ,\n\\quad \\text{ and } \\quad\n\\alpha_2 = \\frac{1}{d} \\sum_{s=1}^d \\left(1-\\frac{s}{d}\\right)\\left(1-\\frac{s}{d-1}\\right)\\psi\\left(\\frac{s}{d}\\right)\n\\, .\n\\]\n\n\\begin{proof}\nThe idea of the proof is to use the law of total expectation with respect to the collection of events $\\{\\card{S}=s\\}$ for $s\\in\\{1,\\ldots,d\\}$. \nSince $\\proba{\\card{S}=s}=\\frac{1}{d}$ for any $1\\leq s\\leq d$, all that is left to compute is the expectation of $\\pi z_1\\cdots z_p$ conditionally to $\\card{S}=s$. \nAccording to the remark in Section~2.3 of the main paper, $\\pi = \\psi(s\/d)$ conditionally to $\\{\\card{S}=s\\}$.\nWe can conclude since, according to Lemma~\\ref{lemma:proba-containing-cond}, \n\\[\n\\probaunder{\\word_1\\in x,\\ldots,\\word_p\\in x}{s} = \\frac{(d-s)(d-s-1)\\cdots (d-s-p+1)}{d(d-1)\\cdots (d-p+1)}\n\\, .\n\\]\n\\end{proof}\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.11]{alpha0_nu_compressed.pdf} \n\\includegraphics[scale=0.11]{alpha1_nu_compressed.pdf}\n\\includegraphics[scale=0.11]{alpha2_nu_compressed.pdf}\n\\includegraphics[scale=0.11]{alpha3_nu_compressed.pdf}\n\\caption{\\label{fig:alphas-bandwidth-dependency}Behavior of the first $\\alpha$ coefficients with respect to the bandwidth parameter $\\nu$. The red vertical lines mark the default bandwidth choice ($\\nu=0.25$). The green horizontal lines denote the limits for large $d$ given by Corollary~\\ref{cor:alphas-approx}.}\n\\end{figure}\n\nIt is important to notice that, when $\\nu \\to +\\infty$, $\\psi(t)\\to 1$ for any $t\\in [0,1]$. \nAs a consequence, in the large bandwidth regime, the $\\psi(s\/d)$ weights are arbitrarily close to one. \nWe demonstrate this effect in Figure~\\ref{fig:alphas-bandwidth-dependency}. \nIn this situation, the $\\alpha$ coefficients take a simpler form. \n\n\\begin{corollary}[Large bandwidth approximation of $\\alpha$ coefficients]\n\\label{cor:alphas-approx}\nFor any $0\\leq p\\leq d$, it holds that\n\\[\n\\lim_{\\nu\\to +\\infty} \\alpha_p = \\frac{d-p}{(p+1)d}\n\\, .\n\\]\n\\end{corollary}\n\nWe report these approximate values in Figure~\\ref{fig:alphas-bandwidth-dependency}. \nIn particular, when both $\\nu$ and $d$ are large, we can see that $\\alpha_p\\approx 1\/(p+1)$. \nThus $\\alpha_0\\approx 1$, $\\alpha_1\\approx \\frac{1}{2}$, and $\\alpha_2\\approx \\frac{1}{3}$. \n\n\\begin{proof}\nWhen $\\nu\\to +\\infty$, we have $\\psi(s\/d)\\to 1$ and we can conclude directly by using Lemma~\\ref{lemma:proba-containing}. \n\\end{proof}\n\n\nNotice that we can be slightly more precise than Corollary~\\ref{cor:alphas-approx}. \nIndeed, $\\psi$ is decreasing on $[0,1]$, thus for any $t\\in [0,1]$, $\\exp{-1\/(2\\nu^2)}\\leq \\psi(t)\\leq 1$. \nTherefore we can present some simple bounds for the $\\alpha$ coefficients, which are accurate when $\\nu$ is large. 
\n\n\\begin{corollary}[Bounds on the $\\alpha$ coefficients]\n\\label{cor:alphas-bounds}\nFor any $0\\leq p\\leq d$, it holds that \n\\[\n\\frac{d-p}{(p+1)d} \\exps{\\frac{-1}{2\\nu^2}} \\leq \\alpha_p \\leq \\frac{d-p}{(p+1)d}\n\\, .\n\\]\n\\end{corollary}\n\nOne can further show that, for any $0\\leq t\\leq 1$, \n\\begin{equation}\n\\label{eq:psi-precise-bound}\n\\exp{\\frac{-t^2}{2\\nu^2}} \\leq \\psi(t) \\leq \\exp{\\frac{-t^2}{8\\nu^2}}\n\\, .\n\\end{equation}\nUsing Eq.~\\eqref{eq:psi-precise-bound} together with the series-integral comparison theorem would yield very accurate bounds for the $\\alpha$ coefficients and related quantities, but we will not follow that road. \n\n\n\n\\subsection{Computation of $\\Sigma^{-1}$}\n\\label{sec:computation-of-sigma-inverse}\n\nIn this section, we present a closed-form formula for the matrix inverse of $\\Sigma$ as a function of $d$ and $\\nu$.\n\n\\begin{minipage}{0.7\\textwidth}\n\\begin{proposition}[Computation of $\\Sigma^{-1}$]\n\\label{prop:sigma-inverse-computation}\nFor any $d\\geq 1$ and $\\nu >0$, recall that we defined\n\\[\nc}% constant in the denominator of \\Sigma^{-1_d = (d-1)\\alpha_0\\alpha_2 -d\\alpha_1^2 + \\alpha_0\\alpha_1\n\\, .\n\\]\nAssume that $c}% constant in the denominator of \\Sigma^{-1_d\\neq 0$ and $\\alpha_1\\neq \\alpha_2$. \nDefine $\\sigma_0 \\defeq (d-1)\\alpha_2 + \\alpha_1$ and recall that we set \n\\[\n\\begin{cases}\n\\sigma_1 &= -\\alpha_1\n\\, , \\\\\n\\sigma_2 &= \\frac{(d-2)\\alpha_0 \\alpha_2 - (d-1)\\alpha_1^2 + \\alpha_0\\alpha_1}{\\alpha_1-\\alpha_2}\\, , \\\\\n\\sigma_3 &= \\frac{\\alpha_1^2-\\alpha_0\\alpha_2}{\\alpha_1-\\alpha_2 }\n\\, .\n\\end{cases}\n\\]\nThen it holds that\n\\begin{equation}\n\\label{eq:sigma-inverse-computation}\n \\Sigma^{-1} = \n \\frac{1}{c}% constant in the denominator of \\Sigma^{-1_d}\n\\begin{pmatrix}\n \\sigma_0 & \\sigma_1 & \\sigma_1 &\\cdots & \\sigma_1 \\\\ \n \n \\sigma_1 & \\sigma_2 & \\sigma_3 & \\cdots & \\sigma_3 \\\\\n \n \\sigma_1 & \\sigma_3 & \\sigma_2 & \\ddots & \\vdots \\\\ \n \n \\vdots & \\vdots & \\ddots & \\ddots & \\sigma_3 \\\\\n \n \\sigma_1 & \\sigma_3 & \\cdots & \\sigma_3 & \\sigma_2 \\\\\n\\end{pmatrix}\n\\in\\Reals^{(d+1)\\times (d+1)}\n\\, .\n\\end{equation}\n\\end{proposition}\n\\end{minipage}\n\\begin{minipage}{0.25\\textwidth}\n \\begin{center}\n \\includegraphics[scale=0.11]{cd_compressed.pdf}\n \\end{center}\n \\captionof{figure}{\\label{fig:dencst}Evolution of the normalization constant $c}% constant in the denominator of \\Sigma^{-1_d$ as a function of the bandwidth for $d=30$. In red, the default bandwidth $\\nu=0.25$, in green the limit for large bandwidth given by Corollary~\\ref{cor:approximate-sigma-inverse}.}\n\\end{minipage}\n\n\nWe display the evolution of the $\\sigma_i\/c}% constant in the denominator of \\Sigma^{-1_d$ coefficients with respect to $\\nu$ in Figure~\\ref{fig:sigmas}. \n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.11]{sigma0__nu_compressed.pdf} \n \\includegraphics[scale=0.11]{sigma1__nu_compressed.pdf}\n \\includegraphics[scale=0.11]{sigma2__nu_compressed.pdf}\n \\includegraphics[scale=0.11]{sigma3__nu_compressed.pdf}\n \\caption{\\label{fig:sigmas}Evolution of $\\sigma_i\/c}% constant in the denominator of \\Sigma^{-1_d$ as a function of $\\nu$ for $1\\leq i\\leq 4$ for $d=30$. In red the default value of the bandwidth. In green the limits given by Corollary~\\ref{cor:approximate-sigma-inverse}. 
We can see that the $\\sigma$ coefficients are close to these limit values for the default bandwidth. }\n\\end{figure}\n\n\\begin{proof}\nFrom Eq.~\\eqref{eq:def-sigma}, we can see that $\\Sigma$ is a block matrix. \nThe result follows from the block matrix inversion formula and one can check directly that $\\Sigma\\cdot \\Sigma^{-1} = \\Identity_{d+1}$. \n\\end{proof}\n\nOur next result shows that the assumptions of Proposition~\\ref{prop:sigma-inverse-computation} are satisfied: $\\alpha_1-\\alpha_2$ and $c}% constant in the denominator of \\Sigma^{-1_d$ are positive quantities. \nIn fact, we prove a slightly stronger statement which will be necessary to control the operator norm of $\\Sigma^{-1}$. \n\n\\begin{proposition}[$\\Sigma$ is invertible]\n\\label{prop:large-nu-makes-everything-ok}\nFor any $d\\geq 2$, \n\\[\n\\alpha_1 - \\alpha_2 \\geq \\frac{\\exps{\\frac{-1}{2\\nu^2}}}{6} > 0\n\\, ,\n\\quad \\text{ and } \\quad \nc}% constant in the denominator of \\Sigma^{-1_d \\geq \\frac{\\exps{\\frac{-2}{\\nu^2}}}{40} > 0\n\\, .\n\\]\n\\end{proposition}\n\n\\begin{proof}\nBy definition of the $\\alpha$ coefficients (Eq.~\\eqref{eq:def-alphas}), we have\n\\[\n\\alpha_1 - \\alpha_2 = \\frac{1}{d}\\sum_{s=1}^d \\left(1-\\frac{s}{d}\\right) \\frac{s}{d-1} \\psi\\left(\\frac{s}{d}\\right)\n\\, .\n\\]\nSince $\\exps{\\frac{-1}{2\\nu^2}} \\leq \\psi(t) \\leq 1$ for any $t\\in [0,1]$, we have\n\\begin{equation}\n\\label{eq:large-nu-aux-1}\n\\exps{\\frac{-1}{2\\nu^2}} \\cdot \\frac{1}{d}\\sum_{s=1}^d \\left(1-\\frac{s}{d}\\right) \\frac{s}{d-1} = \\frac{d+1}{6d}\\cdot \\exps{\\frac{-1}{2\\nu^2}}\n\\leq \\alpha_1-\\alpha_2 \\leq \\frac{d+1}{6d}\n\\, .\n\\end{equation}\nThe right-hand side of Eq.~\\eqref{eq:large-nu-aux-1} yields the promised bound. \nNote that the same reasoning gives\n\\begin{equation}\n\\label{eq:large-nu-aux-2}\n\\frac{d+1}{2d} \\cdot \\exps{\\frac{-1}{2\\nu^2}} \\leq \\alpha_0 - \\alpha_1 \\leq \\frac{d+1}{2d}\n\\, .\n\\end{equation}\n\nLet us now find a lower bound for $c}% constant in the denominator of \\Sigma^{-1_d$. \nWe first start by noticing that \n\\begin{align}\nc}% constant in the denominator of \\Sigma^{-1_d &= d\\alpha_1(\\alpha_0-\\alpha_1) - (d-1)\\alpha_0(\\alpha_1-\\alpha_2) \\label{eq:dencst-alt-writing} \\\\\n&= \\sum_{s=1}^d \\left(1-\\frac{s}{d}\\right) \\psi\\left(\\frac{s}{d}\\right) \\cdot \\frac{1}{d}\\sum_{s=1}^d \\frac{s}{d}\\psi\\left(\\frac{s}{d}\\right) - \\sum_{s=1}^d \\psi\\left(\\frac{s}{d}\\right) \\cdot \\frac{1}{d} \\sum_{s=1}^d \\left(1-\\frac{s}{d}\\right) \\psi\\left(\\frac{s}{d}\\right) \\notag \\\\\nc}% constant in the denominator of \\Sigma^{-1_d &= \\frac{1}{d}\\left[ \\sum_{s=1}^d \\psi\\left(\\frac{s}{d}\\right) \\cdot \\sum_{s=1}^d \\frac{s^2}{d^2}\\psi\\left(\\frac{s}{d}\\right) - \\left(\\sum_{s=1}^d \\frac{s}{d}\\psi\\left(\\frac{s}{d}\\right)\\right)^2\\right] \\notag \n\\, .\n\\end{align}\nTherefore, by Cauchy-Schwarz inequality, $c}% constant in the denominator of \\Sigma^{-1_d\\geq 0$. \nIn fact, $c}% constant in the denominator of \\Sigma^{-1_d>0$ since the equality case in Cauchy-Schwarz is attained for proportional summands, which is not the case here. \n\nHowever, we need to improve this result if we want to control $\\opnorm{\\Sigma^{-1}}$ more precisely. \nTo this extent, we use a refinement of Cauchy-Schwarz inequality obtained by \\citet{filipovski_2019}. 
\nLet us set, for any $1\\leq s\\leq d$, \n\\[\na_s \\defeq \\sqrt{\\psi\\left(\\frac{s}{d}\\right)}\\, ,\n\\quad b_s \\defeq \\frac{s}{d}\\sqrt{\\psi\\left(\\frac{s}{d}\\right)}\\, ,\n\\quad A \\defeq \\sqrt{\\sum_{s=1}^d a_s^2} \\, ,\n\\quad \\text{ and } B\\defeq \\sqrt{\\sum_{s=1}^d b_s^2}\n\\, .\n\\]\nWith these notation, \n\\[\nc}% constant in the denominator of \\Sigma^{-1_d = \\frac{1}{d}\\left[A^2B^2-\\left(\\sum_{s=1}^d a_sb_s\\right)^2\\right]\n\\, ,\n\\]\nand Cauchy-Schwarz yields $A^2B^2\\geq \\left(\\sum_{s=1}^d a_sb_s\\right)^2$. \nTheorem~2.1 in \\citet{filipovski_2019} is a stronger result, namely\n\\begin{equation}\n\\label{eq:improved-cauchy-schwarz}\nAB\\geq \\sum_{s=1}^da_sb_s + \\frac{1}{4}\\sum_{s=1}^d \\frac{(a_s^2 B^2 - b_s^2 A^2)^2}{a_s^4B^4 + b_s^4 A^4} a_sb_s\n\\, .\n\\end{equation}\nLet us focus on this last term. \nSince all the terms are non-negative, we can lower bound by the term of order $d$, that is,\n\\begin{equation}\n\\label{eq:large-nu-aux-3}\n\\frac{1}{4}\\sum_{s=1}^d \\frac{(a_s^2 B^2 - b_s^2 A^2)^2}{a_s^4B^4 + b_s^4 A^4} a_sb_s \\geq \\frac{1}{4} \\frac{(b_d^2A^2-a_d^2B^2)^2}{b_d^4A^4+a_d^4B^4}a_db_d = \\frac{1}{4}\\frac{(A^2-B^2)^2}{A^4+B^4}\\psi(1)\n\\, ,\n\\end{equation}\nsince $a_d=b_d = \\sqrt{\\psi(1)}$. \nOn one side, we notice that\n\\begin{align}\nA^2-B^2 &= \\sum_{s=1}^d \\left(1-\\frac{s^2}{d^2}\\right)\\psi\\left(\\frac{s}{d}\\right) \\notag \\\\\n&\\geq \\exp{\\frac{-1}{2\\nu^2}} \\cdot \\sum_{s=1}^d \\left(1-\\frac{s^2}{d^2}\\right) \\notag \\tag{for any $t\\in[0,1]$, $\\psi(t)\\geq \\exps{-1\/(2\\nu^2)}$} \\\\\n&= \\exp{\\frac{-1}{2\\nu^2}} \\cdot \\frac{1}{6}\\left(4d-\\frac{1}{d}-3\\right) \\notag \\\\\nA^2-B^2 &\\geq \\frac{3d\\cdot \\exp{\\frac{-1}{2\\nu^2}}}{8} \\notag \n\\, ,\n\\end{align}\nwhere we used $d\\geq 2$ in the last display. \nWe deduce that $(A^2-B^2)^2 \\geq 9d^2\\exps{\\frac{-1}{2\\nu^2}}\/64$. \nOn the other side, it is clear that $A^2\\leq d$, and \n\\[\nB^2 \\leq \\sum_{s=1}^d \\frac{s^2}{d^2} = \\frac{(d+1)(2d+1)}{6d}\n\\, .\n\\]\nFor any $d\\geq 2$, we have $B^2\\leq 5d\/8$, and we deduce that $A^4+B^4\\leq \\frac{89}{64}d^2$. \nTherefore,\n\\[\n\\frac{(A^2-B^2)^2}{A^4+B^4} \\geq \\frac{9\\exps{\\frac{-1}{\\nu^2}}}{89}\n\\, .\n\\]\nComing back to Eq.~\\eqref{eq:large-nu-aux-3}, we proved that \n\\[\n\\frac{1}{4}\\sum_{s=1}^d \\frac{(a_s^2 B^2 - b_s^2 A^2)^2}{a_s^4B^4 + b_s^4 A^4} a_sb_s \\geq \\frac{9\\exps{\\frac{-3}{2\\nu^2}}}{356}\n\\, .\n\\]\nPlugging into Eq.~\\eqref{eq:improved-cauchy-schwarz} and taking the square, we deduce that \n\\[\nA^2B^2\\geq \\left(\\sum_{s=1}^d a_sb_s\\right)^2 + 2\\cdot \\sum_{s=1}^d a_sb_s\\cdot \\frac{9\\exps{\\frac{-3}{2\\nu^2}}}{356} + \\frac{81\\exps{\\frac{-3}{\\nu^2}}}{126736}\n\\, .\n\\]\nBut $\\sum a_sb_s \\geq d\\exps{\\frac{-1}{2\\nu^2}}\/2$, therefore, ignoring the last term, we have\n\\[\nA^2B^2 -\\left(\\sum_{s=1}^d a_sb_s\\right)^2 \\geq \\frac{9d\\exps{\\frac{-2}{\\nu^2}}}{356}\n\\, .\n\\]\nWe conclude by noticing that $356\/9\\leq 40$. \n\\end{proof}\n\n\\begin{remark}\nWe suspect that the correct lower bound for $c}% constant in the denominator of \\Sigma^{-1_d$ is actually of order $d$, but we did not manage to prove it. \nCareful inspection of the proof shows that this $d$ factor is lost when considering only the last term of the summation in Eq.~\\eqref{eq:improved-cauchy-schwarz}. \nIt is however challenging to control the remaining terms, since $B^2$ is roughly half of $A^2$ and $\\frac{s^2}{d^2}B^2-A^2$ is close to $0$ for some values of $s$. 
\n\\end{remark}\n\nWe conclude this section by giving an approximation of $\\Sigma^{-1}$ for large bandwidth. \nThis approximation will be particularly useful in Section~\\ref{sec:beta-computation}. \n\n\\begin{corollary}[Large bandwidth approximation of $\\Sigma^{-1}$]\n\\label{cor:approximate-sigma-inverse}\nFor any $d\\geq 2$, when $\\nu \\to +\\infty$, we have \n\\[\nc}% constant in the denominator of \\Sigma^{-1_d \\longrightarrow \\frac{d^2-1}{12d} \n\\, ,\n\\]\nand, as a consequence,\n\\begin{equation}\n\\label{eq:def-sigma-infty}\n\\begin{cases}\n\\frac{\\sigma_0}{c}% constant in the denominator of \\Sigma^{-1_d} &\\to \\frac{2(2d-1)}{d+1} = 4 - \\frac{6}{d} + \\bigo{\\frac{1}{d^2}} \\\\\n\\frac{\\sigma_1}{c}% constant in the denominator of \\Sigma^{-1_d} &\\to \\frac{-6}{d+1} = - \\frac{6}{d} + \\bigo{\\frac{1}{d^2}} \\\\\n\\frac{\\sigma_2}{c}% constant in the denominator of \\Sigma^{-1_d} &\\to \\frac{6(d^2-2d+3)}{(d+1)(d-1)} = 6 - \\frac{12}{d} + \\bigo{\\frac{1}{d^2}} \\\\\n\\frac{\\sigma_3}{c}% constant in the denominator of \\Sigma^{-1_d} &\\to \\frac{-6(d-3)}{(d+1)(d-1)} = -\\frac{6}{d} + \\bigo{\\frac{1}{d^2}}\\, .\n\\end{cases}\n\\end{equation}\n\\end{corollary}\n\n\\begin{proof}\nThe proof is straightforward from the definition of $c}% constant in the denominator of \\Sigma^{-1_d$ and the $\\sigma$ coefficients, and Corollary~\\ref{cor:alphas-approx}. \n\\end{proof}\n\n\\subsection{Concentration of $\\Sigmahat_n$}\n\\label{sec:sigmahat-concentration}\n\nWe now turn to the concentration of $\\Sigmahat_n$ around $\\Sigma$. \nMore precisely, we show that $\\Sigmahat_n$ is close to $\\Sigma$ in operator norm, with high probability. \nSince the definition of $\\Sigmahat_n$ is identical to the one in the Tabular LIME case, we can use the proof machinery of \\citet{garreau_luxburg_2020_arxiv}. \n\n\\begin{proposition}[Concentration of $\\Sigmahat_n$]\n\\label{prop:sigmahat-concentration}\nFor any $t\\geq 0$, \n\\[\n\\proba{\\opnorm{\\Sigmahat_n - \\Sigma} \\geq t} \\leq 4d\\cdot \\exp{\\frac{-nt^2}{32d^2}}\n\\, .\n\\]\n\\end{proposition}\n\n\\begin{proof}\nWe can write $\\Sigmahat=\\frac{1}{n}\\sum_i \\pi_i Z_iZ_i^\\top$. \nThe summands are bounded i.i.d. random variables, thus we can apply the matrix version of Hoeffding inequality. \nMore precisely, the entries of $\\Sigmahat_n$ belong to $[0,1]$ by construction, and Corollary~\\ref{cor:alphas-bounds} guarantees that the entries of $\\Sigma$ also belong to $[0,1]$. \nTherefore, if we set $M_i\\defeq \\frac{1}{n}\\pi_i Z_iZ_i^\\top -\\Sigma$, then the $M_i$ satisfy the assumptions of Theorem~21 in \\citet{garreau_luxburg_2020_arxiv} and we can conclude since $\\frac{1}{n}\\sum_i M_i = \\Sigmahat_n-\\Sigma$. \n\\end{proof}\n\n\\subsection{Control of $\\opnorm{\\Sigma^{-1}}$}\n\\label{sec:control-opnorm}\n\nWe now turn to the control of $\\opnorm{\\Sigma^{-1}}$. \nEssentially, our strategy is to bound the entries of $\\Sigma^{-1}$, and then to derive an upper bound for $\\opnorm{\\Sigma^{-1}}$ by noticing that $\\opnorm{\\Sigma^{-1}}\\leq \\frobnorm{\\Sigma^{-1}}$. \nThus let us start by controlling the $\\sigma$ coefficients in absolute value. \n\n\\begin{lemma}[Control of the $\\sigma$ coefficients]\n\\label{lemma:sigma-elements-control}\nLet $d\\geq 2$ and $\\nu \\geq 1.66$. 
\nThen it holds that\n\\[\n\\abs{\\sigma_0} \\leq \\frac{d}{3} \\, ,\n\\quad \\abs{\\sigma_1} \\leq 1\\, ,\n\\quad \\abs{\\sigma_2} \\leq \\frac{3d}{2}\\exps{\\frac{1}{2\\nu^2}}\\, ,\n\\quad \\text{ and }\\quad \\abs{\\sigma_3} \\leq \\frac{3}{2}\\exps{\\frac{1}{2\\nu^2}}\n\\, .\n\\]\n\\end{lemma}\n\n\\begin{proof}\nBy its definition, we know that $\\sigma_0$ is positive. \nMoreover, from Corollary~\\ref{cor:alphas-bounds}, we see that\n\\begin{align*}\n\\sigma_0 &= (d-1)\\alpha_2 + \\alpha_1 \\\\\n&\\leq \\frac{(d-1)(d-2)}{3d} + \\frac{d-1}{2d} \\\\\n&= \\frac{2d^2-3d+3}{6d}\n\\, .\n\\end{align*}\nOne can check that for any $d\\geq 2$, we have $2d^2-3d+3\\leq 2d^2$, which concludes the proof of the first claim. \n\nSince $\\abs{\\sigma_1}=\\alpha_1$, the second claim is straightforward from Corollary~\\ref{cor:alphas-bounds}. \n\nRegarding $\\sigma_2$, we notice that\n\\[\n\\sigma_2 = \\frac{c}% constant in the denominator of \\Sigma^{-1_d + \\alpha_1^2 - \\alpha_0\\alpha_2}{\\alpha_1-\\alpha_2}\n\\, .\n\\]\nSince $\\alpha_0\\geq \\alpha_1\\geq \\alpha_2$, we have\n\\[\n-\\alpha_1(\\alpha_0-\\alpha_1) \\leq \\alpha_1^2-\\alpha_0\\alpha_2 \\leq \\alpha_0(\\alpha_1-\\alpha_2)\n\\, .\n\\]\nUsing Eqs.~\\eqref{eq:large-nu-aux-1} and~\\eqref{eq:large-nu-aux-2} in conjunction with Corollary~\\ref{cor:alphas-bounds}, we find that $\\abs{\\alpha_1^2-\\alpha_0\\alpha_2} \\leq 1\/4$. \nMoreover, from Eq.~\\eqref{eq:dencst-alt-writing}, we see that $c}% constant in the denominator of \\Sigma^{-1_d\\leq d\/4$. \nWe deduce that \n\\[\n\\abs{\\sigma_2} \\leq \\left(\\frac{d}{4} + \\frac{1}{4}\\right)\\cdot 6\\exps{\\frac{1}{2\\nu^2}}\n\\, ,\n\\]\nwhere we used the first statement of Proposition~\\ref{prop:large-nu-makes-everything-ok} to lower bound $\\alpha_1\\alpha_2$. \nThe results follows, since $d\\geq 2$. \n\nFinally, we write\n\\begin{align*}\n\\abs{\\sigma_3} &= \\frac{\\abs{\\alpha_1^2-\\alpha_0\\alpha_2}}{\\alpha_1-\\alpha_2} \\\\\n&\\leq \\frac{1\/4}{\\frac{d+1}{6d}\\cdot \\exps{\\frac{-1}{2\\nu^2}}}\n\\end{align*}\naccording to Proposition~\\ref{prop:large-nu-makes-everything-ok}. \n\\end{proof}\n\nWe now proceed to bound the operator norm of $\\Sigma^{-1}$. \n\n\\begin{proposition}[Control of $\\opnorm{\\Sigma^{-1}}$]\n\\label{prop:opnorm-control}\nFor any $d\\geq 2$ and any $\\nu > 0$, it holds that \n\\[\n\\opnorm{\\Sigma^{-1}} \\leq 70 d^{3\/2} \\exps{\\frac{5}{2\\nu^2}}\n\\, .\n\\]\n\\end{proposition}\n\n\\begin{remark}\n\\label{remark:influence-of-d}\nWe notice that the control obtained worsens as $d\\to +\\infty$ and $\\nu\\to 0$. \nWe conjecture that the dependency in $d$ is not tight. \nFor instance, showing that $c}% constant in the denominator of \\Sigma^{-1_d=\\Omega(d)$ (that is, improving Proposition~\\ref{prop:large-nu-makes-everything-ok}) would yield an upper bound of order $d$ instead of $d^{3\/2}$. \nThe discussion after Proposition~\\ref{prop:large-nu-makes-everything-ok} indicates that such an improvement may be possible. \nMoreover, we see in experiments that the concentration of $\\betahat_n$ does not degrade that much for large $d$ (see, in particular, Figure~\\ref{fig:linear-large-d} in Section~\\ref{sec:add-exp-linear}), another sign that Proposition~\\ref{prop:opnorm-control} could be improved. \n\\end{remark}\n\n\\begin{proof}\nWe will use the fact that $\\opnorm{\\Sigma^{-1}}\\leq \\frobnorm{\\Sigma^{-1}}$. 
\nWe first write\n\\[\n\\frobnorm{\\Sigma^{-1}}^2 = \\frac{1}{c_d^2}\\left(\\sigma_0^2 + 2d\\sigma_1^2 + d\\sigma_2^2 + (d^2-d)\\sigma_3^2\\right)\n\\, ,\n\\]\nby definition of the $\\sigma$ coefficients. \nOn one hand, using Lemma~\\ref{lemma:sigma-elements-control}, we write\n\\begin{align}\n\\sigma_0^2 + 2d\\sigma_1^2 + d\\sigma_2^2 + (d^2-d)\\sigma_3^2 &\\leq \\frac{d^2}{9} + 2d + d\\cdot (3d\/2)^2 \\exps{\\frac{1}{\\nu^2}} + (d^2-d)\\cdot \\frac{9}{4}\\exps{\\frac{1}{\\nu^2}} \\notag \\\\\n&\\leq 3d^3\\exps{\\frac{1}{\\nu^2}} \\label{eq:opnorm-control-aux-1}\n\\, ,\n\\end{align}\nwhere we used $d\\geq 2$ in the last display. \nOn the other hand, a direct consequence of Proposition~\\ref{prop:large-nu-makes-everything-ok} is that\n\\begin{equation}\n\\label{eq:opnorm-control-aux-2}\n\\frac{1}{c_d^2} \\leq 1600\\exps{\\frac{4}{\\nu^2}}\n\\, .\n\\end{equation}\nPutting together Eqs.~\\eqref{eq:opnorm-control-aux-1} and~\\eqref{eq:opnorm-control-aux-2}, we obtain the claimed result, since $\\sqrt{3\\cdot 1600}\\leq 70$. \n\\end{proof}\n\n\\section{The study of $\\Gamma^f$}\n\\label{sec:study-of-gamma}\n\nWe now turn to the study of the (weighted) responses. \nIn Section~\\ref{sec:gamma-computation}, we obtain an explicit expression for the average responses. \nWe show how to obtain closed-form expressions in the case of indicator functions in Section~\\ref{sec:gamma-computation-indicator}. \nIn the case of a linear model, we have to resort to approximations that are detailed in Section~\\ref{sec:gamma-computation-linear}. \nSection~\\ref{sec:concentration-gammahat} contains the concentration result for $\\Gammahat_n$. \n\n\n\\subsection{Computation of $\\Gamma^f$}\n\\label{sec:gamma-computation}\n\nWe start our study by giving an expression for $\\Gamma^f$ for any $f$ under mild assumptions. \nRecall that we defined $\\Gammahat_n=\\frac{1}{n}Z^\\top W y$, where $y\\in\\Reals^{n}$ is the random vector defined coordinate-wise by $y_i=f(\\normtfidf{x_i})$. \nFrom the definition of $\\Gammahat_n$, it is straightforward that\n\\[ \n\\Gammahat_n =\n\\begin{pmatrix}\n\\frac{1}{n}\\sum_{i=1}^n \\pi_{i}f(\\normtfidf{x_i}) \\\\ \n\\frac{1}{n}\\sum_{i=1}^{n} \\pi_{i}{z_{i,1}}f(\\normtfidf{x_i})\\\\ \n\\vdots\\\\ \n\\frac{1}{n}\\sum_{i=1}^{n} \\pi_{i}{z_{i,d}}f(\\normtfidf{x_i}) \\\\ \n\\end{pmatrix}\n\\in\\Reals^{d+1}\n\\, .\n\\]\nAs a consequence, since we defined $\\Gamma^f=\\smallexpec{\\Gammahat_n}$, it holds that\n\\begin{equation}\n\\label{eq:def-gamma}\n\\Gamma^f = \n\\begin{pmatrix}\n\\expec{ \\pi f(\\normtfidf{x}) } \\\\ \n\\expec{\\pi z_1 f(\\normtfidf{x})}\\\\ \n\\vdots\\\\ \n\\expec{\\pi z_d f(\\normtfidf{x})} \\\\ \n\\end{pmatrix}\n\\, .\n\\end{equation}\nOf course, Eq.~\\eqref{eq:def-gamma} depends on the model $f$. \nThese computations can be challenging. \nNevertheless, it is possible to obtain exact results in simple situations. \n\n\\paragraph{Constant model. }\nAs a warm-up, let us show how to compute $\\Gamma^f$ when $f$ is constant, perhaps the simplest model of all: $f$ always returns the same value, whatever the value of $\\normtfidf{x}$ may be. \nBy linearity of $\\Gamma^f$ (see Section~3.2 of the main paper), it is sufficient to consider the case $f=1$. \nFrom Eq.~\\eqref{eq:def-gamma}, we see that \n\\[\n\\Gamma^f_j =\n\\begin{cases}\n\\expec{\\pi} &\\text{ if } j = 0, \\\\ \n\\expec{\\pi z_j} &\\text{ otherwise. 
}\n\\end{cases}\n\\]\nWe recognize the definitions of the $\\alpha$ coefficients, and, more precisely, $\\Gamma^f_0=\\alpha_0$ and $\\Gamma^f_j=\\alpha_1$ if $j\\geq 1$. \n\n\n\\subsection{Indicator functions}\n\\label{sec:gamma-computation-indicator}\n\nLet us turn to a slightly more complicated class of models: indicator functions, or rather products of indicator functions. \nAs explained in the paper, these functions fall into our framework.\nWe have the following result: \n\n\\begin{proposition}[Computation of $\\Gamma^f$, product of indicator functions]\n\\label{prop:gamma-indicator-general}\nLet $J\\subseteq \\{1,\\ldots,d\\}$ be a set of $p$ distinct indices. \nDefine \n\\[\nf(\\normtfidf{x}) \\defeq \\prod_{j\\in J} \\indic{\\normtfidf{x}_j > 0}\n\\, .\n\\]\nThen it holds that\n\\[\n\\Gamma_\\ell^f = \n\\begin{cases}\n\\alpha_p & \\text{ if } \\ell \\in \\{0\\}\\cup J \\\\\n\\alpha_{p+1} & \\text{ otherwise.}\n\\end{cases}\n\\]\n\\end{proposition}\n\n\\begin{proof}\nAs noticed in the paper, $f$ can be written as a product of $z_j$s. \nTherefore, we only have to compute\n\\[\n\\Expec\\biggl[\\pi \\prod_{j\\in J}z_j\\biggr] \\quad \\text{ and } \\quad \\Expec\\biggl[\\pi z_k \\prod_{j\\in J}z_j\\biggr]\n\\, ,\n\\]\nfor any $1\\leq k\\leq d$. \nThe first term is $\\alpha_p$ by definition. \nFor the second term, we notice that if $k \\in J$, then two terms are identical in the product of binary features, and we recognize the definition of $\\alpha_p$. \nIn all other cases, there is no cancellation and we recover the definition of $\\alpha_{p+1}$. \n\\end{proof}\n\n\\subsection{Linear model}\n\\label{sec:gamma-computation-linear}\n\nWe now consider a linear model, that is, \n\\begin{equation}\n\\label{eq:def-linear-model}\nf(\\normtfidf{x}) \\defeq \\sum_{j=1}^d \\lambda_j \\normtfidf{x}_j\n\\, ,\n\\end{equation}\nwhere $\\lambda_1,\\ldots,\\lambda_d$ are arbitrary fixed coefficients. \nIn order to simplify the computations, we will consider that $\\nu \\to +\\infty$ in this section. \nIn that case, $\\pi \\cvas 1$. \nIt is clear that $f$ is bounded on $\\sphere{D-1}$, thus, by dominated convergence, \n\\begin{equation}\n\\label{eq:def-gamma-infty}\n\\Gamma^f \\longrightarrow \\Gammainf \\defeq \n\\begin{pmatrix}\n\\expec{f(\\normtfidf{x})} \\\\\n\\expec{z_1 f(\\normtfidf{x})} \\\\\n\\vdots \\\\\n\\expec{z_d f(\\normtfidf{x})}\n\\end{pmatrix}\n\\in\\Reals^{d+1}\n\\, .\n\\end{equation}\n\nBy linearity of $f\\mapsto \\Gamma^f_{\\infty}$, it is sufficient to compute $\\expec{\\normtfidf{x}_j}$ and $\\expec{z_k \\normtfidf{x}_j}$ for any $1\\leq j,k\\leq d$.\n\nFor any $1\\leq j\\leq d$, recall that we defined \n\\[\n\\omega_j = \\frac{m_j^2v_j^2}{\\sum_{k=1}^d m_k^2v_k^2}\n\\, ,\n\\]\nand $H_S \\defeq \\sum_{k \\in S}\\omega_k$, where $S$ is the random subset of indices chosen by LIME. \nThe motivation for the definition of the random variable $H_S$ is the following proposition: it is possible to write the expected TF-IDF as an expression depending on $H_S$. \n\n\\begin{proposition}[Expected normalized TF-IDF]\n\\label{prop:expected-tfidf}\nLet $\\word_j$ be a fixed word of $\\xi$. 
\nThen, it holds that \n\\begin{equation}\n\\label{eq:expected-tfidf}\n\\expec{\\normtfidf{x}_j} = \\expec{z_j\\normtfidf{x}_j} = \\frac{d-1}{2d}\\cdot \\normtfidf{\\xi}_j \\cdot \\condexpec{\\frac{1}{\\sqrt{1 - H_S}}}{S\\not\\ni j}\n\\, ,\n\\end{equation}\nand, for any $k\\neq j$, \n\\begin{equation}\n\\label{eq:expected-tfidf-z}\n\\expec{z_k\\normtfidf{x}_j} = \\frac{d-2}{3d} \\cdot \\normtfidf{\\xi}_j \\cdot \\condexpec{\\frac{1}{\\sqrt{1-H_S}}}{S\\not\\ni j,k}\n\\, .\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nWe start by proving Eq~\\eqref{eq:expected-tfidf}. \nLet us split the expectation depending on $\\word_j\\in x$. \nSince the term frequency is $0$ if $\\word_j\\notin x$, we have\n\\begin{equation}\n\\label{eq:tfidf-aux-1}\n\\expec{\\normtfidf{x}_j} = \\condexpec{\\normtfidf{x}_j}{w_j\\in x} \\proba{w_j\\in x}\n\\, .\n\\end{equation}\nLemma~\\ref{lemma:proba-containing} gives us the value of $\\proba{w_j\\in x}$. \nLet us focus on the TF-IDF term in Eq.~\\eqref{eq:tfidf-aux-1}. \nBy definition, it is the product of the term frequency and the inverse document frequency, normalized. \nSince the latter does not change when words are removed from $\\xi$, only the norm changes: we have to remove all terms indexed by $S$. \nFor any $1\\leq j\\leq d$, let us set $m_j$ (resp. $v_j$) the term frequency (resp. the inverse term frequency) of $\\word_j$ \nConditionally to $\\{\\word_j\\in x\\}$,\n\\[\n\\normtfidf{x}_j = \\frac{m_j v_j}{\\sqrt{\\sum_{k \\notin S} m_k^2 v_k^2}}\n\\, .\n\\]\nLet us factor out $\\normtfidf{\\xi}_j$ in the previous display. \nBy definition of $H_S$, we have\n\\[\n\\normtfidf{x}_j = \\normtfidf{\\xi}_j \\cdot \\frac{1}{\\sqrt{1 - \\sum_{k\\in S} \\frac{m_k^2v_k^2}{\\norm{\\tfidf{\\xi}}^2}}} = \\normtfidf{\\xi}_j \\cdot \\frac{1}{\\sqrt{1-H_S}}\n\\, .\n\\]\nSince $\\{w_j\\in x\\}$ is equivalent to $\\{j\\notin S\\}$ by construction, we can conclude. \nThe proof of the second statement is similar; one just has to condition with respect to $\\{w_j,w_k\\in x\\}$ instead, which is equivalent to $\\{S\\not\\ni j,k\\}$. \n\\end{proof}\n\nAs a direct consequence of Proposition~\\ref{prop:expected-tfidf}, we can derive $\\Gamma_{\\infty}^f=\\lim_{\\nu\\to +\\infty}\\Gamma^f$ when $f:x\\mapsto x_j$. \nRecall that we set $E_j = \\condexpec{(1-H_S)^{-1\/2}}{S\\not\\ni j}$ and $E_{j,k} = \\condexpec{(1-H_S)^{-1\/2}}{S\\not\\ni j,k}$. \nThen\n\\begin{equation}\n\\label{eq:gamma-computation-linear}\n\\left(\\Gamma_{\\infty}^f\\right)_k = \n\\begin{cases}\n\\left(\\frac{1}{2}-\\frac{1}{2d}\\right) \\cdot E_j \\cdot \\normtfidf{\\xi}_j &\\text{ if } k=0 \\text{ or } k=j, \\\\\n\\left(\\frac{1}{3}-\\frac{2}{3d}\\right) \\cdot E_{j,k} \\cdot \\normtfidf{\\xi}_j &\\text{ otherwise.}\n\\end{cases}\n\\end{equation}\n\nIn practice, the expectation computations required to evaluate $E_j$ and $E_{j,k}$ are not tractable as soon as $d$ is large. \nIndeed, in that case, the law of $H_S$ is unknown and approximating the expectation by Monte-Carlo methods requires is hard since one has to sum over all subsets and there are $\\bigo{2^d}$ subsets $S$ such that $S\\subseteq \\{1,\\ldots,d\\}$. \nTherefore we resort to approximate expressions for these expected values computations. \n\nWe start by writing\n\\begin{equation}\n\\label{eq:main-approx-expec}\n\\expec{\\frac{1}{\\sqrt{1-X}}} \\approx \\frac{1}{\\sqrt{1-\\expec{X}}}\n\\, .\n\\end{equation}\nAll that is left to compute will be $\\condexpec{H_S}{S\\not\\ni j}$ and $\\condexpec{H_S}{S\\not\\ni j,k}$. 
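\n\nBefore turning to these conditional expectations, let us illustrate numerically the quality of the plug-in approximation of Eq.~\\eqref{eq:main-approx-expec}. The following Python sketch (the $\\omega_k$ below are arbitrary weights summing to one, chosen purely for illustration) estimates $E_j$ by Monte Carlo over the sampling of $S$ and compares it with the plug-in value:\n\\begin{verbatim}\nimport numpy as np\nrng = np.random.default_rng(0)\n\nd, j, n_mc = 30, 0, 50_000\nw = rng.random(d)\nomega = w / w.sum()            # arbitrary omega_k summing to one (illustration only)\n\nvals, hs = [], []\nwhile len(vals) < n_mc:\n    s = rng.integers(1, d + 1)\n    S = rng.choice(d, size=s, replace=False)\n    if j in S:                 # condition on S not containing j\n        continue\n    H = omega[S].sum()\n    hs.append(H)\n    vals.append(1.0 / np.sqrt(1.0 - H))\n\nE_j = np.mean(vals)                           # Monte Carlo estimate of E_j\nplug_in = 1.0 / np.sqrt(1.0 - np.mean(hs))    # plug-in approximation\nprint(E_j, plug_in)\n\\end{verbatim}\nThe plug-in value is a slight underestimate of the Monte Carlo estimate, as expected from Jensen's inequality (see the remark below), but both are of the same order.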
\nWe see in Section~\\ref{sec:subsets-sums} that after some combinatorial considerations, it is possible to obtain these expected values as a function of $\\omega_j$ and $\\omega_k$. \nMore precisely, Lemma~\\ref{lemma:expectation-computation} states that\n\\begin{equation}\n\\label{eq:approx-expec-hs}\n\\condexpec{H_S}{S\\not\\ni j} = \\frac{1-\\omega_j}{3} + \\bigo{\\frac{1}{d}}\n\\quad \\text{ and }\\quad \n\\condexpec{H_S}{S\\not\\ni j,k} = \\frac{1-\\omega_j-\\omega_k}{4} + \\bigo{\\frac{1}{d}}\n\\, .\n\\end{equation}\nWhen $d$ is large and the $\\omega_k$s are small, using Eq.~\\eqref{eq:main-approx-expec}, we obtain the following approximations:\n\\begin{equation}\n\\label{eq:approx-norm-tfidf}\n\\expec{\\normtfidf{x}_j} \\approx \\frac{1}{2}\\cdot \\sqrt{\\frac{1}{1-\\frac{1}{3}}} \\cdot \\normtfidf{\\xi}_j \\approx 0.61 \\cdot \\normtfidf{\\xi}_j\n\\, ,\n\\end{equation}\nand, for any $k\\neq j$, \n\\begin{equation}\n\\label{eq:approx-norm-tfidf-2}\n\\expec{z_k\\normtfidf{x}_j} \\approx \\frac{1}{3}\\cdot \\sqrt{\\frac{1}{1-\\frac{1}{4}}}\\cdot \\normtfidf{\\xi}_j \\approx 0.38 \\cdot \\normtfidf{\\xi}_j\n\\, .\n\\end{equation}\nFor all practical purposes, we will use Eqs.~\\eqref{eq:approx-norm-tfidf} and~\\eqref{eq:approx-norm-tfidf-2}. \n\n\\begin{remark}\nOne could obtain better approximations than above in two ways. \nFirst, it is possible to take into account the dependency on $\\omega_j$ and $\\omega_k$ in the expectation of $H_S$. \nThat is, plugging Eq.~\\eqref{eq:approx-expec-hs} into Eq.~\\eqref{eq:main-approx-expec} instead of the numerical values $1\/3$ and $1\/4$. \nThis yields more accurate, but more complicated formulas. \nWithout being so precise, it is also possible to consider an arbitrary distribution for the $\\omega_k$s (for instance, assuming that the term frequencies follow Zipf's law \\citep{powers_1998}). \nSecond, since the mapping $\\theta : x\\mapsto \\frac{1}{\\sqrt{1-x}}$ is convex, by Jensen's inequality, we are always \\emph{underestimating} by considering $\\theta(\\expec{X})$ instead of $\\expec{\\theta(X)}$. \nGoing further in the Taylor expansion of $\\theta$ is a way to fix this problem, namely using\n\\[\n\\expec{\\frac{1}{\\sqrt{1-X}}} \\approx \\frac{1}{\\sqrt{1-\\expec{X}}} + \\frac{3\\var{X}}{8(1-\\expec{X})^{5\/2}}\n\\, ,\n\\]\ninstead of Eq.~\\eqref{eq:main-approx-expec}.\nWe found that \\textbf{it was not useful to do so from an experimental point of view:} our theoretical predictions match the experimental results while remaining simple enough. \n\\end{remark}\n\n\n\\subsection{Concentration of $\\Gammahat_n$}\n\\label{sec:concentration-gammahat}\n\nWe now show that $\\Gammahat_n$ is concentrated around $\\Gamma^f$. \nSince the expression of $\\Gammahat_n$ is the same as in the tabular case, and since $f$ is bounded on the unit sphere $\\sphere{D-1}$, the same reasoning as in the proof of Proposition~24 in \\citet{garreau_luxburg_2020_arxiv} can be applied. \n\n\\begin{proposition}[Concentration of $\\Gammahat_n$]\n\\label{prop:concentration-gammahat}\nAssume that $f$ is bounded by $M>0$ on $\\sphere{D-1}$. \nThen, for any $t>0$, it holds that \n\\[\n\\proba{\\smallnorm{\\Gammahat_n - \\Gamma^f} \\geq t} \\leq 4d \\exp{\\frac{-nt^2}{32Md^2}}\n\\, .\n\\]\n\\end{proposition}\n\n\\begin{proof}\nRecall that $\\norm{\\normtfidf{x}}=1$ almost surely. \nSince $f$ is bounded by $M$ on $\\sphere{D-1}$, it holds that $\\abs{f(\\normtfidf{x})}\\leq M$ almost surely. 
\nWe can then proceed as in the proof of Proposition~24 in \\citet{garreau_luxburg_2020_arxiv}. \n\\end{proof}\n\n\\section{The study of $\\beta^f$}\n\\label{sec:study-of-beta}\n\nIn this section, we study the interpretable coefficients. \nWe start with the computation of $\\beta^f$ in Section~\\ref{sec:beta-computation}. \nIn Section~\\ref{sec:betahat-concentration}, we show how $\\betahat_n$ concentrates around $\\beta^f$. \n\n\\subsection{Computation of $\\beta^f$}\n\\label{sec:beta-computation}\n\nRecall that, for any model $f$, we have defined $\\beta^f = \\Sigma^{-1}\\Gamma^f$. \nDirectly multiplying the expressions found for $\\Sigma^{-1}$ (Eq.~\\eqref{eq:sigma-inverse-computation}) and $\\Gamma^f$ (Eq.~\\eqref{eq:def-gamma}) obtained in the previous sections, we obtain the expression of $\\beta^f$ in the general case (this is Proposition~2 in the paper). \n\n\\begin{proposition}[Computation of $\\beta^f$, general case]\n\\label{prop:beta-computation-general}\nAssume that $f$ is bounded on the unit sphere. \nThen \n\\begin{equation}\n\\label{eq:beta-computation-intercept}\n\\beta^f_0 = c^{-1}_d\\biggl\\{\\sigma_0\\expec{\\pi f(\\normtfidf{x})} + \\sigma_1\\sum_{k=1}^d \\expec{\\pi z_k f(\\normtfidf{x})}\\biggr\\}\n\\, ,\n\\end{equation}\nand, for any $1\\leq j\\leq d$, \n\\begin{equation}\n\\label{eq:beta-computation-general}\n\\beta^f_j = \nc^{-1}_d\\biggl\\{\\sigma_1 \\expec{\\pi f(\\normtfidf{x})} + \\sigma_2 \\expec{\\pi z_j f(\\normtfidf{x})} + \\sigma_3 \\sum_{\\substack{k=1 \\\\ k\\neq j}}^d \\expec{\\pi z_k f(\\normtfidf{x})}\\biggr\\}\n\\, .\n\\end{equation}\n\\end{proposition}\n\nThis is Proposition~2 in the paper, with the additional expression of the intercept $\\beta_0^f$. \nLet us see how to obtain an approximate, simple expression when both the bandwidth parameter and the size of the local dictionary are large. \nWhen $\\nu\\to +\\infty$, using Corollary~\\ref{cor:approximate-sigma-inverse}, we find that \n\\[\n\\beta_0^f \\longrightarrow \\left(\\betainf^f\\right)_0\\defeq \\frac{4d-2}{d+1}\\expec{\\pi f(\\normtfidf{x})} - \\frac{6}{d+1}\\sum_{k=1}^d \\expec{\\pi z_k f(\\normtfidf{x})}\n\\, ,\n\\]\nand, for any $1\\leq j\\leq d$,\n\\[\n\\beta_j^f \\longrightarrow \\left(\\betainf^f\\right)_j \\defeq \\frac{-6}{d+1}\\expec{\\pi f(\\normtfidf{x})} + \\frac{6(d^2-2d+3)}{d^2-1}\\expec{\\pi z_j f(\\normtfidf{x})} - \\frac{6(d-3)}{d^2-1}\\sum_{k\\neq j} \\expec{\\pi z_k f(\\normtfidf{x})}\n\\, .\n\\]\nFor large $d$, since $f$ is bounded on $\\sphere{D-1}$, we find that\n\\[\n\\left(\\betainf^f\\right)_0 = 4\\expec{\\pi f(\\normtfidf{x})} - \\frac{6}{d}\\sum_{k=1}^d \\expec{\\pi z_k f(\\normtfidf{x})} + \\bigo{\\frac{1}{d}}\n\\, ,\n\\]\nand, for any $1\\leq j\\leq d$,\n\\[\n\\left(\\betainf^f\\right)_j = 6\\expec{\\pi z_j f(\\normtfidf{x})} - \\frac{6}{d}\\sum_{k\\neq j} \\expec{\\pi z_k f(\\normtfidf{x})} + \\bigo{\\frac{1}{d}}\n\\, .\n\\]\nNow, by definition of the interpretable features, for any $1\\leq j\\leq d$, \n\\begin{align*}\n\\expec{\\pi z_j f(\\normtfidf{x})} &= \\condexpec{\\pi z_j f(\\normtfidf{x})}{\\word_j \\in x} \\cdot \\proba{\\word_j \\in x} + \\condexpec{\\pi z_j f(\\normtfidf{x})}{\\word_j \\notin x} \\cdot \\proba{\\word_j \\notin x} \\\\\n&= \\condexpec{\\pi f(\\normtfidf{x})}{\\word_j \\in x}\\cdot \\frac{d-1}{2d} + 0\n\\, ,\n\\end{align*}\nwhere we used Lemma~\\ref{lemma:proba-containing} in the last display. 
\nTherefore, we have the following approximations of the interpretable coefficients: \n\\begin{equation}\n\\label{eq:betainf-simplified-intercept}\n\\left(\\betainf^f\\right)_0 = 4\\expec{\\pi f(\\normtfidf{x})} - \\frac{3}{d}\\sum_k \\condexpec{\\pi f(\\normtfidf{x})}{\\word_k\\in x} + \\bigo{\\frac{1}{d}}\n\\, ,\n\\end{equation}\nand, for any $1\\leq j\\leq d$,\n\\begin{equation}\n\\label{eq:betainf-simplified}\n\\left(\\betainf^f\\right)_j = 3\\condexpec{\\pi f(\\normtfidf{x})}{\\word_j\\in x} - \\frac{3}{d}\\sum_k \\condexpec{\\pi f(\\normtfidf{x})}{\\word_k\\in x} + \\bigo{\\frac{1}{d}}\n\\, .\n\\end{equation}\nThe last display is the approximation of Proposition~\\ref{prop:beta-computation-general} presented in the paper. \n\n\\begin{remark}\nIn \\citet{garreau_luxburg_2020_arxiv}, it is noted that LIME for tabular data provably ignores unused coordinates.\nIn other words, if the model $f$ does not depend on coordinate $j$, then the explanation $\\beta^f_j$ is $0$. \nWe could not prove such a statement in the case of text data, even for simplified expressions such as Eq.~\\eqref{eq:betainf-simplified}. \n\\end{remark}\n\nWe now show how to compute~$\\beta^f$ in specific cases, thus returning to generic $\\nu$ and $d$.\n\n\\paragraph{Constant model. }\nAs a warm-up exercise, let us assume that $f$ is a constant, which we set to $1$ without loss of generality (by linearity). \nRecall that, in that case, $\\Gamma^f_0=\\alpha_0$ and $\\Gamma^f_j=\\alpha_1$ for any $1\\leq j\\leq d$. \nFrom the definition of $c_d$ and the $\\sigma$ coefficients (Proposition~\\ref{prop:sigma-inverse-computation}), we find that \n\\[\n\\begin{cases}\n\\sigma_0 \\alpha_0 + d\\sigma_1\\alpha_1 &= c_d \\, ,\\\\\n\\sigma_1\\alpha_0 + \\sigma_2\\alpha_1 + (d-1)\\sigma_3\\alpha_1 &= 0 \n\\, .\n\\end{cases}\n\\]\nWe deduce from Proposition~\\ref{prop:beta-computation-general} that $\\beta^f_0=1$ and $\\beta^f_j=0$ for any $1\\leq j\\leq d$. \nThis conforms to our intuition: if the model is constant, then no word should receive nonzero weight in the explanation provided by Text LIME. \n\n\\paragraph{Indicator functions. }\nWe now turn to indicator functions, more precisely \\emph{products} of indicator functions. \nWe will prove the following (Proposition~3 in the paper):\n\n\\begin{proposition}[Computation of $\\beta^f$, product of indicator functions]\n\\label{prop:beta-computation-indicator-product-general}\nLet $J\\subseteq \\{1,\\ldots,d\\}$ be a set of $p$ distinct indices and set $f(x) = \\prod_{j\\in J}\\indic{x_j>0}$. \nThen \n\\[\n\\begin{cases}\n\\beta_0^f &= c_d^{-1}\\left(\\sigma_0\\alpha_p+p\\sigma_1\\alpha_p+(d-p)\\sigma_1\\alpha_{p+1}\\right)\\, , \\\\\n\\beta_j^f &= c_d^{-1}\\left(\\sigma_1\\alpha_p + \\sigma_2\\alpha_p + (d-p)\\sigma_3\\alpha_{p+1} + (p-1)\\sigma_3\\alpha_p\\right) \\text{ if }j \\in J\\, ,\\\\\n\\beta_j^f &= c_d^{-1}\\left(\\sigma_1\\alpha_p+\\sigma_2\\alpha_{p+1}+(d-p-1)\\sigma_3\\alpha_{p+1} + p\\sigma_3\\alpha_p \\right) \\text{ otherwise}\n\\, .\n\\end{cases}\n\\]\n\\end{proposition}\n\n\n\\begin{proof}\nThe proof is straightforward from Proposition~\\ref{prop:gamma-indicator-general} and Proposition~\\ref{prop:beta-computation-general}. \n\\end{proof}\n\n\\paragraph{Linear model. }\nIn this last paragraph, we treat the linear case. 
\nAs noted in Section~\\ref{sec:gamma-computation-linear}, we have to resort to approximate computations: in this paragraph, we assume that $\\nu = +\\infty$. \nWe start with the simplest linear function: all coefficients are zero except one (this is Proposition~4 in the paper). \n\n\\begin{proposition}[Computation of $\\beta^f$, linear case]\n\\label{prop:beta-computation-linear}\nLet $1\\leq j\\leq d$ and assume that $f(\\normtfidf{x})=\\normtfidf{x}_j$. \nRecall that we set $E_j= \\condexpec{(1-H_S)^{-1\/2}}{S\\not\\ni j}$ and, for any $k\\neq j$, $E_{j,k} = \\condexpec{(1-H_S)^{-1\/2}}{S\\not\\ni j,k}$. \nThen \n\\begin{equation*}\n\\left(\\beta_\\infty^f\\right)_0 =\n\\left\\{2 E_j -\\frac{2}{d} \\sum_{k \\neq j}E_{j,k}\\right\\} \\normtfidf{\\xi}_j + \\bigo{\\frac{1}{d}}\n\\, ,\n\\end{equation*}\nand, for any $k\\neq j$, \n\\begin{equation*}\n\\left(\\beta_\\infty^f\\right)_k = \n\\left\\{2E_{j,k} -\\frac{2}{d}\\sum_{\\ell \\neq k,j}E_{j,\\ell} \\right\\}\\normtfidf{\\xi}_j + \\bigo{\\frac{1}{d}}\n\\, ,\n\\end{equation*}\nand\n\\begin{equation*}\n\\left(\\beta_\\infty^f\\right)_j =\n\\left\\{3E_j -\\frac{2}{d} \\sum_{k \\neq j}E_{j,k}\\right\\} \\normtfidf{\\xi}_j + \\bigo{\\frac{1}{d}}\n\\, .\n\\end{equation*}\n\\end{proposition}\n\n\\begin{proof}\nStraightforward from Eqs.~\\eqref{eq:def-sigma-infty} and~\\eqref{eq:gamma-computation-linear}. \n\\end{proof}\n\nAssuming that the $\\omega_k$ are small, we deduce from Eqs.~\\eqref{eq:approx-norm-tfidf} and~\\eqref{eq:approx-norm-tfidf-2} that $E_j \\approx 1.22$ and $E_{j,k}\\approx 1.15$. \nIn particular, \\emph{they do not depend on $j$ and $k$.} \nThus we can drastically simplify the statement of Proposition~\\ref{prop:beta-computation-linear}:\n\\begin{equation}\n\\label{eq:simplified-betainf-linear-1}\n\\forall k\\neq j,\\quad \\left(\\beta_\\infty^f\\right)_k \\approx 0\n\\quad \\text{ and } \\left(\\beta_\\infty^f\\right)_j \\approx 1.36 \\normtfidf{\\xi}_j\n\\, .\n\\end{equation}\nWe can now go back to our original goal: $f(x) = \\sum_{j=1}^{d}\\lambda_j x_j$. \nBy linearity, we deduce from Eq.~\\eqref{eq:simplified-betainf-linear-1} that \n\\begin{equation}\n\\label{eq:simplified-betainf-linear}\n\\forall 1\\leq j\\leq d, \\quad \\left(\\beta_\\infty^f\\right)_j \\approx 1.36 \\cdot \\lambda_j \\cdot \\normtfidf{\\xi}_j\n\\, .\n\\end{equation}\nIn other words, as noted in the paper, \\textbf{the explanation for a linear~$f$ is the TF-IDF of the word multiplied by the coefficient of the linear model,} up to a numerical constant and small error terms depending on~$d$.\n\n\\subsection{Concentration of $\\betahat$}\n\\label{sec:betahat-concentration}\n\nIn this section, we state and prove our main result: the concentration of $\\betahat_n$ around $\\beta^f$ with high probability (this is Theorem~1 in the paper). \n\n\\begin{theorem}[Concentration of $\\betahat_n$]\n\\label{th:betahat-concentration}\nSuppose that $f$ is bounded by $M>0$ on $\\sphere{D-1}$. \nLet $\\epsilon >0$ be a small constant, at least smaller than $M$. \nLet $\\eta\\in (0,1)$. \nThen, for every \n\\[\nn\\geq \\max \\left\\{2^9\\cdot 70^4 M^2d^{9} \\exps{\\frac{10}{\\nu^2}}, 2^9\\cdot 70^2 Md^5\\exps{\\frac{5}{\\nu^2}}\\right\\} \\frac{\\log \\frac{8d}{\\eta}}{\\epsilon^2}\n\\, ,\n\\]\nwe have $\\proba{\\smallnorm{\\betahat_n - \\beta^f} \\geq \\epsilon} \\leq \\eta$. \n\\end{theorem}\n\n\\begin{proof}\nWe follow the proof scheme of Theorem~28 in \\citet{garreau_luxburg_2020_arxiv}. 
\nThe key point is to notice that \n\\begin{equation}\n\\label{eq:binding-lemma}\n\\smallnorm{\\betahat_n-\\beta^f} \\leq 2\\opnorm{\\Sigma^{-1}} \\smallnorm{\\Gammahat_n-\\Gamma^f} + 2\\opnorm{\\Sigma^{-1}}^2 \\norm{\\Gamma^f}\\smallopnorm{\\Sigmahat_n-\\Sigma} \n\\, ,\n\\end{equation}\nprovided that $\\smallopnorm{\\Sigma^{-1}(\\Sigmahat_n-\\Sigma)}\\leq 0.32$ (this is Lemma~27 in \\citet{garreau_luxburg_2020_arxiv}). \nTherefore, in order to show that $\\smallnorm{\\betahat_n-\\beta^f}\\leq \\epsilon$, it suffices to show that each term in Eq.~\\eqref{eq:binding-lemma} is smaller than $\\epsilon\/4$ and that $\\smallopnorm{\\Sigma^{-1}(\\Sigmahat-\\Sigma)}\\leq 0.32$. \nThe concentration results obtained in Sections~\\ref{sec:study-of-sigma} and~\\ref{sec:study-of-gamma} guarantee that both $\\smallopnorm{\\Sigmahat-\\Sigma}$ and $\\smallnorm{\\Gammahat-\\Gamma^f}$ are small if $n$ is large enough, with high probability. \nThis, combined with the upper bound on $\\smallopnorm{\\Sigma^{-1}}$ given by Proposition~\\ref{prop:opnorm-control}, concludes the proof. \n\nLet us give a bit more detail. \nWe start with the control of $\\smallopnorm{\\Sigma^{-1}(\\Sigmahat_n-\\Sigma)}$. \nSet $t_1\\defeq (220 d^{3\/2}\\exps{\\frac{5}{2\\nu^2}})^{-1}$ and $n_1\\defeq 32d^2\\log \\frac{8d}{\\eta} \/ t_1^2$. \nThen, according to Proposition~\\ref{prop:sigmahat-concentration}, for any $n\\geq n_1$, \n\\[\n\\proba{\\smallopnorm{\\Sigmahat_n-\\Sigma} \\geq t_1} \\leq 4d\\exp{\\frac{-nt_1^2}{32d^2}} \\leq \\frac{\\eta}{2}\n\\, .\n\\]\nSince $\\smallopnorm{\\Sigma^{-1}}\\leq 70 d^{3\/2}\\exps{\\frac{5}{2\\nu^2}}$ (according to Proposition~\\ref{prop:opnorm-control}), by sub-multiplicativity of the operator norm, it holds that\n\\begin{equation}\n\\label{eq:proof-main-aux-1}\n\\smallopnorm{\\Sigma^{-1}(\\Sigmahat-\\Sigma)} \\leq \\smallopnorm{\\Sigma^{-1}} \\smallopnorm{\\Sigmahat-\\Sigma} \\leq 70\/220 < 0.32\n\\, ,\n\\end{equation}\nwith probability greater than $1-\\eta\/2$. \n\nNow let us set $t_2\\defeq (4\\cdot 70^2 M d^{7\/2} \\exps{\\frac{5}{\\nu^2}})^{-1}\\epsilon$ and $n_2 \\defeq 32d^2 \\log \\frac{8d}{\\eta} \/ t_2^2$. \nAccording to Proposition~\\ref{prop:sigmahat-concentration}, for any $n\\geq n_2$, it holds that \n\\[\n\\smallopnorm{\\Sigmahat_n-\\Sigma} \\leq \\frac{\\epsilon}{4Md^{1\/2}} \\cdot (70^2 d^3 \\exps{5\/\\nu^2})^{-1}\n\\, ,\n\\]\nwith probability greater than $1-\\eta\/2$. \nSince $\\smallnorm{\\Gamma^f}\\leq M\\cdot d^{1\/2}$ and $\\smallopnorm{\\Sigma^{-1}}^2\\leq 70^2d^3\\exps{5\/\\nu^2}$, \n\\[\n\\opnorm{\\Sigma^{-1}}^2 \\norm{\\Gamma^f}\\smallopnorm{\\Sigmahat_n-\\Sigma} \\leq \\frac{\\epsilon}{4}\n\\]\nwith probability greater than $1-\\eta\/2$. \nNotice that, since we assumed $\\epsilon < M$, $t_2< t_1$, and thus Eq.~\\eqref{eq:proof-main-aux-1} also holds. \n\nFinally, let us set $t_3\\defeq \\epsilon \/ (4\\cdot 70 d^{3\/2}\\exps{\\frac{5}{2\\nu^2}})$ and $n_3\\defeq 32Md^2\\log \\frac{8d}{\\eta}\/t_3^2$. \nAccording to Proposition~\\ref{prop:concentration-gammahat}, for any $n\\geq n_3$, \n\\[\n\\proba{\\smallnorm{\\Gammahat_n-\\Gamma^f} \\geq t_3} \\leq 4d\\exp{\\frac{-nt_3^2}{32Md^2}} \\leq \\frac{\\eta}{2}\n\\, .\n\\]\nSince $\\smallopnorm{\\Sigma^{-1}}\\leq 70d^{3\/2}\\exps{\\frac{5}{2\\nu^2}}$, we deduce that \n\\[\n\\opnorm{\\Sigma^{-1}} \\smallnorm{\\Gammahat_n-\\Gamma^f} \\leq \\frac{\\epsilon}{4}\n\\, ,\n\\]\nwith probability greater than $1-\\eta\/2$. \nWe conclude by a union bound argument. 
\n\\end{proof}\n\n\\section{Sums over subsets}\n\\label{sec:subsets-sums}\n\nIn this section, independent from the rest, we collect technical facts about sums over subsets. \nMore particularly, we now consider arbitrary, fixed positive real numbers $\\omega_1,\\ldots,\\omega_d$ such that $\\sum_k \\omega_k = 1$. \nWe are interested in subsets $S$ of $\\{1,\\ldots,d\\}$. \nFor any such $S$, we define $H_S\\defeq \\sum_{k\\in S}\\omega_k$ the sum of the $\\omega_k$ coefficients over $S$. \nOur main goal in this section is to compute the expectation of $H_S$ conditionally to $S$ not containing a given index (or two given indices), which is the key quantity appearing in Proposition~\\ref{prop:beta-computation-linear}.\n\n\\begin{lemma}[First order subset sums]\n\\label{lemma:first-order-subset-sums}\nLet $1\\leq s\\leq d$ and $1\\leq j,k\\leq d$ with $j\\neq k$. \nThen \n\\[\n\\sum_{\\substack{\\card{S} = s \\\\ S\\not\\ni j}} H_S = \\binom{d-2}{s-1}(1-\\omega_j)\n\\, ,\n\\]\nand \n\\[\n\\sum_{\\substack{\\card{S} = s \\\\ S\\not\\ni j,k}} H_S = \\binom{d-3}{s-1}(1-\\omega_j-\\omega_k)\n\\, .\n\\]\n\\end{lemma}\n\n\\begin{proof}\nThe main idea of the proof is to rearrange the sum, summing over all indices and then counting how many subsets satisfy the condition. \nThat is, \n\\begin{align*}\n\\sum_{\\substack{\\card{S} = s \\\\ S \\ni j}} H_S &= \\sum_{k=1}^d \\omega_k \\cdot \\cardset{S \\text{ s.t. } j,k \\in S} \\\\\n&= \\sum_{k\\neq j} \\omega_k \\cdot \\binom{d-2}{s-2} + \\omega_j \\cdot \\binom{d-1}{s-1} \\\\\n&= \\binom{d-2}{s-2} + \\left[ \\binom{d-1}{s-1} - \\binom{d-2}{s-2} \\right]\\omega_j\n\\, . \n\\end{align*}\nWe conclude by using the binomial identity\n\\[\n\\binom{d-1}{s-1} - \\binom{d-2}{s-2} = \\binom{d-2}{s-1}\n\\, .\n\\]\nNotice that, in the previous derivation, we had to split the sum to account for the case $j=k$. \nThe proof of the second formula is similar. \n\\end{proof}\n\nLet us turn to expectation computation that are important to derive approximation in Section~\\ref{sec:gamma-computation-linear}.\nWe now see $S$ and $H_S$ as random variables. \nWe will denote by $\\expecunder{\\cdot}{s}$ the expectation conditionally to the event $\\{\\card{S}=s\\}$. \n\n\\begin{lemma}[Expectation computation]\n\\label{lemma:expectation-computation}\nLet $j,k$ be distinct elements of $\\{1,\\ldots,d\\}$. 
\nThen\n\\begin{equation}\n\\label{eq:subset-sum-expectation-computation}\n\\condexpec{H_S}{S\\not\\ni j} = \\frac{(1-\\omega_j)(d+1)}{3(d-1)} = \\frac{1-\\omega_j}{3} + \\bigo{\\frac{1}{d}}\n\\, ,\n\\end{equation}\nand\n\\begin{equation}\n\\label{eq:subset-sum-expectation-2}\n\\condexpec{H_S}{S\\not\\ni j,k} = \\frac{(1-\\omega_j-\\omega_k)(d+1)}{4(d-2)} = \\frac{1-\\omega_j-\\omega_k}{4} + \\bigo{\\frac{1}{d}}\n\\, .\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nBy the law of total expectation, we know that \n\\[\n\\condexpec{H_S}{S\\not\\ni j} = \\sum_{s=1}^{d} \\condexpecunder{H_S}{S\\not\\ni j}{s} \\cdot \\condproba{\\card{S}=s}{S\\not\\ni j}\n\\, .\n\\]\nWe first notice that, for any $s< d$, \n\\begin{align*}\n\\condproba{\\card{S}=s}{S\\not\\ni j} &= \\frac{\\condproba{S\\not\\ni j}{\\card{S} = s} \\proba{\\card{S}=s}}{\\proba{j\\notin S}} \\\\\n&= \\frac{\\binom{d-1}{s} \/ \\binom{d}{s}\\cdot \\frac{1}{d}}{\\frac{d-1}{2d}}\\\\\n\\condproba{\\card{S}=s}{S\\not\\ni j} &= \\frac{2(d-s)}{d(d-1)}\n\\, .\n\\end{align*}\nAccording to Lemma~\\ref{lemma:first-order-subset-sums}, for any $1\\leq s < d$, \n\\[\n\\sum_{\\substack{\\card{S} = s \\\\ S\\not\\ni j}} H_S = \\binom{d-2}{s-1}(1-\\omega_j)\n\\, .\n\\]\nMoreover, there are $\\binom{d-1}{s}$ such subsets. \nSince $\\binom{d-1}{s}^{-1}\\binom{d-2}{s-1}=\\frac{s}{d-1}$, we deduce that\n\\[\n\\condexpecunder{H_S}{S\\not\\ni j}{s} = \\frac{s}{d-1}(1-\\omega_j)\n\\, .\n\\]\nFinally, we write\n\\begin{align*}\n\\condexpec{H_S}{S\\not\\ni j} &= \\sum_{s=1}^{d-1} \\frac{s}{d-1}(1-\\omega_j) \\cdot \\frac{2(d-s)}{d(d-1)} \\\\\n&= (1-\\omega_j) \\cdot \\frac{2}{d(d-1)^2} \\sum_{s=1}^{d-1} s(d-s) \\\\\n\\condexpec{H_S}{S\\not\\ni j} &= \\frac{(d+1)(1-\\omega_j)}{3(d-1)}\n\\, .\n\\end{align*}\nThe second case is similar. \nOne just has to note that\n\\begin{align*}\n\\condproba{\\card{S}=s}{S\\not\\ni j,k} &= \\frac{\\condproba{S\\not\\ni j,k}{\\card{S}=s} \\proba{\\card{S}=s}}{\\proba{j,k \\notin S}} \\\\\n&= \\frac{3(d-s)(d-s-1)}{d(d-1)(d-2)} \\tag{Lemma~\\ref{lemma:proba-containing}}\n\\, .\n\\end{align*}\nThen we can conclude since \n\\[\n\\sum_{s=1}^{d-2} s(d-s)(d-s-1) = \\frac{(d-2)(d-1)d(d+1)}{12}\n\\, .\n\\]\n\\end{proof}\n\n\\section{Technical results}\n\\label{sec:technical}\n\nIn this section, we collect small probability computations that are ubiquitous in our derivations. \nWe start with the probability for a given word to be present in the new sample $x$, conditionally to $\\card{S} =s$. \n\n\\begin{lemma}[Conditional probability to contain given words]\n\\label{lemma:proba-containing-cond}\nLet $\\word_1,\\ldots,\\word_p$ be $p$ distinct words of $\\dl$. \nThen, for any $1\\leq s\\leq d$, \n\\[\n\\probaunder{\\word_1\\in x,\\ldots,\\word_p\\in x}{s} = \\frac{(d-s)(d-s-1)\\cdots (d-s-p+1)}{d(d-1)\\cdots (d-p+1)} = \\frac{(d-s)!}{(d-s-p)!}\\cdot \\frac{(d-p)!}{d!}\n\\, .\n\\]\n\\end{lemma}\n\nIn the proofs, we use extensively Lemma~\\ref{lemma:proba-containing-cond} for $p=1$ and $p=2$, that is,\n\\[\n\\probaunder{\\word_j\\in x}{s} = \\frac{d-s}{d} \n\\quad \\text{ and } \\quad \n\\probaunder{\\word_j \\in x, \\word_k \\in x}{s} = \\frac{(d-s)(d-s-1)}{d(d-1)}\n\\, ,\n\\]\nfor any $1\\leq j,k\\leq d$ with $j\\neq k$. \n\n\\begin{proof}\nWe prove the more general statement. \nConditionally to $\\card{S} =s$, the choice of $S$ is uniform among all subsets of $\\{1,\\ldots,d\\}$ of cardinality $s$. 
\nThere are $\\binom{d}{s}$ such subsets, and only $\\binom{d-p}{s}$ of them do not contain the indices corresponding to $\\word_1,\\ldots,\\word_p$.\n\\end{proof}\n\nWe have the following result, without conditioning on the cardinality of $S$: \n\n\\begin{lemma}[Probability to contain given words]\n\t\\label{lemma:proba-containing}\nLet $\\word_1,\\ldots,\\word_p$ be $p$ distinct words of $\\dl$. \n\tThen\n\t\\[\n\t\\proba{\\word_1,\\ldots,\\word_p\\in x} = \\frac{d-p}{(p+1)d}\n\t\\, .\n\t\\]\n\\end{lemma}\n\n\\begin{proof}\nBy the law of total expectation,\n\\begin{align*}\n\\proba{\\word_1,\\ldots,\\word_p\\in x} &= \\frac{1}{d}\\sum_{s=1}^d \\condproba{\\word_1,\\ldots,\\word_p\\in x}{s} \\\\\n&= \\frac{1}{d}\\sum_{s=1}^d \\frac{(d-s)!}{(d-s-p)!}\\cdot \\frac{(d-p)!}{d!}\n\\, ,\n\\end{align*}\nwhere we used Lemma~\\ref{lemma:proba-containing-cond} in the last display. \nBy the hockey-stick identity~\\citep{ross_1997}, we have\n\\[\n\\sum_{s=1}^d \\binom{d-s}{p} = \\sum_{s=p}^{d-1} \\binom{s}{p} = \\binom{d}{p+1}\n\\, .\n\\]\nWe deduce that \n\\begin{equation}\n\\label{eq:aux-limit-1}\n\\sum_{s=1}^d \\frac{(d-s)!}{(d-s-p)!} = \\frac{d!}{(p+1)\\cdot (d-p-1)!}\n\\, .\n\\end{equation}\nWe deduce that \n\\begin{align*}\n\\proba{\\word_1,\\ldots,\\word_p\\in x}\n&= \\frac{1}{d} \\frac{(d-p)!}{d!} \\sum_{s=1}^d \\frac{(d-s)!}{(d-s-p)!} \\\\\n&= \\frac{1}{d} \\frac{(d-p)!}{d!} \\frac{d!}{(p+1)\\cdot (d-p-1)!} \\tag{by Eq.~\\eqref{eq:aux-limit-1}} \\\\\n\\proba{\\word_1,\\ldots,\\word_p\\in x} &= \\frac{d-p}{(p+1)d}\n\\, .\n\\end{align*}\n\\end{proof}\n\n\\section{Additional experiments}\n\\label{sec:experiments}\n\nIn this section, we present additional experiments. \nWe collect the experiments related to decision trees in Section~\\ref{sec:add-exp-trees} and those related to linear models in Section~\\ref{sec:add-exp-linear}. \n\n\\paragraph{Setting.}\nAll the experiments presented here and in the paper are done on Yelp reviews (the data are publicly available at \\url{https:\/\/www.kaggle.com\/omkarsabnis\/yelp-reviews-dataset}). \nFor a given model $f$, the general mechanism of our experiments is the following. \nFor a given document $\\xi$ containing $d$ distinct words, we set a bandwidth parameter $\\nu$ and a number of new samples $n$. \nThen we run LIME $n_\\text{exp}$ times on $\\xi$, with no feature selection procedure (that is, all words belonging to the local dictionary receive an explanation). \nWe want to emphasize again that this is the only difference with the default implementation. \nUnless otherwise specified, the parameters of LIME are chosen by default, that is, $\\nu=0.25$ and $n=5000$.\nThe number of experiments $n_\\text{exp}$ is set to $100$. \nThe whisker boxes are obtained by collecting the empirical values of the $n_\\text{exp}$ runs of LIME: they give an indication as to the variability in explanations due to the sampling of new examples. \nGenerally, we report a subset of the interpretable coefficients, the other having near zero values. \n\nLet us explain briefly how to read these whisker boxes: to each word corresponds a whisker box containing all the $n_\\text{exp}$ values of interpretable coefficients provided by LIME ($\\betahat_j$ in our notation). \nThe horizontal dark lines mark the quartiles of these values, and the horizontal blue line is the median. 
\nOn top of these experimental results, we report with red crosses the values predicted by our analysis ($\\beta_j^f$ in our notation).\n\nThe Python code for all experiments is available at \\url{https:\/\/github.com\/dmardaoui\/lime_text_theory}.\nWe encourage the reader to try and run the experiments on other examples of the dataset and with other parameters. \n\n\\subsection{Decision trees}\n\\label{sec:add-exp-trees}\n\nIn this section, we present additional experiments for small decision trees. \nWe begin by investigating the influence of $\\nu$ and $n$ on the quality of our theoretical predictions. \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.21]{decision_tree_nu_005.pdf} \n\\hspace{0.5cm}\n\\includegraphics[scale=0.21]{decision_tree_nu_035.pdf}\n\\caption{\\label{fig:tree-bandwidth}Influence of the bandwidth on the explanation given for a small decision tree on a Yelp review ($n=5000,n_\\text{exp}=100$, $d=29$). \\emph{Left panel:} $\\nu=0.05$, \\emph{right panel:} $\\nu=0.35$. Our theoretical predictions remain accurate for non-default bandwidths.}\n\\end{figure}\n\n\\paragraph{Influence of the bandwidth.}\nLet us consider the same example $\\xi$ and decision tree as in the paper. \nIn particular, the model $f$ is written as \n\\[\n\\indic{\\text{``food''}} + (1-\\indic{\\text{``food''}}) \\cdot \\indic{\\text{``about''}} \\cdot \\indic{\\text{``Everything''}}\n\\, .\n\\]\nWe now consider non-default bandwidths, that is, bandwidths different than $0.25$. \nWe present in Figure~\\ref{fig:tree-bandwidth} the results of these experiments. \nIn the left panel, we took a smaller bandwidth ($\\nu=0.05$) and in the right panel a larger bandwidth ($\\nu=0.35$). \nWe see that while the numerical value of the coefficients changes slightly, their relative order is preserved. \nMoreover, our theoretical predictions remain accurate in that case, which is to be expected since we did not resort to any approximation in this case. \nInterestingly, the empirical results for small $\\nu$ seem more spread out, as hinted by Theorem~\\ref{th:betahat-concentration}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.21]{decision_tree_simple_n_50.pdf} \n\\hspace{0.5cm}\n\\includegraphics[scale=0.21]{decision_tree_simple_n_8000.pdf}\n\\caption{\\label{fig:tree-nsample}Influence of the number of perturbed samples on the explanation given for a small decision tree on a Yelp review ($\\nu=0.25,n_\\text{exp}=100,d=29$). \\emph{Left panel:} $n=50$, \\emph{right panel:} $n=8000$. Empirical values are less likely to be close to the theoretical predictions for small $n$.}\n\\end{figure}\n\n\n\\paragraph{Influence of the number of samples.}\nKeeping the same model and example to explain as above, we looked into non-default number of samples $n$. \nWe present in Figure~\\ref{fig:tree-nsample} the results of these experiments. \nWe took a very small $n$ in the left panel ($n=50$ is two orders of magnitude smaller than the default $n=5000$) and a larger $n$ in the right panel. \nAs expected, when $n$ is larger, the concentration around our theoretical predictions is even better. \nTo the opposite, for small $n$, we see that the explanations vary wildly. \nThis is materialized by much wider whisker boxes. \nNevertheless, to our surprise, it seems that our theoretical predictions still contain some relevant information in that case. \n\n\\paragraph{Influence of depth.}\nFinally, we looked into more complex decision trees. 
\nThe decision rule used in Figure~\\ref{fig:tree-complex} is given by \n\\[\n\\indic{\\text{``food''}} + (1-\\indic{\\text{``food''}})\\indic{\\text{``about''}}\\indic{\\text{``Everything''}} +\\indic{\\text{``bad''}}+ \\indic{\\text{``bad''}}\\indic{\\text{``character''}}\n\\, .\n\\]\nWe see that increasing the depth of the tree is not a problem from a theoretical point of view. \nIt is interesting to see that words used in several nodes for the decision receive more weight (\\emph{e.g.}, ``bad'' in this example). \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.25]{decision_tree_complexe.pdf} \n\\caption{\\label{fig:tree-complex}Theory meets practice for a more complex decision tree ($\\nu=0.25,n_\\text{exp}=100,n=5000,d=29$). Here we report all coefficients. The theory still holds for more complex trees.}\n\\end{figure}\n\n\n\\subsection{Linear models}\n\\label{sec:add-exp-linear}\n\n\nLet us conclude this section with additional experiments for linear models. \nAs in the paper, we consider an arbitrary linear model \n\\[\nf(\\normtfidf{x}) = \\sum_{j=1}^d \\lambda_j \\normtfidf{x}_j\n\\, .\n\\]\nIn practice, the coefficients $\\lambda_j$ are drawn i.i.d. according to a Gaussian distribution. \n\n\\paragraph{Influence of the bandwidth.}\nAs in the previous section, we start by investigating the role of the bandwidth in the accuracy of our theoretical predictions. \nWe see in the right panel of Figure~\\ref{fig:linear-bandwidth} that taking a larger bandwidth does not change much neither the explanations nor the fit between our theoretical predictions and the empirical results. \nThis is expected, since our approximation (Eq.~\\eqref{eq:simplified-betainf-linear}) is based on the large bandwidth approximation. \nHowever, the left panel of Figure~\\ref{fig:linear-bandwidth} shows how this approximation becomes dubious when the bandwidth is small. \nIt is interesting to note that in that case, the theory seems to always \\emph{overestimate} the empirical results, in absolute value. \nThe large bandwidth approximation is definitely a culprit here, but it could also be the regularization coming into play. \nIndeed, the discussion at the end of Section~2.4 in the paper that lead us to ignore the regularization is no longer valid for a small $\\nu$. \nIn that case, the $\\pi_i$s can be quite small and the first term in Eq.~(5) of the paper is of order $\\exps{-1\/(2\\nu^2)}n$ instead of $n$. \n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.25]{linear_nu_005.pdf} \n\\includegraphics[scale=0.25]{linear_nu_035.pdf} \n\\caption{\\label{fig:linear-bandwidth}Influence of the bandwidth on the explanation for a linear model on a Yelp review ($n_\\text{exp}=100,n=5000, d=29 $). \\emph{Left panel:} $\\nu=0.05$, \\emph{right panel:} $\\nu=0.35$. The approximate theoretical values are less accurate for smaller bandwidths.}\n\\end{figure}\n\n\\paragraph{Influence of the number of samples.}\nNow let us look at the influence of the number of perturbed samples. \nAs in the previous section, we look into very small values of~$n$, \\emph{e.g.}, $n=50$. \nWe see in the left panel of Figure~\\ref{fig:linear-n} that, as expected, the variability of the explanations increases drastically. \nThe theoretical predictions seem to overestimate the empirical results in absolute value, which could again be due to the regularization beginning to play a role for small $n$, since the discussion in Section~2.4 of the paper is only valid for large~$n$. 
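\n\nFor reference, the theoretical values reported as red crosses for the linear model are obtained from the approximation in Eq.~\\eqref{eq:simplified-betainf-linear}. The short Python sketch below illustrates this computation, together with the refined variant mentioned in the remark of Section~\\ref{sec:gamma-computation-linear} (plugging Eq.~\\eqref{eq:approx-expec-hs} into Eq.~\\eqref{eq:main-approx-expec} and using Proposition~\\ref{prop:beta-computation-linear} term by term); the inputs are placeholders, whereas in the experiments the normalized TF-IDF of $\\xi$ and the coefficients $\\lambda_j$ come from the pipeline described above.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nd = 29                                 # size of the local dictionary\ntfidf = rng.random(d)                  # placeholder TF-IDF of xi\nxi = tfidf \/ np.linalg.norm(tfidf)     # normalized TF-IDF of xi\nlam = rng.normal(size=d)               # linear-model coefficients lambda_j\nomega = xi ** 2                        # omega_j = (m_j v_j)^2 \/ ||tfidf(xi)||^2\n\n# Crude approximation (Eq. simplified-betainf-linear):\nbeta_crude = 1.36 * lam * xi\n\n# Refined approximation: E_j and E_{j,k} from Eq. approx-expec-hs,\n# then Proposition beta-computation-linear applied coordinate-wise.\nE_j = 1.0 \/ np.sqrt(1.0 - (1.0 - omega) \/ 3.0)\nE_jk = 1.0 \/ np.sqrt(1.0 - (1.0 - omega[:, None] - omega[None, :]) \/ 4.0)\nsum_E_jk = E_jk.sum(axis=1) - np.diag(E_jk)   # sum over k != j\nbeta_refined = (3.0 * E_j - 2.0 \/ d * sum_E_jk) * lam * xi\n\nprint(np.round(beta_crude[:5], 3))\nprint(np.round(beta_refined[:5], 3))\n\\end{verbatim}\n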
\n\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.25]{linear_n_50.pdf} \n\\includegraphics[scale=0.25]{linear_n_8000.pdf} \n\\caption{\\label{fig:linear-n}Influence of the number of perturbed samples on the explanation for a linear model on a Yelp review ($ \\nu=0.25,n_\\text{exp}=100, d=29 $). \\emph{Left panel:} $n=50$, \\emph{right panel:} $n=8000$. The empirical explanations are more spread out for small values of $n$.}\n\\end{figure}\n\n\\paragraph{Influence of $d$.}\nTo conclude this section, let us note that $d$ does not seem to be a limiting factor in our analysis. \nWhile Theorem~\\ref{th:betahat-concentration} hints that the concentration phenomenon may worsen for large $d$, as noted before in Remark~\\ref{remark:influence-of-d}, we have reason to suspect that it is not the case. \nAll experiments presented on this section so far consider an example whose local dictionary has size $d=29$. \nIn Figure~\\ref{fig:linear-large-d} we present an experiment on an example that has a local dictionary of size $d=52$.\nWe observed no visible change in the accuracy of our predictions. \n\n\\begin{figure}\n\n\\centering\n\\includegraphics[scale=0.25]{linear_large_d.pdf} \n\\caption{\\label{fig:linear-large-d}Theory meets practice for an example with a larger vocabulary ($\\nu=0.25,n_\\text{exp}=100,n=5000,d=537$). Here we report only $50$ interpretable coefficients. Our theoretical predictions seem to hold for larger local dictionaries.}\n\\end{figure}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\label{sec:intro} \n\nDecision-making in medicine requires precise knowledge of individualized health outcomes over time after applying different treatments \\cite{huang2012analysis,hill2013assessing}. This then informs the choice of treatment plans and thus ensures effective care personalized to individualized patients. Traditionally, the gold standard for estimating the effects of treatments are randomized controlled trials~(RCTs). However, RCTs are costly, often impractical, or even unethical. To address this, there is a growing interest in estimating health outcomes over time after treatments from observational data, such as, \\eg, electronic health records.\n\nNumerous methods have been proposed for estimating (counterfactual) outcomes after a treatment from observational data in the static setting \\cite{van2006targeted,chipman2010bart,johansson2016learning,curth2021nonparametric}. Different from that, we focus on longitudinal settings, that is, \\emph{over time}. In fact, longitudinal data are nowadays paramount in medical practice. For example, almost all EHRs nowadays store sequences of medical events over time \\cite{allam2021analyzing}. However, estimating counterfactual outcomes over time is challenging. One reason is that counterfactual outcomes are typically never observed. On top of that, directly estimating counterfactual outcomes with traditional machine learning methods in the presence of (time-varying) confounding has a larger generalization error of estimation \\cite{alaa2018limits}, or is even biased (in case of multiple-step-ahead prediction) \\cite{robins2009estimation}. Instead, tailored methods are needed. \n\nOnly recently, there is a growing interest in methods for estimating counterfactual outcomes over time \\cite{robins2009estimation}. Here, state-of-the-art methods make nowadays use of machine learning. 
Prominent examples are: recurrent marginal structural networks~(RMSNs) \\cite{lim2018forecasting}, counterfactual recurrent network~(CRN) \\cite{bica2020estimating}, and G-Net \\cite{li2021g}. However, these methods build upon simple long short-term memory~(LSTM) networks \\cite{hochreiter1997long}, because of which their ability to model complex, long-range dependencies in observational data is limited. As a remedy, we develop a \\emph{Causal Transformer}\\xspace~(CT\\xspace) for estimating counterfactual outcomes over time. It is carefully designed to capture complex, long-range dependencies in medical data that are nowadays common in EHRs. \n\nIn this paper, we aim at estimating counterfactual outcomes over time, that is, for one- and multi-step-ahead predictions. For this, we develop a novel \\emph{Causal Transformer}\\xspace~(CT\\xspace): it overcomes limitations of existing methods by leveraging a tailored transformer-based architecture to capture complex, long-range dependencies in the observational data. Specifically, we combine three separate transformer subnetworks for processing time-varying covariates, past treatments, and past outcomes, respectively, into a joint network with in-between cross-attentions. Here, each transformer subnetwork is further extended by (i)~masked multi-head self-attention, (ii)~shared learnable relative positional encoding, and (iii)~attentional dropout. \n\nTo train CT\\xspace, we further develop a custom end-to-end training procedure and, to this end, propose a novel counterfactual domain confusion loss. This allows us to solve an adversarial balancing objective in which we balance representations to be (a)~predictive of outcomes and (b)~non-predictive of the current treatment assignment. The latter is crucial to address confounding bias and thus reduces the generalization error of counterfactual prediction. We demonstrate the effectiveness of our CT\\xspace over state-of-the-art methods using an extensive series of experiments with synthetic and real-world data. \n\nOverall, our \\textbf{main contributions} are as follows:\\footnote{Code is available online: \\url{https:\/\/anonymous.4open.science\/r\/AnonymousCausalTransformer-9FC5}}\n\\vspace{-0.2cm}\n\\begin{enumerate}[noitemsep]\n\\item\nWe propose a new end-to-end model for estimating counterfactual outcomes over time: the \\emph{Causal Transformer}\\xspace~(CT\\xspace). To the best of our knowledge, this is the first transformer tailored to causal inference. \n\\item\nWe develop a custom training procedure for our CT\\xspace based on a novel counterfactual domain confusion loss. \n\\item\nWe use synthetic and real-world data to demonstrate that our CT\\xspace achieves state-of-the-art performance. We further achieve this both for one- and multi-step-ahead predictions. \n\\end{enumerate}\n\\vspace{-0.3cm}\n\n\\begin{figure*}[tbp]\n \\begin{center}\n \\centerline{\\includegraphics[width=\\textwidth]{figures\/multi-input-causal-transformer}}\n \\caption{Overview of CT\\xspace architecture. We distinguish two timelines: time steps $1, \\ldots, t$ refer to observational data (patient trajectories) and thus the input; time steps $t+1, \\ldots, t+\\tau$ correspond to the projection horizon and thus the output. Three separate transformers are used in parallel for encoding observational data as input: treatments $\\mathbf{A}_t$ (blue), outcomes $\\mathbf{Y}_{t}$ (green), and time-varying covariates $\\mathbf{X}_t$ (red). These are fused via $k$ stacked multi-input blocks. 
Additional static covariates $\\mathbf{V}$ (gray) are fed into all multi-input blocks. Each multi-input block further makes use of cross-attentions. Afterwards, the three respective representations for treatments, outcomes, and time-varying covariates are averaged, giving the (balanced) representation $\\mathbf{\\Phi}_t$ (purple). On top of that are two additional networks $G_Y$ (outcome prediction network) and $G_A$ (treatment classifier network), which we later use for balancing in our counterfactual domain confusion loss. Residual connections with layer normalizations are omitted for clarity.\n }\n \\label{fig:multi-input-transformer}\n \\end{center}\n \\vskip -0.2in\n\\end{figure*}\n\n\\section{Related Work} \n\\label{sec:related-work}\n\n\\paragraph{Estimating counterfactual outcomes in the static setting.} \n\nExtensive literature has focused on estimating counterfactual outcomes (or, analogously, individual treatment effects~(ITE)) in static settings \\cite{johansson2016learning,alaa2018bayesian,wager2018estimation,yoon2018ganite,curth2021nonparametric}. Several works have adapted deep learning for that purpose \\cite{johansson2016learning,yoon2018ganite}. In the static setting, the input is given by cross-sectional data, and, as such, there are \\emph{no} time-varying covariates, treatments, or outcomes. However, counterfactual outcome estimation in static settings is different from our work, in which we are interested in settings over time. \n\n\\paragraph{Estimating counterfactual outcomes over time.} Methods for estimating time-varying outcomes were originally introduced in epidemiology and make widespread use of simple linear models. Here, the aim is to estimate average (non-individual) effects of time-varying treatments. Examples of such methods include G-computation, marginal structural models (MSMs), and structural nested models \\cite{robins1986new,robins2000marginal,hernan2001marginal,robins2009estimation}. To address the limited expressiveness of linear models, several Bayesian non-parametric methods were proposed \\cite{Xu16,schulam2017reliable,soleimani2017treatment}. However, these make strong assumptions regarding the data generation mechanism, and are not designed for multi-dimensional outcomes or static covariates. Other methods build upon recurrent neural networks \\cite{qian2021synctwin,berrevoets2021disentangled} but these are restricted to single-time treatments or make stronger assumptions for identifiability, which do not hold for our setting (see Appendix~\\ref{app:methods-table}). \n\nThere are several methods that build upon the potential outcomes framework \\cite{rubin1978bayesian,robins2009estimation}, and, thus, ensure identifiability by making the same assumptions as we do (see Sec.~\\ref{sec:problem-formulation}). Here, state-of-the-art methods are recurrent marginal structural networks~(RMSNs) \\cite{lim2018forecasting}, counterfactual recurrent network~(CRN) \\cite{bica2020estimating}, and G-Net \\cite{li2021g}. These methods address bias due to time-varying confounding in different ways. RMSNs combine two propensity networks and use the predicted inverse probability of treatment weighting~(IPTW) scores for training the prediction networks. CRN uses an adversarial objective to produce the sequence of balanced representations, which are simultaneously predictive of the outcome but non-predictive of the current treatment assignment. 
G-Net aims to predict both outcomes and time-varying covariates, and then performs G-computation for multiple-step-ahead prediction. All of three aforementioned methods are built on top of one\/two-layer LSTM encoder-decoder architectures. Because of that, they are limited by the extent with which they can capture long-range, complex dependencies between time-varying confounders (\\ie, time-varying covariates, previous treatments, and previous outcomes). However, such complex data are nowadays widespread in medical practice (\\eg, electronic health records) \\cite{allam2021analyzing}, which may impede the performance of the previous methods for real-world medical data. As a remedy, we develop a \\emph{deep} transformer network for counterfactual outcomes estimation over time. \n\n\\paragraph{Transformers.} Transformers refer to deep neural networks for sequential data that typically adopt a custom self-attention mechanism \\cite{vaswani2017attention}. This makes transformers both flexible and powerful in modeling long-range associative dependencies for sequence-to-sequence tasks. Prominent examples come from natural language processing (e.g., BERT \\cite{devlin2019bert}, RoBERTa \\cite{liu2019roberta}, and GPT-3 \\cite{brown2020language}). Other examples include, \\eg, time-series forecasting \\cite{tang2021probabilistic,zhou2021informer}. However, to the best of our knowledge, no paper has developed transformers specifically for causal inference. This presents our novelty. \n\n\\section{Problem Formulation} \n\\label{sec:problem-formulation}\nWe build upon the standard setting for estimating counterfactual outcomes over time as in \\cite{robins2009estimation,lim2018forecasting,bica2020estimating,li2021g}. Let $i$ refer to some patient and with health trajectories that span time steps $t = 1, \\dots, T^{(i)}$. For each time step $t$ and each patient $i$, we have the following: $d_x$ time-varying covariates $\\mathbf{X}_{t}^{(i)} \\in \\mathbb{R}^{d_x}$; $d_a$ categorical treatments $\\mathbf{A}_{t}^{(i)} \\in \\{a_1, \\dots, a_{d_a}\\}$; and $d_y$ outcomes $\\mathbf{Y}_{t}^{(i)} \\in \\mathbb{R}^{d_y}$. For example, critical care for COVID-19 would involve blood pressure and heart rate as time-varying covariates, ventilation as treatment, and respiratory frequency as outcome. Treatments are modeled as categorical variables as this relates to the question of whether to apply a treatment or not, and is thus consistent with prior works \\cite{lim2018forecasting,bica2020estimating,li2021g}. Further, we record static covariates describing a patient $\\mathbf{V}^{(i)}$ (\\eg, gender, age, or other risk factors). For notation, we omit patient index $(i)$ unless needed. \n\nFor learning, we have access to i.i.d. observational data $\\mathcal{D} = \\big\\{ \\{\\mathbf{x}_{t}^{(i)}, \\mathbf{a}_{t}^{(i)}, \\mathbf{y}_{t}^{(i)}\\}_{t=1}^{T^{(i)}} \\cup \\mathbf{v}^{(i)} \\big\\}_{i=1}^N$. In clinical settings, such data are nowadays widely available in form of EHRs \\cite{allam2021analyzing}. 
Here, we summarize the patient trajectory by $\\bar{\\mathbf{H}}_{t} = \\{ \\bar{\\mathbf{X}}_{t}, \\bar{\\mathbf{A}}_{t-1}, \\bar{\\mathbf{Y}}_{t}, \\mathbf{V} \\}$, where $\\bar{\\mathbf{X}}_{t} = (\\mathbf{X}_1, \\dots, \\mathbf{X}_t)$, $\\bar{\\mathbf{Y}}_{t} = (\\mathbf{Y}_1, \\dots, \\mathbf{Y}_t)$, and $\\bar{\\mathbf{A}}_{t-1} = (\\mathbf{A}_1, \\dots, \\mathbf{A}_{t-1})$.\n\nWe build upon the potential outcomes framework \\cite{neyman1923application,rubin1978bayesian} and its extension to time-varying treatments and outcomes \\cite{robins2009estimation}. Let $\\tau \\geq 1$ denote projection horizon for a $\\tau$-step-ahead prediction. Further,\nlet $\\bar{\\mathbf{a}} (t, t+\\tau-1) = (\\mathbf{a}_t, \\ldots, \\mathbf{a}_{t + \\tau - 1})$ \ndenote a given (non-random) treatment intervention. Then, we are interested in the potential outcomes, $\\mathbf{Y}_{t + \\tau}[\\bar{\\mathbf{a}} (t, t+\\tau-1)]$, under the treatment intervention. However, the potential outcomes for a specific treatment intervention are typically never observed for a patient but must be estimated. Formally, the potential counterfactual outcomes over time are identifiable from factual observational data $\\mathcal{D}$ under three standard assumptions: (1)~consistency, (2)~sequential ignorability, and (3)~sequential overlap (see Appendix~\\ref{app:assumptions} for details). \n\nOur task is thus to estimate future counterfactual outcomes $\\mathbf{Y}_{t + \\tau}$, after applying a treatment intervention $\\bar{\\mathbf{a}} (t, t+\\tau-1)$ for a given patient history $\\bar{\\mathbf{H}}_{t}$. Formally, we aim to estimate:\n\\begin{equation}\n \\mathbb{E} \\big( \\mathbf{Y}_{t + \\tau}[\\bar{\\mathbf{a}} (t, t+\\tau-1)] \\;\\mid\\; \\bar{\\mathbf{H}}_{t} \\big) .\n\\end{equation} \nTo do so, we learn a function $g(\\tau, \\bar{\\mathbf{a}} (t, t+\\tau-1), \\bar{\\mathbf{H}}_{t})$. Simply estimating $g(\\cdot)$ with traditional machine learning is biased \\cite{robins2009estimation}. For example, one reason is that treatment interventions not only influence outcomes but also future covariates. To address this, we develop a tailored model for estimation.\n\n\\section{Causal Transformer}\n\n\\paragraph{Input.} \n\nOur \\emph{Causal Transformer}\\xspace~(CT\\xspace) is a single multi-input architecture, which combines three separate transformer subnetworks. Each processes a different sequence as input: (i)~past time-varying covariates $\\bar{\\mathbf{X}}_{t}$; (ii)~past outcomes $\\bar{\\mathbf{Y}}_{t}$; and (iii)~past treatments before intervention $\\bar{\\mathbf{A}}_{t-1}$. Since we aim at estimating the counterfactual outcome after treatment intervention, we further input the future treatment assignment that a medical practitioners wants to intervene on. Thus, we concatenate two treatment sequences into one $\\bar{\\mathbf{A}}_{t-1} \\cup \\bar{\\mathbf{a}} (t, t+\\tau-1)$. Additionally, (iv)~the vector with static covariates $\\mathbf{V}$ is fed into all subnetworks.\n\n\\subsection{Model architecture}\n\nCT\\xspace learns a sequence of treatment-invariant (balanced) \\emph{representations} $\\bar{\\mathbf{\\Phi}}_{t+\\tau-1} = (\\mathbf{\\Phi}_1, \\dots, \\mathbf{\\Phi}_{t+\\tau-1})$. To do so, we stack $k$ identical \\emph{transformer blocks}. The first transformer block receives the different input sequences. The $k$-th transformer block outputs a sequence of representations $\\mathbf{\\Phi}_t$. The architecture is shown in Fig.~\\ref{fig:multi-input-transformer}. 
\n\n\\paragraph{Transformer blocks.} \n\nLet $i = 1, \\ldots, k$ index the different transformer blocks. Each transformer block receives three parallel sequences of hidden states as input (for each of the input sequences). For time step $t$, we denote them by $\\mathbf{A}^i_t$, $\\mathbf{Y}^i_t$, and $\\mathbf{X}^i_t$, respectively. We denote the size of the hidden states by $d_h$. Further, each transformer block receives a representation vector of static covariates as additional input. \n\nAs input to the first transformer block, we use linearly-transformed time series (indexed by $i=0$):\n\\begin{align}\n \\begin{split}\n & \\mathbf{A}_t^0 = \\operatorname{Linear}(\\mathbf{A}_{t}), \\quad \\,\\, \\mathbf{X}_t^0 = \\operatorname{Linear}(\\mathbf{X}_{t}), \\\\\n & \\mathbf{Y}_t^0 = \\operatorname{Linear}(\\mathbf{Y}_{t}), \\quad \\,\\, \\tilde{\\mathbf{V}} = \\operatorname{Linear}(\\mathbf{V}),\n \\end{split}\n\\end{align}\nwhere the parameters of the linear layers are shared for all time steps. All blocks $i \\ge 2$ use the output sequence of the previous block $i-1$ as input.\n\nFor notation, we denote sequences of hidden states after block $i$ by three tensors $\\mathrm{A}^i, \\mathrm{X}^i$, and $\\mathrm{Y}^i$, \\ie, \n\\begin{align}\n \\begin{split}\n & \\mathrm{A}^i = \\big(\\mathbf{A}_1^i, \\dots, \\mathbf{A}_{t + \\tau - 2}^i\\big)^\\top , \\,\n \\mathrm{X}^i = \\big(\\mathbf{X}_1^i, \\dots, \\mathbf{X}_{t}^i\\big)^\\top , \\\\\n & \\mathrm{Y}^i = \\big(\\mathbf{Y}_1^i, \\dots, \\mathbf{Y}_{t + \\tau - 1}^i\\big)^\\top\n \\end{split}\n\\end{align}\n\nFollowing \\cite{dong2021attention,lu2021pretrained}, each transformer block combines (i)~a multi-head self-\/cross-attention, (ii)~a feed-forward layer, and (iii)~layer normalization. A detailed formulation is in Appendix~\\ref{app:CT-block}. \n\n\\underline{(i) Multi-head self-\/cross-attention} uses a scaled dot-product attention with several parallel attention heads. Each attention head requires a 3-tuple of keys, queries, and values, \\ie, $K, Q, V \\in \\mathbb{R}^{T \\times d_{qkv}}$, respectively. These are obtained from a sequence of hidden states $\\mathrm{H} = \\big(\\mathbf{h}_1, \\dots, \\mathbf{h}_t\\big)^\\top \\in \\mathbb{R}^{T \\times d_h}$ ($\\mathrm{H}$ is one of $\\mathrm{A}$, $\\mathrm{X}$ or $\\mathrm{Y}$, depending on the subnetwork). Formally, we compute\n\\begin{align}\n \\label{eq:attention}\n \\operatorname{head}^{(i)} & = \\operatorname{Attention}(Q^{(i)}, K^{(i)}, V^{(i)}) \\\\\n & = \\operatorname{softmax}\\Big(\\frac{Q^{(i)}K^{(i)}{}^\\top}{\\sqrt{d_{qkv}}}\\Big) V^{(i)} , \\label{eq:attention2} \\\\\n Q^{(i)} &= Q^{(i)}(\\mathrm{H}) = \\mathrm{H} \\, W_Q^{(i)} + \\mathbf{1} b_Q^{(i)}{}^\\top , \\\\\n K^{(i)} &= K^{(i)}(\\mathrm{H}) = \\mathrm{H} \\, W_K^{(i)} + \\mathbf{1} b^{(i)}_K{}^\\top , \\\\ \n V^{(i)} &= V^{(i)}(\\mathrm{H}) = \\mathrm{H} \\, W_V^{(i)} + \\mathbf{1} b_V^{(i)}{}^\\top ,\n\\end{align}\nwhere $W_Q^{(i)}, W_K^{(i)}, W_V^{(i)} \\in \\mathbb{R}^{d_h \\times d_{qkv}}$ and $b_Q^{(i)}$, $b_K^{(i)}$, $b_V^{(i)} \\in \\mathbb{R}^{d_{qkv}}$ are parameters of a single attention head $i$, where $\\operatorname{softmax}(\\cdot)$ operates separately on each row, and where $\\mathbf{1} \\in \\mathbb{R}^{T}$ is a vector of ones. 
We set the dimensionality of keys and queries to $d_{qkv} = d_{h} \/ n_h$, where $n_h$ is the number of heads.\n\nThe output of a multi-head attention is a concatenation of the different heads, \\ie, \n\\begin{equation}\n \\operatorname{MHA}(Q, K, V) = \\operatorname{Concat}(\\operatorname{head}^{(1)}, \\dots, \\operatorname{head}^{(n_h)}) .\n\\end{equation}\nHere, we simplified the original multi-head attention in \\cite{vaswani2017attention} by omitting the final output projection layer after concatenation to reduce risk of overfitting.\n\n\nIn CT\\xspace, self-attention uses the sequence of hidden states from the same transformer subnetwork to infer keys, queries, and values, while cross-attention uses the sequence of hidden states of the other two transformer subnetworks as keys and values. We use multiple cross-attentions to exchange the information between parallel hidden states.\\footnote{Different variants of combining multiple-input information with self- and cross-attentions were already studied in the context of multi-source translation, e.g., by \\cite{libovicky2018input}. Our implementation is closest to parallel attention combination.} These are placed on top of the self-attention layers (see subdiagram in Fig.~\\ref{fig:multi-input-transformer}). We add the representation vector of static covariates, $\\tilde{\\mathbf{V}}$, when pooling different cross-attention outputs.\n\nWe mask hidden states for self- and cross-attentions by setting the attention logits in Eq.~\\eqref{eq:attention2} to $-\\infty$. This ensures that information flows only from the current input to future hidden states (and not the other way around). \n\n\\underline{(ii) Feed-forward layer} ($\\operatorname{FF}$) with ReLU activation is applied time-step-wise to the sequence of hidden states, \\ie,\n\\begin{equation*}\n \\operatorname{FF}(\\mathbf{h}_t) = \\operatorname{dropout}\\Big(W_{2} \\max\\big\\{ 0 , \n \\operatorname{dropout}(W_{1} \\mathbf{h}_t + b_{1}) \\big\\} + b_2\\Big) .\n\\end{equation*}\n\n\\underline{(iii) Layer normalization} ($\\operatorname{LN}$) \\cite{lei2016layer} and residual connections are added after each self- and cross-attention. We compute the layer normalization via\n\\begin{equation}\n \\operatorname{LN}(\\mathbf{h}_t) = \\frac{\\gamma}{\\sigma} \\odot (\\mathbf{h}_t - \\mu) + \\beta ,\n\\end{equation}\n\\begin{equation}\n \\mu = \\frac{1}{d_h} \\sum_{i=1}^{d_h} (\\mathbf{h}_t)_i, \\quad \\sigma = \\sqrt{\\frac{1}{d_h} \\sum_{i=1}^{d_h} \\big((\\mathbf{h}_t)_i - \\mu \\big)^2} ,\n\\end{equation}\nwhere $\\gamma, \\beta \\in \\mathbb{R}^{d_h}$ are scale and shift parameters and where $\\odot$ is an element-wise product. \n\n\\textbf{Balanced representations.} The (balanced) representations are then constructed via average pooling over three (or two) parallel hidden states of the $k$-th transformer block. 
Thereby, we use a fully-connected layer and an exponential linear unit (ELU) non-linearity; \\ie,\n\\begin{align}\n\\nonumber\n & \\mathbf{\\tilde{\\Phi}}_t = \n \\begin{cases}\n \\frac{1}{3}(\\mathbf{A}_{i-1}^{k} + \\mathbf{X}_i^{k} + \\mathbf{Y}_i^{k}), & i \\in \\{1, \\dots, t\\} , \\\\\n \\frac{1}{2}(\\mathbf{A}_{i-1}^{k} + \\mathbf{Y}_i^{k}), & i \\in \\{t+1, \\dots, t + \\tau - 1\\} ,\n \\end{cases} , \\\\\n & \\mathbf{\\Phi}_t = \\operatorname{ELU}(W_{O}\\operatorname{dropout}(\\mathbf{\\tilde{\\Phi}}_t) + b_{O}) \\label{eq:output-repr}\n\\end{align}\nwhere $W_{O} \\in \\mathbb{R}^{d_h \\times d_r}, b_O \\in \\mathbb{R}^{d_r}$ are layer parameters and $d_r$ is the dimensionality of the balanced representation. \n\n\\subsection{Positional encoding} \n\nIn order to preserve information about the order of hidden states, we make use of position encoding~(PE). This is especially relevant for clinical practice as it allows us to distinguish sequences such as, \\eg, (treatment~A $\\mapsto$ side-effect~S $\\mapsto$ treatment~B) from (treatment~A $\\mapsto$ treatment~B $\\mapsto$ side-effect~S). \n\nWe model information about relative positions in the input at time steps $j$ and $i$ with $0 \\le j \\le i \\le t$ by a set of vectors $a^V_{ij}, a^K_{ij} \\in \\mathbb{R}^{d_{qkv}}$ \\cite{shaw2018self}. Specifically, they are shaped in the form of Toeplitz matrix\n\\begin{align}\n & a^V_{ij} = w^V_{\\operatorname{clip}(j-i, l_{\\text{max}})}, \\qquad a^K_{ij} = w^K_{\\operatorname{clip}(j-i, l_{\\text{max}})}, \\\\\n & \\operatorname{clip}(x, l_{\\text{max}}) = \\max\\{ -l_{\\text{max}}, \\min\\{ l_{\\text{max}}, x \\}\\}\n\\end{align}\nwith learnable weights $w^K_k, w^V_k \\in \\mathbb{R}^{d_{qkv}}$, for $k \\in \\{-l_{\\text{max}}, \\dots, 0\\}$, and where $l_{\\text{max}}$ is the maximum distinguishable distance in the relative PE. The above formalization ensures that we obtain \\emph{relative} encodings, that is, our CT\\xspace considers the distance between past or current position $j$ and current position $i$, but not the actual location. Furthermore, the current position $i$ attends only to past information or itself, and, thus, we never use $a^V_{ij}$ and $a^K_{ij}$ where $i < j$. As a result, there are only $(l_{\\text{max}} + 1) \\times d_{qkv}$ parameters to estimate.\n\nWe then use the relative PE to modify the self-attention operation (Eq.~\\eqref{eq:attention}). Formally, we compute the attention scores via (indices of heads are dropped for clarity)\n\\begin{align}\n & (\\operatorname{Attention}(Q, K, V))_i = \\sum_{j=1}^t \\alpha_{ij}(V_j + a_{ij}^V) , \\\\\n & \\alpha_{ij} = \\operatorname{softmax}_j \\left(\\frac{Q_i^\\top (K_j + a_{ij}^K)}{\\sqrt{d_{qkv}}} \\right) , \\label{eq:attn-relative-enc}\n\\end{align}\nwith attention scores $\\alpha_{ij}$ and where $K_j$, $V_j$, and $Q_i$ are columns of corresponding matrices and where $\\operatorname{softmax}_j$ operates wrt. to index $j$. Cross-attention with PE is defined in an analogous way. In our CT\\xspace, the attention scores are shared across all the heads and blocks, as well as the three different sub-networks. \n\nIn our CT\\xspace, we use relative positional encodings \\cite{shaw2018self} that are incorporated in every self- and cross-attention. This is different from the original transformer \\cite{vaswani2017attention}, which used absolute positional encodings with fixed weights for the initial hidden states of the first transformer block (see Appendix~\\ref{app:abs-pe} for details). 
However, relative PE is regarded as more robust and, further, suited for patient trajectories where the order of treatments and diagnoses is particularly informative \\cite{allam2021analyzing}, but not the absolute time step. Additionally, it allows for better generalization to unseen sequence lengths: for the ranges beyond the maximal distinguishable distance $l_{\\text{max}}$, CT\\xspace stops to distinguish the precise relative location of states and considers everything as distant past information. In line with this, our experiments later also confirm relative PE to be superior over absolute PE. \n\n\\begin{figure*}[tbp]\n \n \\centering\n \\hfill\n \\subfigure[One-step-ahead prediction]{\\includegraphics[width=0.32\\textwidth]{figures\/tg-sim-one-step-ahead.pdf}}\\label{fig:results-tg-sim-one-step}\n \\hfill\n \\subfigure[$\\tau$-step-ahead prediction (single sliding treatment).]{\\includegraphics[width=0.32\\textwidth]{figures\/tg-sim-six-step-ahead-timing-of-treatment.pdf}}\\label{fig:results-tg-sim-six-step-timing}\n \\hfill\n \\subfigure[$\\tau$-step-ahead prediction (random trajectories)] {\\includegraphics[width=0.32\\textwidth]{figures\/tg-sim-six-step-ahead-random.pdf}}\\label{fig:results-tg-sim-six-step-rand}\n \\hfill\n \\caption{Results for fully-synthetic data based on tumor growth simulator (lower values are better). Shown is the mean performance averaged over five runs with different seeds. Here: $\\tau = 6$.}\n \\label{fig:results-tg-sim}\n \\vskip -0.2in\n\\end{figure*}\n\n\\begin{table*}[tbp]\n \\caption{Results for semi-synthetic data for $\\tau$-step-ahead prediction based on real-world medical data (MIMIC-III). Shown: RMSE as mean $\\pm$ standard deviation over five runs. Here: random trajectory setting. MSMs struggle for long prediction horizons with values $>$ 10.0 (due to linear modeling of IPTW scores).}\n \\label{tab:ss-sim-all}\n \n \\begin{center}\n \\scriptsize\n \\input{tables\/ss-sim-all}\n \\end{center}\n \\vskip -0.1in\n\\end{table*}\n\n\\subsection{Training of our \\emph{Causal Transformer}\\xspace} \n\\label{sub-sec:CT-training}\n\nIn our CT\\xspace, we aim at two simultaneous objectives to address confounding bias: (a)~we aim at learning representations that are predictive of the next outcome and (b)~are non-predictive of the current treatment assignment. This thus naturally yields an adversarial objective . For this purpose, we make use of balanced representations, which we train via a novel \\emph{counterfactual domain confusion loss}.\n\n\\paragraph{Adversarial balanced representations.} \n\nAs in \\cite{bica2020estimating}, we build \\emph{balanced} representations that allow us to achieve the adversarial objectives (a) and (b). For this, we put two fully-connected networks on top of the representation $\\mathbf{\\Phi}_t$, corresponding to the respective objectives: (a)~an outcome prediction network $G_Y$ and (b)~a treatment classifier network $G_A$. Both receive the representation $\\mathbf{\\Phi}_t$ as input; the outcome prediction network additionally receives the current treatment $\\mathbf{a}_t$ that we want to intervene on. We implement both as single hidden layer fully-connected networks with number of units $n_{\\text{FC}}$ and ELU activation. For notation, let $\\theta_{Y}$ and $\\theta_{A}$ denote the trainable parameters in $G_Y$ and $G_A$, respectively. Further, let $\\theta_{R}$ denote all trainable parameters in CT\\xspace for generating the representation $\\mathbf{\\Phi}_t$. 
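For illustration only, the two heads can be sketched as single-hidden-layer fully-connected networks with ELU activation, as described above; all sizes below are placeholders subject to the hyperparameter tuning discussed later, and the sketch is not meant as the actual implementation.
\begin{verbatim}
import numpy as np

def elu(x):
    return np.where(x > 0, x, np.exp(x) - 1.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class Head:
    """Single-hidden-layer fully-connected network with ELU activation."""
    def __init__(self, d_in, n_fc, d_out, rng):
        self.W1 = rng.normal(scale=0.1, size=(d_in, n_fc)); self.b1 = np.zeros(n_fc)
        self.W2 = rng.normal(scale=0.1, size=(n_fc, d_out)); self.b2 = np.zeros(d_out)
    def __call__(self, x):
        return elu(x @ self.W1 + self.b1) @ self.W2 + self.b2

rng = np.random.default_rng(2)
d_r, d_a, d_y, n_fc = 16, 3, 1, 32      # repr. size, no. of treatments, outcome dim, hidden units
G_Y = Head(d_r + d_a, n_fc, d_y, rng)   # outcome head: representation plus current treatment
G_A = Head(d_r, n_fc, d_a, rng)         # treatment classifier: representation only

phi_t = rng.normal(size=d_r)            # stand-in for the balanced representation Phi_t
a_t = np.eye(d_a)[0]                    # one-hot current treatment
y_hat = G_Y(np.concatenate([phi_t, a_t]))
p_treat = softmax(G_A(phi_t))           # predicted treatment probabilities
\end{verbatim}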
\n \n\\paragraph{Factual outcome loss.} For objective~(a), we fit the outcome prediction network $G_Y$, and thus $\\mathbf{\\Phi}_t$, by minimizing the factual loss of the next outcome. This can be done, \\eg, via the mean squared error (MSE). This yields\n\\begin{align}\n & \\mathcal{L}_{G_Y} (\\theta_Y, \\theta_R) = \\left\\Vert \\mathbf{Y}_{t+1} - G_Y\\big(\\mathbf{\\Phi}_t(\\theta_R), \\mathbf{a}_t; \\theta_Y \\big) \\right\\Vert^2 .\n\\end{align}\n\n\\paragraph{Counterfactual domain confusion loss.} For objective~(b), we want to fit the treatment classifier network $G_A$, and thus the representation $\\mathbf{\\Phi}_t$, in a way that it is non-predictive of the current treatment. To achieve this, we develop a novel domain confusion loss tailored for counterfactual inference. Our idea builds upon the domain confusion loss \\cite{tzeng2015simultaneous}, an adversarial objective, previously used for unsupervised domain adaptation, whereas we adapt it specifically for counterfactual inference. \n\nThen, we fit $G_A$ so that it can predict the current treatment, \\ie, via \n\\begin{equation}\n\\label{eq:loss-ga}\n\\hspace{-0.3cm}\n\\mathcal{L}_{G_A} (\\theta_A, \\theta_R) = - \\sum_{j=1}^{d_a} \\mathbbm{1}_{[\\mathbf{a}_t = a_j]} \\log G_A (\\mathbf{\\Phi}_t(\\theta_R); \\theta_A) , \n\\end{equation}\nwhere $\\mathbbm{1}_{[\\cdot]}$ is the indicator function. This thus minimizes a classification loss of the current treatment assignment given $\\mathbf{\\Phi}_t$. However, while $G_A$ can predict the current treatment, the actual representation $\\mathbf{\\Phi}_t$ should not, and should rather be non-predictive. For this, we propose to minimize the cross-entropy between a uniform distribution over the categorical treatment space and predictions of $G_A$ via\n\\begin{equation}\n\\label{eq:loss-conf}\n\\mathcal{L}_{\\text{conf}} (\\theta_A, \\theta_R) = - \\sum_{j=1}^{d_a} \\frac{1}{d_a} \\log G_A (\\mathbf{\\Phi}_t(\\theta_R); \\theta_A) ,\n\\end{equation}\nthus achieving domain confusion. \n\n\\paragraph{Overall adversarial objective.} \n\nUsing the above, CT\\xspace is trained via\n\\begin{align}\n\\hspace{-0.3cm} \n(\\hat{\\theta}_Y, \\hat{\\theta}_R) & = \\argmin_{\\theta_Y, \\theta_R} \\mathcal{L}_{G_Y} (\\theta_Y, \\theta_R) + \\alpha \\mathcal{L}_{\\text{conf}} (\\hat{\\theta}_A, \\theta_R) , \\label{eq:loss-yr}\\\\ \n \\hat{\\theta}_A & = \\argmin_{\\theta_A} \\alpha \\mathcal{L}_{G_A} (\\theta_A, \\hat{\\theta}_R) , \\label{eq:loss-a}\n\\end{align}\nwhere $\\alpha$ is a hyperparameter for domain confusion. Thereby, optimal values of $\\hat{\\theta}_Y$, $\\hat{\\theta}_R$ and $\\hat{\\theta}_A$ achieve an equilibrium between factual outcome prediction and domain confusion. In CT\\xspace, we implement this by performing iterative updates of the parameters of each transformer subnetwork (rather than optimizing globally). Details are in Appendix~\\ref{app:adv-training}. \n\nPrevious work \\cite{bica2020estimating} has addressed the above adversarial objective through gradient reversal \\cite{ganin2015unsupervised}. However, this has two shortcomings: (i)~If the parameter $\\lambda$ of gradient reversal becomes too large, the representation may be predictive of the opposite treatment \\cite{atan2018counterfactual}. (ii)~If the treatment classifier network learns too fast, gradients vanish and are not passed to representations, leading to poor fit \\cite{tzeng2017adversarial}. In contrast, we propose a novel counterfactual domain confusion loss. 
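A schematic numerical sketch of the three losses and of the alternating update scheme is given below (it reuses the toy heads sketched above; gradients and the actual optimization are omitted, and all numbers are stand-ins rather than part of the method).
\begin{verbatim}
import numpy as np

def factual_outcome_loss(y_next, y_pred):
    # L_{G_Y}: squared error between observed next outcome and prediction
    return np.sum((y_next - y_pred) ** 2)

def treatment_classifier_loss(p_treat, a_t_onehot):
    # L_{G_A}: cross-entropy of the observed current treatment (cf. Eq. (loss-ga))
    return -np.sum(a_t_onehot * np.log(p_treat + 1e-12))

def domain_confusion_loss(p_treat):
    # L_conf: cross-entropy between a uniform distribution over the d_a
    # treatments and the classifier output (cf. Eq. (loss-conf))
    return -np.sum(np.log(p_treat + 1e-12)) / p_treat.shape[0]

# One iteration of the alternating scheme (gradient steps omitted):
#   1. update (theta_Y, theta_R) to decrease  L_GY + alpha * L_conf,  theta_A frozen;
#   2. update  theta_A           to decrease  alpha * L_GA,           theta_R frozen.
alpha = 0.01
p_treat = np.array([0.7, 0.2, 0.1])      # stand-in for G_A(Phi_t)
a_t = np.array([1.0, 0.0, 0.0])          # observed current treatment (one-hot)
repr_objective = factual_outcome_loss(np.array([1.2]), np.array([1.0])) \
                 + alpha * domain_confusion_loss(p_treat)
classifier_objective = alpha * treatment_classifier_loss(p_treat, a_t)
\end{verbatim}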
As we see later, our loss is highly effective: it even improves CRN \\cite{bica2020estimating}, when replacing gradient reversal with our loss.\n\n\\paragraph{Stabilization.} \n\nWe further stabilize the above adversarial training by employing exponential moving average~(EMA) of model parameters during training \\cite{yaz2018unusual}. EMA helps to limit cycles of model parameters around the equilibrium with vanishing amplitude and thus accelerates overall convergence. We apply EMA to all trainable parameters (\\ie, $\\theta_Y$, $\\theta_R$, $\\theta_A$). Formally, we update parameters during training via \n\\begin{equation}\n\\theta^{(i)}_{\\text{EMA}} = \\beta \\, \\theta^{(i - 1)}_{\\text{EMA}} + (1 - \\beta) \\, \\theta^{(i)} ,\n\\end{equation}\nwhere the superscript $(i)$ refers to the different steps of the optimization algorithm, where $\\beta$ is an exponential smoothing parameter, and where we initialize $\\theta^{(0)}_{\\text{EMA}} = \\theta^{(0)}$. We provide pseudocode for an iterative gradient update in CT\\xspace via EMA in Appendix \\ref{app:adv-training}.\n\n\\paragraph{Attentional dropout.} To reduce the risk of overfitting between time steps, we implement attentional dropout via DropAttention \\cite{zehui2019dropattention}. During training, attention scores $\\alpha_{ij}$ in Eq.~\\eqref{eq:attn-relative-enc} are element-wise randomly set to zero with probability $p$ (\\ie, the dropout rate). However, we make a small simplification. We do not perform normalized rescaling \\cite{zehui2019dropattention} of attention scores but opt for traditional dropout rescaling \\cite{srivastava2014dropout}, as this resulted in more stable training for short-length sequences. \n\n\\paragraph{Mini-batch augmentation with masking.}\n\nFor training data $\\mathcal{D}$, we always have access to the full time-series, that is, including all time-varying covariates $\\mathbf{x}_{1}^{(i)}, \\dots, \\mathbf{x}_{T^{(i)}}^{(i)}$. However, upon deployment, these are no longer observable for $\\tau$-step-ahead predictions with $\\tau \\ge 2$. To reflect this during training, we perform data augmentation at the mini-batch level. For this, we duplicate the training samples: We uniformly sample the length $1 \\leq t_s \\leq T^{(i)}$ of the masking window, and then create a duplicate data sample where the last $t_s$ time-varying covariates $\\mathbf{x}_{t_s}, \\dots, \\mathbf{x}_{T^{(i)}}^{(i)}$ are masked by setting the corresponding attention logits of $\\mathrm{H} = \\mathrm{X}$ in Eq.~\\eqref{eq:attention} to $-\\infty$.\n\nMini-batch augmentation with masking allows us to train a single model for both one- and multiple-step-ahead prediction in an end-to-end fashion. This distinguishes our CT\\xspace from RMSNs and CRN, which are built on top of encoder-decoder architectures and trained in a two-stage procedure. Later, we experiment with an encoder-decoder version of CT\\xspace but find that its performance is inferior to that of our end-to-end model.\n\n\\subsection{Theoretical insights}\n\nThe following result provides a theoretical justification that our counterfactual domain confusion loss indeed leads to balanced representations, and, thus, removes the bias induced by time-varying confounders\\footnote{Importantly, our\nloss is different from gradient reversal (GR) in \\cite{ganin2015unsupervised, bica2020estimating}. 
It builds balanced representations by minimizing \\emph{reversed KL-divergence} between the treatment-conditional distribution of representation and mixture of all treatment-conditional distributions.}.\n\\begin{theorem}\\label{thrm:domain_conf_loss_short}\nWe fix $t \\in \\mathbb{N}$ and define $P$ as the distribution of $\\mathbf{\\bar{H}_t}$, $P_j$ as the distribution of $\\mathbf{\\bar{H}_t}$ given $\\mathbf{A}_t = a_j$, and $P^\\Phi_j$ as the distribution of $\\mathbf{\\Phi}_t = \\Phi(\\mathbf{\\bar{H}_t})$ given $\\mathbf{A}_t = a_j$ for all $j \\in \\{1, \\dots, d_a\\}$. Let $G^j_A$ denote the output of $G_A$ corresponding to treatment $a_j$. Then, there exists an optimal pair $(\\Phi^\\ast, G^\\ast_A)$ such that\n\\begin{align}\\\n \\Phi^\\ast &= \\argmax_{\\Phi} \\sum_{j=1}^{d_a} \\mathbb{E}_{\\mathbf{\\bar{h}}_t \\sim P}\\left[ \\log\\left({G^\\ast}^j_A(\\Phi(\\mathbf{\\bar{h}}_t) \\right)\\right] \\label{eq:phi-star}\\\\\n G^\\ast_A &= \\argmax_{G_A} \\sum_{j=1}^{d_a} \\mathbb{E}_{\\mathbf{\\bar{h}}_t \\sim P_j}\\left[\\log\\left({G}^j_A(\\Phi^\\ast(\\mathbf{\\bar{h}}_t) \\right) \\right] \\mathbb{P}(\\mathbf{A}_t = a_j) \\label{eq:ga-star}\\\\\n& \\text{subject to} \\sum_{i=1}^{d_a} {G}^i_A(\\Phi^\\ast(\\mathbf{\\bar{h}}_t)) = 1.\n\\end{align}\nFurthermore, $\\Phi^\\ast$ satisfies Eq.~(1) if and only if it induces balanced representations across treatments, i.e., $P^{\\Phi^\\ast}_1 = \\ldots = P^{\\Phi^\\ast}_{d_a}$.\n\\end{theorem}\n\\begin{proof}\nSketch: We make use of Prop. 1 in \\cite{bica2020estimating}, and derive an explicit expression for $G^\\ast_A$ for fixed $\\Phi$. Plugging this into our objective for $\\Phi^\\ast$ allows us to obtain an expression similar to Eq.~17 from \\cite{bica2020estimating}. However, there is a crucial difference: we have a \\emph{reversed} KL divergence between $P^{\\Phi^\\ast}_j$ and a mixture of $\\{P^{\\Phi^\\ast}_1, \\dots, P^{\\Phi^\\ast}_{d_a}\\}$. For details we refer to Appendix~\\ref{app:proof}.\n\\end{proof}\n\nIt could be easily shown, that objectives (\\ref{eq:loss-ga}) and (\\ref{eq:loss-conf}) are exactly finite sample versions of (\\ref{eq:ga-star}) and (\\ref{eq:phi-star}) from Theorem~\\ref{thrm:domain_conf_loss_short}, respectively.\n\n\\subsection{Implementation}\n\n\\paragraph{Training.} We implemented CT\\xspace in PyTorch Lightning. We trained CT\\xspace using Adam \\cite{kingma2014adam} with learning rate $\\eta$ and number of epochs $n_e$. The dropout rate $p$ was kept the same for both feed-forward layers and DropAttention (we call it sequential dropout rate). We employed teacher forcing technique \\cite{williams1989learning}. During evaluation of multiple-step-ahead prediction, we switch off teacher forcing and autoregressively feed model predictions. For the parameters $\\alpha$ and $\\beta$ of adversarial training, we choose values $\\beta = 0.99$ and $\\alpha = 0.01$ as in the original works \\cite{tzeng2015simultaneous,yaz2018unusual}, which also performed well in our experiments. We additionally perform an exponential rise of $\\alpha$ during training. \n\n\\paragraph{Hyperparameter tuning.} $p$, $\\eta$, and all other hyperparameters (number of blocks $k$, minibatch size, number of attention heads $n_h$, size of hidden units $d_h$, size of balanced representation $d_r$, size of fully-connected hidden units $n_{\\text{FC}}$) are subject to hyperparameter tuning. Details are in Appendix~\\ref{app:hparams}. \n\n\\begin{table}[tbp]\n \\caption{Results for experiments with real-world medical data (MIMIC-III). 
Shown: RMSE as mean $\\pm$ standard deviation over five runs.}\n \n \\label{tab:mimic-real-sim-all}\n \n \\begin{center}\n \\tiny\n \\input{tables\/mimic-real-sim-all}\n \\end{center}\n \\vskip -0.1in\n\\end{table}\n\n\\begin{table}[tbp]\n \\caption{Ablation study for proposed CT\\xspace (with counterfactual domain confusion loss, $\\alpha = 0.01$, $\\beta = 0.99$). Reported: normalized RMSE of CT\\xspace with relative changes.}\n \\label{tab:ablation-study}\n \n \\begin{center}\n \\scriptsize\n \\addtolength{\\tabcolsep}{-1.6pt} \n \\input{tables\/ablation-study}\n \\addtolength{\\tabcolsep}{1.6pt} \n \\end{center}\n \\vskip -0.1in\n\\end{table}\n\n\\begin{table*}[tp]\n \\caption{CRN with different training procedures. Results for fully-synthetic data based on tumor growth simulator (here: $\\gamma = 4$).}\n \\label{tab:ablation-study-crn}\n \n \\begin{center}\n \\scriptsize\n \n \\input{tables\/ablation-study-crn}\n \n \\end{center}\n \\vskip -0.1in\n\\end{table*}\n\n\\section{Experiments}\n\nTo demonstrate the effectiveness of our CT\\xspace, we make use of synthetic datasets. Thereby, we follow common practice in benchmarking for counterfactual inference \\cite{lim2018forecasting,bica2020estimating,li2021g}. For real datasets, the true counterfactual outcomes are typically unknown. By using \\mbox{(semi-)}synthetic datasets, we can compute the true counterfactuals and thus validate our CT\\xspace. \n\n\\paragraph{Baselines.} The chosen baselines are identical to those in previous, state-of-the-art literature for estimating counterfactual outcomes over time \\cite{lim2018forecasting,bica2020estimating,li2021g}. These are: \\textbf{MSMs}~\\cite{robins2000marginal,hernan2001marginal}, \\textbf{RMSNs}~\\cite{lim2018forecasting}, \\textbf{CRN}~\\cite{bica2020estimating}, and \\textbf{G-Net} \\cite{li2021g}. Details are in Appendix~\\ref{app:baselines}. For comparability, we use the same hyperparameter tuning for the baselines as for CT\\xspace (see Appendix~\\ref{app:hparams}). \n\n\\subsection{Experiments with fully-synthetic data} \\label{sec:tg-sim}\n\n\\paragraph{Data.} We build upon the pharmacokinetic-pharmacodynamic model of tumor growth \\cite{geng2017prediction}. It provides a state-of-the-art biomedical model to simulate the effects of lung cancer treatments over time. The same model was previously used for evaluating RMSNs \\cite{lim2018forecasting} and CRN \\cite{bica2020estimating}. For $\\tau$-step-ahead prediction, we distinguish two settings: (i)~single sliding treatment where trajectories involve only a single treatment as in \\cite{bica2020estimating}; and (ii)~random trajectories where one or more treatments are assigned. We simulate patient trajectories for different amounts of confounding $\\gamma$. Further details are in Appendix~\\ref{app:syn}. Here, and in all following experiments, we apply hyperparameter tuning (see Appendix~\\ref{app:hparams}).\n\n\\paragraph{Results.} Fig.~\\ref{fig:results-tg-sim} shows the results. We see a notable performance gain for our CT\\xspace over the state-of-the-art baselines, especially pronounced for larger confounding $\\gamma$ and larger $\\tau$. Overall, CT\\xspace is superior by a large margin. \n\nFig.~\\ref{fig:results-tg-sim} also shows a CT\\xspace variant in which we removed the counterfactual domain confusion loss by setting $\\alpha$ to zero, called CT\\xspace($\\alpha = 0$). For comparability, we keep the hyperparameters as in the original CT\\xspace. 
The results demonstrate the effectiveness of the proposed counterfactual domain confusion loss, especially for multi-step-ahead prediction. CT\\xspace also provides a significant runtime speedup in comparison to other neural network methods, mainly due to faster processing of sequential data with self- and cross-attentions, and single-stage end-to-end training (see exact runtime comparison in Appendix~\\ref{app:runtime}). We plotted t-SNE embeddings of the balanced representations (Appendix~\\ref{app:t-sne}) to exemplify how the balancing works.\n\n\\subsection{Experiments with semi-synthetic data}\n\n\\paragraph{Data.} We create a semi-synthetic dataset based on real-world medical data from intensive care units. This allows us to validate our CT\\xspace with high-dimensional, long-range patient trajectories. For this, we use the MIMIC-III dataset \\cite{johnson2016mimic}. Building upon the ideas of \\cite{schulam2017reliable}, we then generate patient trajectories with outcomes under endogenous and exogenous dependencies while considering treatment effects. Thereby, we can again control for the amount of confounding. Details are in Appendix~\\ref{app:ss-sim}. Importantly, we again have access to the ground-truth counterfactuals for evaluation. \n\n\\paragraph{Results.} Table~\\ref{tab:ss-sim-all} shows the results. Again, CT\\xspace has a consistent and large improvement across all projection horizons $\\tau$ (average improvement over baselines: 38.5\\%). By comparing our CT\\xspace against CT\\xspace($\\alpha = 0$), we see clear performance gains, demonstrating the benefit of our counterfactual domain confusion loss. Additionally, we separately fitted an encoder-decoder architecture, namely \\emph{Encoder-Decoder} \\emph{Causal Transformer}\\xspace (EDCT). This approach leverages a single-subnetwork architecture, where all three sequences are fed into a single subnetwork (as opposed to three separate networks as in our CT\\xspace). Further, the EDCT leverages the existing GR loss from \\cite{bica2020estimating} and a similar encoder-decoder two-stage training. Details on this EDCT model are in Appendix~\\ref{app:EDCT}. Here, we find that our combination of end-to-end single-stage learning and three-subnetwork CT\\xspace is superior. \n\n\\subsection{Experiments with real-world data}\n\n\\paragraph{Data.} We now demonstrate the applicability of our CT\\xspace to real-world data and, for this, use intensive care unit stays in MIMIC-III \\cite{johnson2016mimic}. We use the same 25 vital signs and 3 static features. We use (diastolic) blood pressure as the outcome and consider two treatments: vasopressors and mechanical ventilation. Details are in Appendix~\\ref{app:real-world-data}. \n\n\\paragraph{Results.} Because we no longer have access to the true counterfactuals, we now report the performance of predicting factual outcomes; see Table~\\ref{tab:mimic-real-sim-all}. All state-of-the-art baselines are outperformed by our CT\\xspace. This demonstrates the superiority of our proposed model. \n\n\\subsection{Ablation study}\n\nWe performed an extensive ablation study (Table~\\ref{tab:ablation-study}) using fully-synthetic data (setting: random trajectories) to confirm the effectiveness of the different model components, the usage of the counterfactual domain confusion loss, and the three-subnetwork architecture as a whole. Thus, we grouped all the ablations into three categories. 
First category (\\textsf{a}) contains model components ablations: replacing trainable relative positional encoding (PE) with non-trainable relative PE, generated as described in Appendix~\\ref{app:abs-pe}); replacing our PE with a trainable absolute PE as in original transformer \\cite{vaswani2017attention}; removing attentional dropout; or removing cross-attention layers for all subnetworks. Second category (\\textsf{b}) has loss-related ablations, such as removing EMA of model weights; switching off adversarial balancing, but not EMA; and replacing our domain confusion loss with gradient reversal (GR) as in \\cite{bica2020estimating}. The last group (\\textsf{c}) tests a single-subnetwork version of CT\\xspace, namely EDCT (see Appendix~\\ref{app:EDCT} for details), with our counterfactual domain confusion (DC) loss loss and GR.\n\nOverall, not a single component alone is crucial, but the combination of novel architecture with three-subnetworks and novel DC loss is critical. This is confirmed for long prediction horizon ($\\tau = 6$), when our proposed CT\\xspace achieves the best performance. Notably, the main insight here is: simply switching the backbone from LSTM to transformer and using gradient reversal, as in \\cite{bica2020estimating}, gives unstable results (see ablation ``EDCT w\/ GR ($\\lambda$ = 1)``). Furthermore, our three-subnetworks CT\\xspace with GR loss performs even worse (see ablation ``w\/ GR ($\\lambda$ = 1)``). Hence, this motivates the usage of counterfactual domain confusion loss with EMA of model weights for our CT\\xspace.\n\nTo further demonstrate the effectiveness of our novel counterfactual domain confusion loss, we perform an additional test based on the fully-synthetic dataset (Table~\\ref{tab:ablation-study-crn}). We use (i)~a CRN with GR as in \\cite{bica2020estimating}. We compare it with (ii)~a CRN trained with our proposed counterfactual domain confusion loss. Evidently, our loss also helps the CRN to achieve a better RMSE. \n\n\\subsection{Conclusion}\n\nFor personalized medicine, estimates of the counterfactual outcomes for patient trajectories are needed. Here, we proposed a novel, state-of-the-art methods: the \\emph{Causal Transformer}\\xspace which is designed to capture complex, long-range patient trajectories. \n\n\\FloatBarrier\n\\printbibliography\n\n\\newpage\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction }\n\\label{sec-intro}\n\\andy{intro}\n\nA quantum system, prepared in a state that does not belong to an eigenvalue\nof the total Hamiltonian, starts to evolve\nquadratically in time \\cite{Beskow,Misra}.\nThis characteristic behavior leads to the so-called quantum Zeno phenomenon,\nnamely the possibility of slowing down the temporal\nevolution (eventually hindering\ntransitions to states different from the initial one) \\cite{strev}.\n\nThe original proposals that aimed at verifying this effect\ninvolved unstable systems and were not amenable to\nexperimental test \\cite{Wilkinson}. 
However, the remarkable idea\n\\cite{Cook} to use a two-level system motivated an interesting\nexperimental test \\cite{Itano},\nrevitalizing a debate on the physical meaning of this\nphenomenon \\cite{Itanodiscuss,PNBR}.\nThere seem to be a certain consensus, nowadays, that the quantum Zeno effect\n(QZE) can be given a dynamical explanation, involving only an\nexplicit Hamiltonian dynamics.\n\nIt is worth emphasizing that the discussion of the last\nfew years mostly stemmed from experimental considerations, related to the\n{\\em practical} possibility of performing experimental tests. Some examples are\nthe interesting issue of ``interaction-free\" measurements \\cite{inn} and the\nneutron-spin tests of the QZE \\cite{PNBR,NNPR}.\nIn practical cases, one cannot neglect the presence of losses and\nimperfections, which obviously conspire against an almost-ideal\nexperimental realization, more so when the total number of ``measurements\"\nincreases above certain theoretical limits.\n\nThe aim of the present paper is to investigate an interesting (and often\noverlooked) feature of what we might call the quantum Zeno dynamics. We shall\nsee that a series of ``measurements\" (von Neumann's projections \\cite{von})\ndoes not necessarily hinder the evolution of the quantum system. On the\ncontrary, the system can evolve away from its initial state, provided it\nremains in the subspace defined by the ``measurement\" itself. This interesting\nfeature is readily understandable in terms of rigorous theorems\n\\cite{Misra}, but it seems to us that it is worth clarifying it by\nanalyzing interesting physical examples. We shall therefore focus our attention\non an experiment involving neutron spin \\cite{PNBR}\nand shall see that in fact this enables\nus to kill two birds with one stone: not only the state of the neutron\nundergoing QZE {\\em will change}, but it will do so in a way that clarifies\nwhy reflection effects may play a substantial role in the experiment analyzed.\n\nIn the neutron-spin example to be considered, the evolution of the\nspin state is hindered when a series of spectral decompositions\n(in Wigner's sense \\cite{Wigner}) is performed on the spin state.\nNo ``observation\" of the spin states, and therefore no projection\n{\\em \\`a la} von Neumann is required, as far as the different\nbranch waves of the wave function cannot interfere after the\nspectral decomposition. Needless to say, the analysis that follows\ncould be performed in terms of a Hamiltonian dynamics, without\nmaking use of projection operators. However, we shall use in this\npaper the von Neumann technique, which will be found convenient\nbecause it sheds light on some remarkable aspects of the Zeno\nphenomenon and helps to pin down the physical implications of some\nmathematical hypotheses with relatively less efforts.\n\nThe paper is organized as follows. We briefly review, in the next\nsection, the seminal theorem for the short-time dynamics of quantum\nsystems, proved by Misra and Sudarshan \\cite{Misra}. Its\napplication to the neutron-spin case is discussed in\nSec.~\\ref{sec-neutr}. In Secs.~\\ref{sec-neutrspin} and\n\\ref{sec-compltrans}, unlike in previous papers \\cite{PNBR,NNPR},\nwe shall incorporate the spatial (1-dimensional, for simplicity)\ndegrees of freedom of the neutron and represent them by an\nadditional quantum number that labels, roughly speaking, the\ndirection of motion of the wave packet. A more realistic analysis\nis presented in Sec.~\\ref{sec-numan}. 
Finally,\nSec.~\\ref{sec-findisc} is devoted to a discussion. Some additional\naspects of our analysis are clarified in the Appendix.\n\n\n\\setcounter{equation}{0}\n\\section{Misra and Sudarshan's theorem }\n\\label{sec-MisSud}\n\\andy{MisSud}\n\nConsider a quantum system Q, whose\nstates belong to the Hilbert space ${\\cal H}$ and whose evolution\nis described by the unitary operator $U(t)=\\exp(-iHt)$, where $H$\nis a semi-bounded Hamiltonian. Let $E$ be a projection operator and\n$E{\\cal H}E={\\cal H}_E$ the subspace spanned by its eigenstates.\nThe initial density matrix $\\rho_0$ of system Q is taken to belong\nto ${\\cal H}_E$. If Q is let to follow its ``undisturbed\"\nevolution, under the action of the Hamiltonian $H$ (i.e., no\nmeasurements are performed in order to get informations about its\nquantum state), the final state at time $T$ reads\n\\andy{noproie}\n\\begin{equation}\n\\rho (T) = U(T) \\rho_0 U^\\dagger (T)\n \\label{eq:noproie}\n\\end{equation}\nand the probability that the system is still in ${\\cal H}_E$ at time $T$\n is\n\\andy{stillun}\n\\begin{equation}\nP(T) = \\mbox{Tr} \\left[ U(T) \\rho_0 U^\\dagger(T) E \\right] .\n\\label{eq:stillun}\n\\end{equation}\nWe call this a ``survival probability:\" it\nis in general smaller than 1, since the Hamiltonian $H$\ninduces transitions out of ${\\cal H}_E$.\nWe shall say that the quantum systems has ``survived\" if it is\nfound to be in ${\\cal H}_E$ by means of a suitable measurement\nprocess \\cite{MScomment}.\n\nAssume that we perform a measurement at time $t$,\nin order to check whether Q has survived. Such a measurement\nis formally represented by the projection operator $E$. By definition,\n\\andy{inprep}\n\\begin{equation}\n\\rho_0 = E \\rho_0 E , \\qquad \\mbox{Tr} [ \\rho_0 E ] = 1 .\n\\label{eq:inprep}\n\\end{equation}\nAfter the measurement, the state of Q changes into\n\\andy{proie}\n\\begin{equation}\n\\rho_0 \\rightarrow \\rho(t) = E U(t) \\rho_0 U^\\dagger(t) E\/P(t),\n\\label{eq:proie}\n\\end{equation}\nwhere\n\\andy{probini}\n\\begin{equation}\nP(t) = \\mbox{Tr} \\left[ U(t) \\rho_0 U^\\dagger(t) E \\right]\n\\label{eq:probini}\n\\end{equation}\nis the probability that the system has survived. [There is, of\ncourse, a probability $1-P$ that the system has not survived (i.e.,\nit has made a transition outside ${\\cal H}_E$) and its state has\nchanged into $\\rho^\\prime(t) = (1-E) U(t) \\rho_0 U^\\dagger(t)\n(1-E)\/(1-P)$. We concentrate henceforth our attention on the\nmeasurement outcome (\\ref{eq:proie})-(\\ref{eq:probini}).] The above\nis the standard Copenhagen interpretation: The measurement is\nconsidered to be instantaneous. The ``quantum Zeno paradox\"\n\\cite{Misra} is the following. We prepare Q in the initial state\n$\\rho_0$ at time 0 and perform a series of $E$-observations at\ntimes $t_k=kT\/N \\; (k=1,\n\\cdots, N)$. 
The state of Q after the above-mentioned $N$\nmeasurements reads\n\\andy{Nproie}\n\\begin{equation}\n\\rho^{(N)}(T) = V_N(T) \\rho_0 V_N^\\dagger(T) , \\qquad\n V_N(T) \\equiv [ E U(T\/N) E ]^N\n\\label{eq:Nproie}\n\\end{equation}\nand the probability to find the system in ${\\cal H}_E$ (``survival\nprobability\") is given by\n\\andy{probNob}\n\\begin{equation}\nP^{(N)}(T) = \\mbox{Tr} \\left[ V_N(T) \\rho_0 V_N^\\dagger(T) \\right].\n\\label{eq:probNob}\n\\end{equation}\nEquations (\\ref{eq:Nproie})-(\\ref{eq:probNob}) display the\n``quantum Zeno effect:\" repeated\nobservations in succession modify the dynamics of the quantum system;\nunder general conditions, if $N$ is sufficiently large, all transitions\noutside ${\\cal H}_E$ are inhibited.\n\nIn order to consider the $N \\rightarrow \\infty$ limit (``continuous\nobservation\"), one needs some mathematical requirements: define\n\\andy{slim}\n\\begin{equation}\n{\\cal V} (T) \\equiv \\lim_{N \\rightarrow \\infty} V_N(T) ,\n \\label{eq:slim}\n\\end{equation}\nprovided the above limit exists in the strong sense.\nThe final state of Q is then\n\\andy{infproie}\n\\begin{equation}\n\\widetilde{\\rho} (T) = {\\cal V}(T) \\rho_0 {\\cal V}^\\dagger (T)\n \\label{eq:infproie}\n\\end{equation}\nand the probability to find the system in ${\\cal H}_E$ is\n\\andy{probinfob}\n\\begin{equation}\n{\\cal P} (T) \\equiv \\lim_{N \\rightarrow \\infty} P^{(N)}(T)\n = \\mbox{Tr} \\left[ {\\cal V}(T) \\rho_0 {\\cal V}^\\dagger(T) \\right].\n\\label{eq:probinfob}\n\\end{equation}\nOne should carefully notice that nothing is said about the final\nstate $\\widetilde{\\rho} (T)$, which depends on the characteristics of the\nmodel investigated and on the {\\em very measurement performed}\n(i.e.\\ on the projection operator $E$, which enters in the\ndefinition of $V_N$). Misra and Sudarshan assumed, on physical\ngrounds, the strong continuity of ${\\cal V}(t)$:\n\\andy{phgr}\n\\begin{equation}\n\\lim_{t \\rightarrow 0^+} {\\cal V}(t) = E\n\\label{eq:phgr}\n\\end{equation}\nand proved that under general conditions the operators ${\\cal\nV}(T)$ (exist for all real $T$ and) form a semigroup labeled by the\ntime parameter $T$. Moreover, ${\\cal V}^\\dagger (T) = {\\cal\nV}(-T)$, so that ${\\cal V}^\\dagger (T) {\\cal V}(T) =E$. This\nimplies, by (\\ref{eq:inprep}), that\n\\andy{probinfu}\n\\begin{equation}\n{\\cal P}(T)=\\mbox{Tr}\\left[\\rho_0{\\cal V}^\\dagger(T){\\cal V}(T)\\right]\n= \\mbox{Tr} \\left[ \\rho_0 E \\right] = 1 .\n\\label{eq:probinfu}\n\\end{equation}\nIf the particle is ``continuously\" observed,\nin order to check whether it has survived inside ${\\cal H}_E$ ,\nit will never make a transition to ${\\cal H}-{\\cal H}_E$.\nThis is the ``quantum Zeno paradox.\"\n\nAn important remark is now in order: the theorem just summarized\n{\\em does not} state that the system {\\em remains}\nin its initial state, after the series of very frequent measurements.\nRather, the system is left in the subspace ${\\cal H}_E$,\ninstead of evolving ``naturally\" in the total\nHilbert space ${\\cal H}$. 
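This point can be illustrated numerically. In the following sketch (a three-level toy model with a purely illustrative Hamiltonian), $E$ projects onto the two-dimensional subspace spanned by the first two levels: as $N$ grows the survival probability tends to one, while the state nevertheless keeps evolving \emph{within} ${\cal H}_E$.
\begin{verbatim}
import numpy as np

H = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])              # illustrative Hamiltonian
E = np.diag([1., 1., 0.])                 # projection onto the first two levels
psi0 = np.array([1., 0., 0.], dtype=complex)
T = np.pi / 2

def repeated_measurements(N):
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * T / N)) @ v.conj().T   # U(T/N)
    step = E @ U @ E                                        # one "measured" step
    psi = psi0.copy()
    for _ in range(N):
        psi = step @ psi
    return psi

for N in (1, 10, 100, 1000):
    psi = repeated_measurements(N)
    print(N, round(np.linalg.norm(psi) ** 2, 4),   # survival probability
             round(abs(psi[0]) ** 2, 4))           # weight on the initial state
# The survival probability approaches 1, yet the weight on the initial state
# does not: the evolution is confined to the subspace, not frozen.
\end{verbatim}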
This subtle\npoint, implied by Eqs.\\ (\\ref{eq:infproie})-(\\ref{eq:probinfu}),\nis often not duely stressed in the literature.\n\nNotice also the conceptual gap between\nEqs.\\ (\\ref{eq:probNob}) and (\\ref{eq:probinfob}): To perform an\nexperiment with $N$ finite is only a practical problem, from the\nphysical point of view.\nOn the other hand, the $N \\rightarrow \\infty$ case\nis physically unattainable, and is rather to be regarded as a\nmathematical limit (although a very interesting one).\nIn this paper, we shall not be concerned with this problem\n(thoroughly investigated in \\cite{NNPR}) and shall\nconsider the $N \\to \\infty$ limit\nfor simplicity. This will make the analysis more transparent.\n\n\\setcounter{equation}{0}\n\\section{Quantum Zeno effect with neutron spin }\n\\label{sec-neutr}\n\\andy{neutr}\n\nThe example we consider is a neutron spin in a magnetic\nfield \\cite{PNBR}. (A photon analog was\nfirst outlined by Peres \\cite{Peres}.)\nWe shall consider two different experiments: Refer to Figures 1(a) and 1(b).\nIn the case schematized in Figure~1(a),\n\\begin{figure\n\\begin{center}\n\\epsfig{file=figure1.eps,width=\\textwidth}\n\\end{center}\n\\caption{(a) Evolution of the neutron spin\nunder the action of a magnetic field. An emitter sends a spin-up neutron\nthrough several regions where a magnetic field $B$ is present.\nThe detector $D_0$ detects a spin-down neutron:\nNo Zeno effect occurs.\n(b) Quantum Zeno effect: the neutron spin is ``monitored\"\nat every step, by selecting and detecting the spin-down component.\n$D_0$ detects a spin-up neutron. }\n\\label{fig:fig1}\n\\end{figure}\nthe neutron interacts with several identical regions in which there is\na static magnetic field $B$, oriented along the $x$-direction.\nWe neglect here any losses and assume that\nthe interaction be given by the Hamiltonian\n\\andy{simpleH}\n\\begin{equation}\nH= \\mu B \\sigma_1,\n\\label{eq:simpleH}\n\\end{equation}\n$\\mu$ being the (modulus of the) neutron magnetic moment,\nand $\\sigma_i \\; (i=1,2,3)$ the Pauli matrices.\nWe denote the spin states of the neutron along the\n$z$-axis by $\\vert \\uparrow\\rangle$ and $\\vert \\downarrow\\rangle$.\n\nLet the initial neutron state be\n$\\rho_{0} = \\rho_{\\uparrow \\uparrow} \\equiv\n\\vert \\uparrow \\rangle\\langle\\uparrow \\vert$.\nThe interaction with the magnetic field provokes a rotation of the\nspin around the $x$-direction. 
After crossing the whole setup, the\nfinal density matrix reads\n\\andy{finalstepT}\n\\begin{equation}\n\\rho (T) \\equiv\ne^{-iHT} \\rho_{0} e^{iHT} =\n\\cos ^2 {\\frac{\\omega T}{2}} \\rho\\sb{\\uparrow \\uparrow}\n + \\sin ^2{\\frac{\\omega T}{2}} \\rho \\sb{\\downarrow \\downarrow}\n - \\frac i2\\sin{\\omega T}(\n \\rho \\sb{\\uparrow \\downarrow}- \\rho \\sb{\\downarrow\\uparrow }),\n\\label{eq:finalstepT}\n\\end{equation}\nwhere $\\omega=2 \\mu B$ and $T$ is the total time spent in the $B$ field.\nNotice that the free evolution is neglected (and so are reflection effects,\nwave-packet spreading, etc.).\nIf $T$ is chosen so as to satisfy the ``matching\" condition\n$\\cos \\omega T\/2 = 0$, we obtain\n\\andy{noZeno}\n\\begin{equation}\n\\rho (T) = \\rho\\sb{\\downarrow \\downarrow}\n\\qquad \\quad \\left( T = (2m+1) \\frac{\\pi}{\\omega}, \\;\\; m \\in {\\bf N} \\right),\n\\label{eq:noZeno}\n\\end{equation}\nso that the probability\nthat the neutron spin is down at time $T$ is\n\\andy{pT}\n\\begin{equation}\nP_{\\downarrow}(T) =1\n\\qquad \\quad \\left( T = (2m+1) \\frac{\\pi}{\\omega}, \\;\\; m \\in {\\bf N} \\right).\n\\label{eq:pT}\n\\end{equation}\nThe above two equations correspond to Eqs.\\ (\\ref{eq:noproie}) and\n(\\ref{eq:stillun}).\nIn our example, $H$ is such that if the\nsystem is initially prepared in the up state, it will evolve to\nthe down state after time $T$.\nNotice that,\nwithin our approximations, the experimental setup\ndescribed in Figure~1(a) is equivalent to the situation where a magnetic field\n$B$ is contained in a single region of space.\n\nLet us now modify the experiment just described by inserting at every step\na device able to select and detect one component [say the down\n($\\downarrow$) one] of the neutron spin. This can be accomplished by\na magnetic mirror M and a detector D.\nThe former acts as a ^^ ^^ decomposer,\" by splitting a\nneutron wave with indefinite spin (a superposed state of up\nand down spins) into two branch waves\neach of which is in a definite spin state\n(up {\\em or} down) along the $z$-axis. The down state is then forwarded to a\ndetector, as shown in Figure~1(b).\nThe magnetic mirror yields a spectral decomposition \\cite{Wigner}\nwith respect to the spin\nstates, and can be compared to the inhomogeneous magnetic field in a\ntypical Stern-Gerlach experiment.\n\nWe choose the same initial state for Q as in the previous experiment\n[Figure 1(a)]. The action of M+D is represented by\nthe operator $E \\equiv \\rho\\sb{\\uparrow \\uparrow} $\n[remember that we follow the evolution along the horizontal direction,\ni.e.\\ the direction the spin-up neutron travels,\nin Figure~1(b)], so that\nif the process is repeated $N$ times, like in Figure~1(b), we obtain\n\\andy{yesZeno}\n\\begin{equation}\n\\rho^{(N)}(T) = V_N(T) \\rho_0 V_N^\\dagger(T)\n = \\left( \\cos ^2 {\\frac{\\omega t}{2}} \\right)^N\n \\rho\\sb{\\uparrow \\uparrow}\n = \\left( \\cos ^2 {\\frac{\\pi}{2N}} \\right)^N\n \\rho\\sb{\\uparrow \\uparrow} ,\n\\label{eq:yesZeno}\n\\end{equation}\nwhere the ``matching\" condition for $T=Nt$ [see Eq.\\\n(\\ref{eq:noZeno})] has been required again. 
The probability that\nthe neutron spin is up at time $T$, if $N$ observations have been\nmade at time intervals $t \\; (Nt=T)$, is\n\\andy{pZT}\n\\begin{equation}\nP_{\\uparrow}^{(N)}(T) =\n \\left( \\cos ^2 {\\frac{\\pi}{2N}} \\right)^N .\n\\label{eq:pZT}\n\\end{equation}\n\nThis discloses the occurrence of a QZE:\nIndeed, $P_\\uparrow^{(N)}(T)> P_\\uparrow^{(N-1)}(T)$ for $N\\geq 2$,\nso that the evolution is ``slowed down\" as $N$ increases. Moreover,\nin the limit of infinitely many observations\n\\andy{NZeno}\n\\begin{equation}\n\\rho^{(N)}(T) \\stackrel{N \\rightarrow \\infty}{\\longrightarrow}\n\\widetilde{\\rho}(T) = \\rho\\sb{\\uparrow \\uparrow}\n\\label{eq:NZeno}\n\\end{equation}\nand\n\\andy{probfr}\n\\begin{equation}\n{\\cal P}_\\uparrow (T) \\equiv \\lim_{N \\rightarrow \\infty}\nP_\\uparrow^{(N)}(T) = 1.\n\\label{eq:probfr}\n\\end{equation}\nFrequent observations ``freeze\" the neutron spin in its initial state, by\ninhibiting ($N \\geq 2$) and eventually hindering ($N \\rightarrow \\infty$)\ntransitions to other states.\nNotice the difference from Eqs.~(\\ref{eq:noZeno}) and (\\ref{eq:pT}):\nThe situation is completely reversed.\n\n\n\n\\setcounter{equation}{0}\n\\section{The spatial degrees of freedom}\n\\label{sec-neutrspin}\n\\andy{neutrspin}\n\nIn the analysis of the previous section only the spin degrees\nof freedom were taken into account.\nNo losses were considered, even though\ntheir importance was already mentioned in \\cite{PNBR,NNPR}.\nIn spite of such a simplification, the model yields\nphysical insight into the Zeno phenomenon, and has the nice\nadvantage of being solvable.\n\nWe shall now consider a more detailed description. The practical\nrealizability of this experiment has already been discussed, with\nparticular attention to the $N \\rightarrow \\infty$ limit and\nvarious possible losses \\cite{NNPR}. One source of losses is the\noccurrence of reflections at the boundaries of the interaction\nregion and\/or at the spectral decomposition step. A careful\nestimate of such effects would require a dynamical analysis of the\nmotion of the neutron wave packet as it crosses the whole\ninteraction region (magnetic-field regions followed by field-free\nregions containing each a magnetic mirror M that performs the\n``measurement\"). However, it is not an easy task to include the\nspatial degrees of freedom of the neutron in the analysis; instead,\nwe shall adopt a simplified description of the system, which\npreserves most of the essential features and for which an explicit\nsolution can still be obtained. 
It turns out that the inclusion of\nthe spatial degrees of freedom in the evolution of the spin state\ncan result in completely different situations from the ideal case,\nwhich in turn clarifies the importance of losses in actual\nexperiments and, at the same time, sheds new light on the Zeno\nphenomenon itself.\n\nLet us now try to incorporate the other degrees of\nfreedom of the neutron state in our description.\nLet our state space be the 4-dimensional Hilbert space\n${\\cal H}_p \\otimes {\\cal H}_s$, where\n${\\cal H}_p = \\{ |R \\rangle, |L \\rangle \\}$ and\n${\\cal H}_s = \\{ |\\uparrow \\rangle, |\\downarrow \\rangle \\}$\nare 2-dimensional Hilbert spaces, with $R (L)$ representing a particle\ntraveling towards the right (left) direction along the $y$-axis,\nand $\\uparrow (\\downarrow)$ representing spin up (down) along the $z$-axis.\nWe shall set, in the respective Hilbert spaces,\n\\andy{settings}\n\\begin{equation}\n |R \\rangle = \\coltwovector10} \\def\\downn{\\coltwovector01, \\quad |L \\rangle = \\downn; \\qua\n |\\uparrow \\rangle = \\coltwovector10} \\def\\downn{\\coltwovector01, \\quad |\\downarrow \\rangle = \\downn,\n\\label{eq:settings}\n\\end{equation}\nso that, for example, the state $|R \\downarrow \\rangle$\n represents a spin-down particle traveling towards the right\ndirection ($+y$).\nAlso, for the sake of simplicity, we shall work with vectors, rather than\ndensity matrices (the extension is straightforward).\n\nIn this extended Hilbert space the first\nPauli matrix $\\sigma_1$ acts only on ${\\cal H}_s$ as a spin flipper,\n$\\sigma_1 |\\uparrow \\rangle = |\\downarrow \\rangle$ and\n$\\sigma_1 |\\downarrow \\rangle = |\\uparrow \\rangle$, while\nanother first Pauli matrix $\\tau_1$ acts only on ${\\cal H}_p$ as a\ndirection-reversal operator,\n$\\tau_1|R\\rangle=|L\\rangle$ and $\\tau_1|L\\rangle=|R\\rangle$.\nTo investigate the effects of reflection\nwe assume that the interaction be described by the Hamiltonian\n\\andy{modelH}\n\\begin{equation}\nH= g (1 + \\alpha \\tau_1)(1 + \\beta \\sigma_1),\n\\label{eq:modelH}\n\\end{equation}\nwhere $g, \\alpha$ and $\\beta$ are real constants.\nBy varying these parameters and the total\ninteraction time $T$, the above Hamiltonian can describe various\nsituations in which a neutron, impinging on a $B$-field applied\nalong $x$-axis, undergoes transmission\/reflection and\/or spin-flip\neffects.\n\nIt is worth pointing out that the above Hamiltonian\nincorporates the spatial degrees of freedom in an abstract way:\nOnly the 1-dimensional motion of the neutron, represented by $L$ and $R$,\nhas been taken into account and all other effects (like\nfor instance the spread of the wave packet) are neglected.\nThis amounts to consider a trivial free Hamiltonian, which can be\ndropped out from the outset.\nThis may seem too drastic an approximation;\nhowever, it is not as rough as one may imagine. In fact,\nover the distances involved (a neutron interferometer), the\nspread of the wave packet can always be practically neglected as a first\napproximation.\nThe introduction of the above two degrees of freedom $L$ and $R$\njust corresponds to such a situation and the simplicity\nof the model still enables us to obtain explicit solutions for the\ndynamical evolution. This can be a great advantage.\nA realization of such a quantum Zeno effect experiment is in progress \nat the pulsed ISIS neutron spallation source. 
\nNeutrons which are trapped between perfect crystal plates pass on each \nof their 2000 trajectories through a flipper device which cause an adjustable \nspin rotation. \nFlipped neutrons immediately leave the storage system where they can be \neasily detected (see e.g.\\ \\cite{Jerica}).\n\nSince the spin flipper $\\sigma_1$ and the direction-reversal\noperator $\\tau_1$ commute with each other and with the Hamiltonian\n(\\ref{eq:modelH}), the energy levels of the system governed by this\nHamiltonian are obviously $E_{\\tau\\sigma} \\equiv g\n(1+\\tau\\alpha)(1+\\sigma\\beta)$ with $\\tau,\\sigma=\\pm$. Moreover,\nthe evolution of the system has the following factorized structure\n\\begin{equation}\\label{fs}\ne^{-iHT}=e^{-igT}e^{-i\\alpha gT\\tau_1}e^{-i\\beta gT\\sigma_1}\ne^{-i\\alpha\\beta gT\\tau_1\\sigma_1}.\n\\end{equation}\nIf a neutron is initially prepared in state $|R \\uparrow \\rangle$,\nthe evolution operator is explicitly expressed as\n\\andy{e-iHT}\n\\begin{equation}\ne^{-iHT}=t_\\uparrow\n +r_\\uparrow\\tau_1\n +t_\\downarrow\\sigma_1\n +r_\\downarrow\\tau_1\\sigma_1,\n\\label{eq:e-iHT}\n\\end{equation}\nwhere $t_\\uparrow, t_\\downarrow, r_\\uparrow$ and $r_\\downarrow$ are the\ntransmission\/reflection coefficients of a neutron, whose spin is\nflipped\/not flipped after interacting with a constant magnetic\nfield $B$, applied along the $x$-direction in a finite region of\nspace (square potential, stationary state problem). See Figure\n\\ref{fig:ssprob}.\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=figure2.eps,width=\\textwidth}\n\\end{center}\n\\caption{Transmission and reflection coefficients for a neutron\ninitially prepared in the $|R \\uparrow \\rangle$ state. }\n\\label{fig:ssprob}\n\\end{figure}\nThese coefficients are connected with the energy levels by the\nfollowing relation\n\\andy{corrs}:\n\\begin{equation}\n\\pmatrix{t_\\uparrow & t_\\downarrow \\cr r_\\uparrow & r_\\downarrow}=\\frac 14\n\\pmatrix{1 & 1 \\cr 1 & -1}\n\\pmatrix{e^{-iE_{++}T} & e^{-iE_{+-}T} \\cr e^{-iE_{-+}T} & e^{-iE_{--}T}}\n\\pmatrix{1 & 1 \\cr 1 & -1}.\n\\label{eq:corrs}\n\\end{equation}\n\nBy specifying the values of the parameters $g, \\alpha, \\beta$ and\nthe total interaction time $T$, one univocally determines\n$t_\\uparrow, t_\\downarrow, r_\\uparrow$ and $r_\\downarrow$.\nDirect physical meaning can therefore be attributed to the constants\n$g, \\alpha$ and $\\beta$ in (\\ref{eq:modelH}) by comparison with the\ntransmission\/reflection coefficients. For example, in order to mimic\na realistic experimental setup with given values of\n$t_{\\uparrow\\downarrow}, r_{\\uparrow\\downarrow}$, it is enough to obtain the\nvalues of $g, \\alpha$ and $\\beta$ from (\\ref{eq:corrs}) and plug them\nin the Hamiltonian (\\ref{eq:modelH}).\nThe model could in principle be further improved by making the constant $g$\nenergy-dependent. We will consider a more realistic\nHamiltonian in Sec.~\\ref{sec-numan}.\n\n\n\\setcounter{equation}{0}\n\\section{The ideal case of complete transmission}\n\\label{sec-compltrans}\n\\andy{compltrans}\n\nIn the following discussions we always assume that our initial\nstate is $|R\\uparrow \\rangle $, i.e., a right-going spin-up neutron, and\nconsider, for definiteness, the case of total transmission with\nspin flipped, i.e., $|t_\\downarrow|^2=1$, {\\em when no measurements are\nperformed}. 
Of course, this has to be considered as an idealized\nsituation, since a spin rotation can only take place when there is\nan interaction potential (proportional to the intensity of the\nmagnetic field) which necessarily produces reflection effects (with\nthe only exception of plane waves). Stated differently, when the\nspatial degrees of freedom are taken into account in the scattering\nproblem off a spin-flipping potential, complete transmission is\nimpossible to achieve: There are always reflected waves. Our model\nHamiltonian (\\ref{eq:modelH}) must therefore be regarded as a\nsimple caricature of the physical system we are analyzing. Wave\npacket effects will be discussed in Sec.\\ \\ref{sec-numan}.\n\nTo obtain a total transmission with spin flipped, the evolution\noperator should have the form $e^{-iHT}\\propto\\sigma_1$, which is\nequivalent to either\n\\andy{either.or}\n\\begin{eqnarray}\n&e^{-i\\alpha gT\\tau_1}\\propto\\tau_1,\\quad e^{-i\\beta gT\\sigma_1}\n\\propto 1,\\quad e^{-i\\alpha\\beta gT\\tau_1\\sigma_1}\\propto\n\\tau_1\\sigma_1,&\n\\label{eq:either} \\\\\n\\noalign{\\noindent or}\n&e^{-i\\alpha gT\\tau_1}\\propto 1,\\quad e^{-i\\beta gT\\sigma_1}\n\\propto \\sigma_1,\\quad e^{-i\\alpha\\beta gT\\tau_1\\sigma_1}\\propto 1.&\n\\label{eq:or}\n\\end{eqnarray}\nThat is,\n\\andy{condi1,2}\n\\begin{eqnarray}\n{\\rm Case\\ i)}\\hfill&\\cos\\alpha gT=\\sin\\beta gT=\\cos\\alpha\\beta gT=0,&\\hfill\n\\label{eq:condi1}\n\\\\\n\\noalign{\\noindent or}\n{\\rm Case\\ ii)}\\hfill&\\sin\\alpha gT=\\cos\\beta gT=\\sin\\alpha\\beta gT=0.&\\hfill\n\\label{eq:condi2}\n\\end{eqnarray}\n(All other cases, such as total reflection with\/without spin-flip\ncan be analyzed in a similar way.)\nIn both cases,\nthe evolution is readily computed:\n\\andy{evol0}\n\\begin{equation}\ne^{-iHT} |R \\uparrow \\rangle = \\mbox{phase factor} \\times |R \\downarrow \\rangle .\n\\label{eq:evol0}\n\\end{equation}\nThe boundary conditions are such that the neutron is\ntransmitted and its spin flipped with unit\nprobability. For the experimental realization, see \\cite{spinflip}.\nThis is the situation outlined in Figure \\ref{fig:fig1}(a).\n\nWe shall now focus on some interesting\ncases, which illustrate some definite aspects of the QZE.\\ \\\nLet us see, in particular, how the evolution of the quantum state of the\nneutron is modified by choosing different projectors (corresponding to\ndifferent ``measurements\").\n\n\n\\subsection{Direction-insensitive spin measurement}\n\\label{sec-case1}\n\\andy{case1}\n\nWe perform now a series of measurements, in order to check whether the\nneutron spin is up.\nLet us call this type of measurement a ``direction-insensitive spin\nmeasurement,\" for reasons that will become clear later.\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=figure3.eps,height=10cm}\n\\end{center}\n\\caption{(a) Direction-insensitive spin measurement.\n(b) Direction-sensitive spin measurement.}\n\\label{fig:figins}\n\\end{figure}\nRefer to Figure \\ref{fig:figins}(a).\nThe projection operator corresponding to this measurement is\n\\andy{E1}\n\\begin{equation}\nE_1=1-|R\\downarrow\\rangle\\langle R\\downarrow|-|L\\downarrow\\rangle\\langle L\\downarrow|={1\\over2}(1+\\sigma_3),\n\\label{eq:E1}\n\\end{equation}\nthat is, the spin-down components are projected out regardless of\nthe direction of propagation of the neutron. 
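The effect of repeating this measurement can first be checked numerically. The following sketch builds the operators of Eq.~(\ref{eq:modelH}) and the projection (\ref{eq:E1}) in the basis $|R\uparrow\rangle, |R\downarrow\rangle, |L\uparrow\rangle, |L\downarrow\rangle$ and iterates $E_1 e^{-iHT/N} E_1$; the parameter values anticipate the example discussed below and are purely illustrative.
\begin{verbatim}
import numpy as np

sx, I2 = np.array([[0., 1.], [1., 0.]]), np.eye(2)
sigma1 = np.kron(I2, sx)                   # spin flipper
tau1 = np.kron(sx, I2)                     # direction reversal
E1 = np.kron(I2, np.diag([1., 0.]))        # projects out both spin-down components

def zeno_evolution(gT, alpha, beta, N):
    HT = gT * (np.eye(4) + alpha * tau1) @ (np.eye(4) + beta * sigma1)   # H*T
    w, v = np.linalg.eigh(HT)
    U = v @ np.diag(np.exp(-1j * w / N)) @ v.conj().T                    # exp(-i H T/N)
    psi = np.array([1., 0., 0., 0.], dtype=complex)                      # |R up>
    for _ in range(N):
        psi = E1 @ (U @ (E1 @ psi))
    return psi

# gT = pi, alpha = -1/2, beta = -1 (the example discussed below)
for N in (1, 10, 1000):
    print(N, np.round(zeno_evolution(np.pi, -0.5, -1.0, N), 3))
# N = 1: the spin is completely flipped and the single projection removes the wave.
# Large N: the state tends to -i |L up>, i.e. the spin is frozen but the
# neutron is reflected.
\end{verbatim}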
In this case, after\nfrequent measurements $E_1$ performed at time intervals $T\/N$, the\nevolution operator in Eq.~(\\ref{eq:Nproie}) reads\n\\andy{VN1a}\n\\begin{equation}\nV_{N}(T)\n=\\left( E_1e^{-iHT\/N}E_1 \\right)^N\n=E_1(t_\\uparrow+r_\\uparrow\\tau_1)^N ,\n\\label{eq:VN1a}\n\\end{equation}\nwhere $t_\\uparrow\\sim1-igT\/N$ and $r_\\uparrow\\sim-i\\alpha gT\/N$ for large $N$\n[see Eq.\\ (\\ref{eq:e-iHT})]. Taking the limit, one obtains the\nfollowing expression for the QZE evolution operator defined in Eq.\\\n(\\ref{eq:slim}):\n\\andy{V1(T)a}\n\\begin{equation}\n{\\cal V}(T)\n=\\lim_{N\\to\\infty}V_N(T)\n =e^{-igT}E_1e^{-i\\alpha gT\\tau_1}.\n\\label{eq:V1(T)a}\n\\end{equation}\n\nInteresting physical situations can now be investigated. Choose,\nfor instance, $gT= \\pi, \\alpha= -1\/2, \\beta= -1$, which belongs to\nCase i) in Eq.\\ (\\ref{eq:condi1}) [so that, without measurements,\nthe neutron is totally transmitted with its spin flipped, as shown\nin Eq.\\ (\\ref{eq:evol0})]. When the direction-insensitive\nmeasurements are continuously performed, the QZE evolution is\n${\\cal V}(T)=-iE_1\\tau_1$ and the final state is\n\\andy{fst0}\n\\begin{equation}\n{\\cal V}(T)|R\\uparrow\\rangle=-i|L\\uparrow\\rangle,\n\\label{eq:fst0}\n\\end{equation}\ni.e., {\\em the neutron spin is not flipped, but the neutron itself\nis totally reflected}. This clearly shows that reflection\n``losses\" can be very important; as a matter of fact, reflection effects\n{\\em dominate}, in this example. Notice that this is always an example\nof QZE: The projection operator $E_1$ in\n(\\ref{eq:E1}) {\\em prevents} the spin from flipping.\nThe point here is, however, that $E_1$ is\nnot ``tailored\" so as to prevent the wave function from being reflected!\n\n\\subsection{Another particular case: seminal model}\n\\label{sec-semmod}\n\\andy{semmod}\n\nLet us now focus on a model corresponding to\nCase ii) in Eq.~(\\ref{eq:condi2}). The choice of parameters, e.g.\\\n$gT=\\pi\/2, \\alpha=2n, \\beta=-1$,\nobviously fulfills these conditions for\narbitrary integer $n$.\nTotal transmission with spin flipped occurs again when\nno measurement is performed.\n\nWhen direction-insensitive spin measurements, described by\nprojections $E_1$, are performed at time intervals $T\/N$ and in the\n$N\\to \\infty$ limit, the QZE evolution operator in\nEq.~(\\ref{eq:V1(T)a}) becomes simply ${\\cal V}_1(T)=-i(-1)^nE_1$\nand the final state is\n\\begin{equation}\\label{eq:fst00a}\n{\\cal V}_1(T)|R\\uparrow\\rangle=-i(-1)^n|R\\uparrow\\rangle,\n\\end{equation}\nso that the ``usual\" QZE is obtained. When $n=0$ this is our\nseminal model \\cite{PNBR}, reviewed in Sec.~\\ref{sec-neutr}.\nObviously, the case $n=0$ is not rich enough to yield information\nabout reflection effects. In the following subsection the case of\nnonzero $n$ will be discussed.\n\n\\subsection{Direction-sensitive spin measurements}\n\\label{sec-case2}\n\\andy{case2}\n\nWe now consider a different type of spin measurement.\nLet the measurement be characterized by the\nfollowing projection operator\n\\andy{E2}\n\\begin{equation}\nE_2=1-|R\\downarrow\\rangle\\langle R\\downarrow|,\n\\label{eq:E2}\n\\end{equation}\nwhich projects out those neutrons that are transmitted with their\nspin flipped. Notice that spin-down neutrons that are reflected are\nnot projected out by $E_2$: for this reason we call this a\n``direction-sensitive\" spin measurement. Refer to\nFigure~\\ref{fig:figins}(b). 
Even though the action of this\nprojection is not easy to implement experimentally, this example\nclearly illustrates some interesting issues related to the\nMisra--Sudarshan theorem. We shall see that the action of the\nprojector $E_2$ will yield a very interesting result. For large\n$N$, the evolution is given by\n\\begin{equation}\nV_{2,N}(T)=\n\\left(E_2e^{-iHT\/N}E_2\\right)^N =\ne^{-igT}\\left(1-i\\frac{gT}{N} Z\\right)^NE_2+O(1\/N),\n\\end{equation}\nwhere $Z\\equiv E_2(H\/g-1)E_2$.\nThe QZE evolution is given by the limit\n\\begin{equation}\\label{eq:ev2}\n{\\cal V}_2(T)=\\lim_{N\\to\\infty}V_{2,N}(T)=e^{-igT}e^{-igTZ}E_2 .\n\\end{equation}\nTo compute its effect on the initial state $|R\\uparrow\\rangle$,\nwe note that, when acting on states $|R\\uparrow\\rangle, |L\\uparrow\\rangle$\nand $|L\\downarrow\\rangle$, which span the ``survival\" subspace,\nthe $Z$ operator behaves as\n\\begin{equation}\nZ \\left(\\matrix{|R\\uparrow\\rangle\\cr |L\\uparrow\\rangle\\cr\n|L\\downarrow\\rangle}\\right)\n=\\left(\\matrix{0&\\alpha&\\alpha\\beta\\cr \\alpha&0&\\beta\\cr\n\\alpha\\beta&\\beta&0}\\right)\n\\left(\\matrix{|R\\uparrow\\rangle\\cr |L\\uparrow\\rangle\\cr\n|L\\downarrow\\rangle}\\right).\n\\end{equation}\nLet us choose for definiteness $\\beta=-1$, so that\n\\begin{equation}\n(Z-1\/2)^2|R\\uparrow\\rangle=\\theta^2|R\\uparrow\\rangle,\n\\label{eq:Z2Rup}\n\\end{equation}\nwith $\\theta=\\sqrt{8\\alpha^{2}+1}\/2$.\nThus the final state can be readily obtained\n\\andy{VTRup2}\n\\begin{equation}\n{\\cal V}_2(T)|R\\uparrow\\rangle\n=\ne^{-3igT\/2}\\Biggl[\n\\left(\\cos gT\\theta+\\frac{i}{2\\theta}\\sin gT\\theta\n \\right)|R\\uparrow\\rangle\n+\\frac{i\\alpha}{\\theta}\\sin gT\\theta\n\\Bigl(|L\\downarrow\\rangle-|L\\uparrow\\rangle\\Bigr)\\Biggr].\n\\label{eq:VTRup2}\n\\end{equation}\nTherefore, for a continuous direction-sensitive\n(namely, $E_2$) measurement, the probability of\nfinding the initial state $|R\\uparrow\\rangle$\nis not unity. Part of the wave function will be\nreflected, although the neutron would have been totally transmitted\nwithout measurement [see (\\ref{eq:evol0})] or with an ``$E_1$-measurement\"\n[see (\\ref{eq:fst00a})].\n\nClearly, the action of the projector $E_2$ yields a completely\ndifferent result from that of $E_1$, in (\\ref{eq:fst00a}). This is\nobvious and easy to understand: the state (\\ref{eq:VTRup2}) belongs\nto the subspace of the ``survived\" states, {\\em according to the\nprojection $E_2$.} Notice also that the probability loss due the\nmeasurements is zero, in the limit, because the QZE evolution\n(\\ref{eq:ev2}) is unitary within the subspace of the \\lq\\lq\nsurvived\" states.\n\n\\setcounter{equation}{0}\n\\section{A more realistic model }\n\\label{sec-numan}\n\\andy{numan}\n\nLet us now introduce a more realistic (albeit static) model. Such a\nmodel can be shown to be derivable from a Hamiltonian very similar\nto the one studied in the previous sections by a suitable\nidentification of parameters (see Appendix A). The effect of\nreflections in the QZE will now be tackled by directly solving a\nstationary Schr\\\"odinger equation, which will be set up as follows.\n\nLet a neutron with energy $E=k^2\/2m$ and spin up ($+z$ direction),\nmoving along the $+y$ direction, impinge on $N$ regions of constant\nmagnetic field pointing to the $x$ direction, among which there are\n$N-1$ field-free regions. The thickness of a single piece of\nmagnetic field is $a$ and the field-free region has size $b$. 
The\nconfiguration is shown in Figure \\ref{fig:figYu1}.\n\\begin{figure}\n\\epsfig{file=figure4.eps,width=\\textwidth}\n\\caption{Spin-up neutron moving along the $+y$ direction with energy\n$E$. The magnetic field points to the $+x$ direction and is zero in\nthe region between $y_n^\\prime$ and $y_n$, in which the\nmeasurements will be made. In these field-free regions the wave\nfunctions are $|\\psi_n^\\prime\\rangle$ before measurement and\n$|\\psi_n\\rangle$ after the measurement.}\n\\label{fig:figYu1}\n\\end{figure}\nThus we have the one-dimensional scattering problem of a neutron\noff a piecewise constant magnetic field with total thickness\n$D=Na$. The stationary Schr\\\"odinger equation is described by the\nHamiltonian\n\\andy{realH}\n\\begin{equation}\nH_{\\rm Z}=\\frac{p_y^2}{2m}+\\mu B\\sigma_1\\Omega(y),\n\\label{eq:realH}\n\\end{equation}\nwhere $\\mu$ is the modulus of the neutron magnetic moment, $B$\nthe strength of the magnetic field and\n\\begin{equation}\n\\Omega(y)\n=\\left\\{\\matrix{0&\\mbox{for}&y<0,\\ y_n^\\prime<y<y_n,\\ y>y_N^\\prime,\\cr\n1&\\mbox{otherwise.}&\\cr}\\right.\n\\end{equation}\nIn the region $y<0$ the wave function is the sum of the incident and\nreflected waves,\n\\begin{equation}\n|\\psi_0\\rangle= e^{iky}|\\uparrow\\rangle\n+e^{-iky}[r_\\uparrow|\\uparrow\\rangle+r_\\downarrow |\\downarrow\\rangle],\n\\end{equation}\nwhile in the region $y>y_N^\\prime$ it reads\n\\begin{equation}\\label{w2}\n|\\psi_N\\rangle= e^{iky}[t_\\uparrow|\\uparrow\\rangle+t_\\downarrow |\\downarrow\\rangle].\n\\end{equation}\nSince $[\\sigma_1,H_{\\rm Z}]=0$, it is convenient to work with the\nbasis $|\\pm\\rangle=(|\\uparrow\\rangle\\pm|\\downarrow\\rangle)\/\\sqrt 2$,\ni.e., the eigenstates of $\\sigma_1$ belonging to eigenvalues $\\pm 1$.\nFor later use we denote $r_\\pm=r_\\uparrow\\pm r_\\downarrow$ and\n$t_\\pm=t_\\uparrow\\pm t_\\downarrow$.\n\nIn the field-free region, before the point $y=m_n$ where the $n$th\nmeasurement is assumed to take place, $y_n^\\prime<y<m_n$, the wave\nfunction is $|\\psi_n^\\prime\\rangle$. For $N>1$, the transmission amplitude of the same neutron passing\nthrough a magnetic field with a lattice-like structure as depicted\nin Figure 4 can be written as\n\\begin{equation}\nt_\\pm\n=\\frac {e^{-iky_N}t_{a\\pm}}{e^{-iky_1}[N]_\\pm -[N-1]_\\pm t_{a\\pm}}.\n\\end{equation}\nFor a neutron in its spin-up state $|\\uparrow\\rangle$, the transmission\namplitude with spin unflipped is then $t_\\uparrow=(t_++t_-)\/2$ and that\nwith spin flipped is $t_\\downarrow=(t_+-t_-)\/2$. As a result, for a spin-up\nneutron to go through a constant potential of width $y_N=D=Na$\nwithout reflection and with spin flipped, i.e., $|t_\\downarrow|=1$, one\nshould require $k_\\pm D=n_\\pm\\pi$ or\n\\begin{equation}\\label{ttc}\nE=\\frac{\\pi^2(n_+^2+n_-^2)}{4mD^2},\\quad\n\\mu B=\\frac{\\pi^2(n_+^2-n_-^2)}{4m D^2},\n\\end{equation}\nwith $n_\\pm$ two arbitrary integers, their difference $n_+-n_-$\nbeing an odd number. In this case of complete transmission\n$|t_\\downarrow|=1$, the energy $E$ must be larger than the potential $\\mu\nB$. The rest of the analysis above, however, is valid also when the\nenergy is less than the potential.\n\nNow we consider the case where $N$ tends to infinity and the\nmagnetic field possesses a periodic lattice structure. The relation\n(\\ref{eq:rel1}) still holds and in order to preserve the\ntranslational symmetry along the $y$ axis (that is, to keep the\nHamiltonian invariant under a translation of $(a+b)$ along the\n$y$-axis), one should have\n$|q_\\pm|=1$ owing to the Bloch theorem.\nEquivalently, the trace of the transfer matrix $e^{ikb\\tau_3}M_\\pm$\nas given in Eq.~(\\ref{eq:trace}) should be less than one. 
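As an aside, the total-transmission condition (\ref{ttc}) is easily checked numerically. The short sketch below (in Python) uses the textbook transmission amplitude through a single square potential, an assumption equivalent to the transfer-matrix description; since only the moduli $|t_\uparrow|$ and $|t_\downarrow|$ are compared, the overall phase and the assignment of the two $\sigma_1$ channels to $\pm\mu B$ are immaterial.
\begin{verbatim}
# Check of the condition (ttc): |t_down| ~ 1 and |t_up| ~ 0 (total transmission
# with spin flipped).  The single-channel amplitude is the textbook square-barrier
# result, quoted up to the common phase exp(-i k D), which drops out of the moduli.
import numpy as np

def t_channel(E, V, D, m=1.0):
    k  = np.sqrt(2*m*E)
    kp = np.lib.scimath.sqrt(2*m*(E - V))     # may be imaginary below the barrier
    return 1.0/(np.cos(kp*D) - 0.5j*(k**2 + kp**2)/(k*kp)*np.sin(kp*D))

m, D = 1.0, 1.0
n_plus, n_minus = 3, 2                         # n_+ - n_- odd, as required
E   = np.pi**2*(n_plus**2 + n_minus**2)/(4*m*D**2)
muB = np.pi**2*(n_plus**2 - n_minus**2)/(4*m*D**2)
ta, tb = t_channel(E, +muB, D, m), t_channel(E, -muB, D, m)
print(abs((ta + tb)/2), abs((ta - tb)/2))      # ~ 0 and ~ 1
\end{verbatim}
We now return to the trace of the transfer matrix $e^{ikb\tau_3}M_\pm$ for the periodic lattice.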
This\ndetermines the energy band of the system: those energies that make\nthe absolute value of this trace greater than 1 will be forbidden,\nbecause for these energies $|q_\\pm|$ or $|q_\\pm|^{-1}$ becomes\nlarger than one and $[N]_\\pm$ tends exponentially to infinity when\n$N$ approaches infinity. For large $N$, even if there is no\nperiodical structure, there is always some $k$ that makes this\ntrace greater than one (e.g. $kb+k_\\pm a=l\\pi$). Therefore the\ntransmission probability will tend to zero exponentially when $N$\nbecomes large, even though the energy may be very large relative to\nthe potential. This shows that reflection effects in presence of a\nlattice structure are very important; as we shall see, this feature\nis preserved even when projection operators are interspersed in the\nlattice.\n\n\\subsection{Direction-insensitive projections}\n\\label{sec-insensitive}\n\\andy{insensitive}\n\nWe consider now the second situation, when direction-insensitive\nmeasurements are performed at points $m_n$s. By this kind of\nmeasurement, the spin-down components are projected out and the\nspin-up components evolve freely regardless whether the neutron is\ntravelling right or left.\n\nThe boundary conditions imposed by this kind of measurement at point\n$m_n$ for the wave function $|\\psi_n\\rangle$ and $|\\psi^\\prime_n\\rangle$\nin the field-free region are expressed as\n\\begin{equation}\nR_{n,\\downarrow}=L^\\prime_{n,\\downarrow}=0,\n\\quad \\pmatrix{R_{n,\\uparrow}^\\prime \\cr L_{n,\\uparrow}^\\prime \\cr}\n=e^{-ikb\\tau_3}\\pmatrix{R_{n,\\uparrow}\\cr L_{n,\\uparrow} \\cr},\n\\end{equation}\nwhere $R_{n,\\uparrow}=(R_{n,+}+R_{n,-})\/2$ and\n$R_{n,\\downarrow}=(R_{n,+}-R_{n,-})\/2$ for right-going components and\nsimilar expressions for the left-going and primed components.\nTherefore, application of Eq.~(\\ref{ev1}) $N$ times yields\n\\begin{equation}\\label{ev2}\n\\pmatrix{R_{N,\\uparrow}\\cr L_{N,\\uparrow} \\cr}\n=(e^{ikb\\tau_3}M_1)^N\\pmatrix{R_{0,\\uparrow}\\cr L_{0,\\uparrow} \\cr},\n\\end{equation}\nwhere the $2\\times 2$ transfer matrix $M_1$ has the\nfollowing matrix elements\n\\begin{equation}\n(M_1)_{ij}\n=\\bar{M}_{ij}-\\Delta M_{i2}\\Delta M_{2j}\/\\bar{M}_{22}\n\\end{equation}\nwith $\\bar M=(M_++M_-)\/2$ and $\\Delta M=(M_+-M_-)\/2$.\n\nNow we take the limit as required by a ``continuous\" measurement,\ni.e., $N\\to\\infty$, $a\\to0$ keeping $Na=D$ finite and $Nb\\to 0$. By\nthe definition (\\ref{eq:Mpm}) of the transfer matrix, we have the\nsmall-$a$ expansions\n\\begin{equation}\n\\bar M=1+ika\\tau_3+O(a^2),\\quad\n\\Delta M= \\zeta ka(\\tau_2-i\\tau_3)+O(a^2)\n\\end{equation}\nwith $\\zeta\\equiv\\mu B\/2E$, obtaining\n\\begin{equation}\n\\lim_{N\\to\\infty}(e^{ikb\\tau_3}M_1)^N= e^{ikD\\tau_3}.\n\\end{equation}\nRecall that $t_\\uparrow=e^{-ikD}R_{N,\\uparrow}$ is the transmission amplitude,\n$L_{0,\\downarrow}=r_\\downarrow$ the reflection amplitude and $L_{N,\\uparrow}=0$ and\n$R_{0,\\uparrow}=1$ because of the boundary conditions. After taking the\nlimit $N\\to\\infty$ in Eq.~(\\ref{ev2}), we see that the transmission\n(survival) probability becomes one, i.e., $|t_{\\uparrow}|^2=1$, for {\\em\nany} input energy and magnetic field. 
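The limit just derived can also be checked numerically. In the sketch below (in Python) the explicit single-slab matrices $M_\pm$ are an assumption, chosen only so as to reproduce the small-$a$ expansions of $\bar M$ and $\Delta M$ quoted above; for simplicity the field-free gaps are set to $b=0$, consistently with the requirement $Nb\to0$.
\begin{verbatim}
# Illustration of lim_{N->oo} (e^{i k b tau_3} M_1)^N = e^{i k D tau_3}.
# The single-slab matrices below are an assumed parametrization with the correct
# small-a behaviour: Mbar = 1 + i k a tau_3 + O(a^2),
#                    dM   = zeta k a (tau_2 - i tau_3) + O(a^2).
import numpy as np

def M_pm(sign, k, zeta, a):
    kp = np.lib.scimath.sqrt(k**2*(1 - 2*sign*zeta))    # k_pm (may be imaginary)
    ch = (k**2 + kp**2)/(2*k*kp)                         # cosh(eta_pm)
    sh = (k**2 - kp**2)/(2*k*kp)                         # sinh(eta_pm)
    c, s = np.cos(kp*a), np.sin(kp*a)
    return np.array([[c + 1j*ch*s, -1j*sh*s],
                     [1j*sh*s,      c - 1j*ch*s]])

k, zeta, D = 1.0, 0.8, 5.0                     # zeta > 1/2: energy below the potential
target = np.diag([np.exp(1j*k*D), np.exp(-1j*k*D)])
for N in (100, 1000, 10000):
    a = D/N
    Mp, Mm = M_pm(+1, k, zeta, a), M_pm(-1, k, zeta, a)
    Mbar, dM = (Mp + Mm)/2, (Mp - Mm)/2
    M1 = Mbar - np.outer(dM[:, 1], dM[1, :])/Mbar[1, 1]
    P  = np.linalg.matrix_power(M1, N)
    print(N, np.abs(P - target).max())         # -> 0 as N grows
\end{verbatim}
The deviation from $e^{ikD\tau_3}$ shrinks as $N$ grows, confirming the total transmission obtained in the continuous limit.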
This reveals another aspect\nof neutron QZE: When the energy of the neutron is smaller than the\npotential, the transmission probability decays exponentially when\nthe length increases and no measurement is performed; By contrast,\nwhen continuous direction-insensitive measurements are made, one\ncan obtain a total transmission!\n\nIf we choose the energy of the neutron and the potential as in\nEq.~(\\ref{ttc}), without measurements the neutron will be totally\ntransmitted with its spin flipped. On the other hand, if the\nspin-up state is measured continuously, the neutron will be totally\ntransmitted with its spin unflipped. This is exactly the QZE in the\nusual sense. Our analysis enables us to see that two kinds of QZEs\nare taking place: One is the QZE for the right-going neutron, by\nwhich we obtain a total transmission of the right-going input\nstate, and another one is for the left-going neutron, which\npreserves the zero amplitude of the left-going input state. This\ncase corresponds to projector $E_1$ in our simplified model in\nSec.~\\ref{sec-semmod}.\n\n\n\\subsection{Direction-sensitive projections}\n\\label{sec-sensitive}\n\\andy{sensitive}\n\nThe third case we consider is the direction-sensitive measurement.\nBy this kind of measurement the left-going components (or the\nreflection parts) evolve freely, no matter whether spin is up or\ndown, and the right-going components are projected to the spin-up\nstate. The corresponding boundary conditions are\n\\begin{equation}\nR_{n,\\downarrow}=0,\\qquad L_{n,\\pm}=e^{-ikb}L^\\prime_{n,\\pm}.\n\\end{equation}\n\nIf we apply Eq.~(\\ref{ev1}) $N$ times, supplemented with these\nboundary conditions, the following relations among the transmission\nand reflection amplitudes are obtained\n\\begin{equation}\\label{ev3}\ne^{ikD}\\pmatrix{t_{\\uparrow}\\cr 0\\cr 0}=\\left(e^{ikb\\Sigma_3}M_2\\right)^N\n\\pmatrix{1\\cr r_{\\uparrow}\\cr r_{\\downarrow}},\n\\end{equation}\nwhere $\\Sigma_3$ is a diagonal matrix $\\Sigma_3={\\rm diag}\\{1,-1,-1\\}$\nand the $3\\times3$ transfer matrix $M_2$ is given by\n\\begin{equation}\nM_2=\\pmatrix{\\bar{M}_{11}&\\bar{M}_{12}&\\Delta M_{12}\\cr\n \\bar{M}_{21}&\\bar{M}_{22}&\\Delta M_{22}\\cr\n \\Delta M_{21}&\\Delta M_{22}&\\bar{M}_{22}\\cr}.\n\\end{equation}\n\nIn the limit of continuous measurements ($N\\to\\infty$, $a\\to 0$,\nwhile keeping $D=Na$ constant, and $Nb\\to 0$), the transfer matrix\nis expanded as\n\\begin{equation}\nM_2=1-ika\/3+ika Z_2+O(a^2),\n\\end{equation}\nfor small $a$, with ($\\zeta=\\mu B\/2E$)\n\\begin{equation}\\label{z2}\nZ_2\\equiv\\pmatrix{4\/3&0&-\\zeta\\cr\n 0&-2\/3&\\zeta\\cr\n \\zeta&\\zeta&-2\/3\\cr},\n\\end{equation}\nand we have\n\\begin{equation}\n\\lim_{N\\to\\infty}(e^{ikb\\Sigma_3}M_2)^{N}=e^{-ikD\/3}e^{ikD Z_2}.\n\\end{equation}\n\nNotice that the matrix $Z_2$ satisfies $\\Sigma_3\nZ_2\\Sigma_3=Z_2^\\dagger$, from which we obtain, in the above limit,\nthe conservation of probability\n\\begin{equation}\n|t_\\uparrow|^2+|r_\\uparrow|^2+|r_\\downarrow|^2=1.\n\\end{equation}\nThis shows that there are no losses caused by the continuous\ndirection-sensitive measurements. On the other hand, the\ntransmission amplitude with spin unflipped is explicitly given by\n\\begin{equation}\nt_\\uparrow={{e^{-i4kD\/3}}\\over\\left(e^{-ikD Z_2}\\right)_{11}}\n\\end{equation}\nwhich implies that the transmission probability $|t_\\uparrow|^2$ is in\ngeneral {\\em not} equal to one. 
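This expression is straightforward to evaluate numerically; a minimal sketch (in Python) computing $T_\uparrow=|t_\uparrow|^2$ directly from the matrix $Z_2$ of Eq.~(\ref{z2}) reads as follows.
\begin{verbatim}
# Direct evaluation of T_up = |t_up|^2 via the matrix exponential of Z_2.
import numpy as np
from scipy.linalg import expm

def T_up(kD, zeta):
    Z2 = np.array([[ 4/3,  0.0, -zeta],
                   [ 0.0, -2/3,  zeta],
                   [zeta, zeta,  -2/3]], dtype=complex)
    return abs(np.exp(-4j*kD/3)/expm(-1j*kD*Z2)[0, 0])**2

print(T_up(0.0, 0.4))   # = 1: no interaction region
print(T_up(6.0, 0.4))   # oscillatory regime, 0 <= zeta < zeta_c
print(T_up(6.0, 1.2))   # exponentially suppressed regime, zeta > zeta_c
\end{verbatim}
The function \texttt{T\_up} can then be scanned over $kD$ and $\zeta$.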
To have a general impression of its\nbehavior, we plot $T_{\\uparrow}=|t_\\uparrow|^2$ as a function of $kD$ and\n$\\zeta$ in Figure~\\ref{fig:figYu2}.\n\nSome comments are in order. There are two critical values for\n$\\zeta$, namely $0$ and $\\zeta_c=4\\sqrt3\/9\\approx 0.77$. When $0\\le\n\\zeta<\\zeta_c$, the matrix $Z_2$ has three real eigenvalues and the\ntransmission probability will oscillate depending on $kD$. When\n$\\zeta=\\zeta_c$ the transmission probability will decay according\nto $(kD)^{-2}$.\nIn fact, if one defines $G=Z_2-2\/3$, it is easy to show that \n$e^{-ikDG}=1-ikDG+(e^{2ikD}-1-2ikD)G^2\/4$, because $G$ satisfies \n$G^2(G+2)=0$.\nThen one can explicitly confirm that the element $(e^{-ikDG})_{11}$ \nincludes a linear $kD$ term, which gives the $(kD)^{-2}$ behavior to \nthe transmission probability. Finally, when $\\zeta>\\zeta_c$ the\nmatrix $Z_2$ has two imaginary eigenvalues and therefore the\ntransmission probability decays exponentially with $kD$. This can\nbe seen clearly in Figure~5(a). An interesting case arises when we\nconsider $1\/2<\\zeta<\\zeta_c$ or $E<\\mu B<8\\sqrt3E\/9\\approx 1.5E$.\nWithout measurements, the transmission probability decays\nexponentially when the length of the magnetic field is increased,\nbecause the input energy is smaller than the potential. When\ncontinuous measurements are performed, however, the transmission\nprobability will oscillate as the length of the magnetic field\nincreases.\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=fig5a.eps,height=8cm}\n\\end{center}\n\\begin{center}\n\\epsfig{file=fig5b.eps,height=8cm}\n\\end{center}\n\\caption{The transmission probability with spin unflipped\n$T_{\\rm up}=|t_\\uparrow|^2$ is plotted as a function of $kD$ and $z=\\zeta$ in\n(a) and as a function of\n$B_1=\\sqrt{m\\mu B}D$\nand $kD$ in (b).}\n\\label{fig:figYu2}\n\\end{figure}\n\nAs we can see in Figure~\\ref{fig:figYu3}, although the conditions\n(\\ref{ttc}) for total transmission in absence of measurements have\nbeen imposed, the transmission probability $T_{\\uparrow}$ is not one, as\nit would be for the ``ordinary\" QZE. Reflections are unavoidable.\nThis case corresponds to the projector $E_2$ considered in the\nsimplified model.\n\\begin{figure}\n\\begin{center}\n\\epsfig{file=fig6.eps,width=\\textwidth}\n\\end{center}\n\\caption{The transmission probablity with spin unflipped $T_\\uparrow=|t_\\uparrow|^2$\nas a function of $n$, when the conditions \n (\\ref{ttc})\nfor total transmission are satisfied with $n_-=n$ and $n_+=n+9$.}\n\\label{fig:figYu3}\n\\end{figure}\n\nAs we have seen, there are peculiar reflection effects in presence\nof projections, when $D$ (total length) is varied. This is clearly\nan interference effect, which can lead to enhancement of reflection\n``losses,\" if the ``projection\" does not suppress the left\ncomponent of the wave (this is what happens for $E_2$). This proves\nthat reflection effects can become very important in experimental\ntests of the QZE with neutron spin, if, roughly speaking, the total\nlength of the interaction region ``resonates\" with the neutron\nwavelength. 
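The algebra behind the critical case can be confirmed symbolically. The short sketch below (in Python, using sympy) verifies that at $\zeta=\zeta_c=4\sqrt3/9$ the matrix $G=Z_2-2/3$ indeed satisfies $G^2(G+2)=0$, which is the source of the $(kD)^{-2}$ decay discussed above.
\begin{verbatim}
# Symbolic check of the critical case zeta = zeta_c = 4*sqrt(3)/9.
import sympy as sp

zc = 4*sp.sqrt(3)/9
Z2 = sp.Matrix([[sp.Rational(4, 3), 0, -zc],
                [0, -sp.Rational(2, 3), zc],
                [zc, zc, -sp.Rational(2, 3)]])
G = Z2 - sp.Rational(2, 3)*sp.eye(3)
print((G**2*(G + 2*sp.eye(3))).applyfunc(sp.simplify))   # zero matrix
print(G.eigenvals())                                     # {0: 2, -2: 1}
\end{verbatim}
We now return to the resonance effects discussed above.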
It is interesting that such a resonance effect takes\nplace even though the dynamical properties of the system are\nprofoundly modified by the projection operators, in the limit of\n``continuous\" measurements, leading to the QZE.\n\nFinally, we would like to stress again that we are performing an\nanalysis in terms of stationary states (i.e.,\ntransmission\/reflection coefficients for plane waves), while at the\nsame time we are analyzing a quantum Zeno phenomenon, which is\nessentially a time-dependent effect. This is meaningful within our\napproximations, where the wave-packet spread is neglected and the\nmeasurements are performed with very high frequency. A more\nsophisticated argument in support of this view is given in Appendix\nA. In the present context wave packets effects, if taken into\naccount, would result in a sort of average of the effects shown in\nFigures 5 and 6 (which refer to the monochromatic case); however,\nour general conclusions would be unaltered. It is worth stressing\nthat, in neutron optics, effects due to a high sensitivity to\nfluctuation phenomena (such as fluctuations of the intensity of the\nmagnetic field) become important at high wave number and constitute\nan experimental challenge \\cite{fluctB}.\n\n\n\n\\setcounter{equation}{0}\n\\section{Summary}\n\\label{sec-findisc}\n\\andy{findisc}\n\nWe have analyzed some peculiar features of a quantum Zeno-type\ndynamics by discussing the noteworthy example of a neutron spin\nevolving under the action of a magnetic filed in presence of\ndifferent types of measurements (``projections\").\n\nThe ``survival probability\" depends on our definition of\n``surviving,\" i.e., on the choice of the projection operator $E$.\nDifferent $E$s will yield different final states, and Misra and\nSudarshan's theorem \\cite{Misra} simply makes sure that the\nsurvival probability is unity: the final state belongs to the\nsubspace of the survived products.\n\nIn the physical case considered (neutron spin), our examples\nclarify that the practical details of the experimental procedure by\nwhich the neutron spin is ``measured\" are very important. For\nexample, in order to avoid constructive interference effects,\nleading to (unwanted) enhancement of the reflected neutron wave, it\nis important to devise the experimental setup in such a way that\nreflection effects are suppressed.\n\n\n\\section*{Acknowledgments}\nThis work is partly supported by the Grant-in-Aid for International \nScientific Research: Joint Research \\#10044096 from the Japanese \nMinistry of Education, Science and Culture, by Waseda University \nGrant for Special Research Projects No.~98A--619 and by the \nTMR-Network of the European Union ``Perfect Crystal Neutron Optics\"\nERB-FMRX-CT96-0057.\n\n\n\n\n\\renewcommand{\\thesection}{\\Alph{section}}\n\\setcounter{section}{1}\n\\setcounter{equation}{0}\n\\section*{Appendix A}\n\\label{sec-appA}\n\\andy{appA}\n\n\\renewcommand{\\thesection}{\\Alph{section}}\n\\renewcommand{\\thesubsection}{{\\it\\Alph{section}.\\arabic{subsection}}}\n\\renewcommand{\\theequation}{\\thesection.\\arabic{equation}}\n\\renewcommand{\\thefigure}{\\thesection.\\arabic{figure}}\n\nIn this appendix, we shall endeavor to establish a connection\nbetween the models analyzed in Secs.~\\ref{sec-neutrspin} and\n\\ref{sec-numan}. 
In other words, we will examine whether the\nparametrization of the Hamiltonian of the form (\\ref{eq:modelH}) is\ncompatible with the more realistic one considered in\n(\\ref{eq:realH}) and in such a case find which values are to be\nassigned to the parameters $\\alpha,\\beta$ and $g$. To this end, it\nis enough to consider the scattering (i.e., the transmission and\nreflection) process of a neutron off a single constant magnetic\nfield $B$ of width $a$. We compare the scattering amplitudes\ncalculated on the basis of the simple abstract Hamiltonian\n(\\ref{eq:modelH}) and of the more realistic one (\\ref{eq:realH}).\nNotice that the process is treated as a dynamical one in the former\ncase ($T$ is regarded, roughly speaking, as the time necessary for\nthe neutron to go through the potential), while in the latter case\nwe treat it as a stationary problem.\n\nObserve first that the tranfer matrix $M_\\pm$ in (\\ref{eq:Mpm}),\nderived for the stationary scattering process, yields the following\ntransmission\/reflection amplitudes\n\\andy{RRLL}\n\\begin{eqnarray}\nR'_{1,\\uparrow\n={1\\over2}\\left({1\\over(M_+)_{22}}+{1\\over(M_-)_{22}}\\right),&&\nR'_{1,\\downarrow\n={1\\over2}\\left({1\\over(M_+)_{22}}-{1\\over(M_-)_{22}}\\right),\\nonumber\\\\\n&&\\label{eq:RRLL}\\\\\nL_{0,\\uparrow\n=-{1\\over2}\\left({(M_+)_{21}\\over(M_+)_{22}}\n+{(M_-)_{21}\\over(M_-)_{22}}\\right),\n&& L_{0,\\downarrow}\n=-{1\\over2}\\left({(M_+)_{21}\\over(M_+)_{22}}\n-{(M_-)_{21}\\over(M_-)_{22}}\\right).\\nonumber\n\\end{eqnarray}\nIt is easy to show that the relations (\\ref{eq:RRLL}) are\nequivalent to\n\\andy{corrs'}\n\\begin{equation}\n\\left(\n\\begin{array}{cccc}\n1 & 1 & 1 & 1 \\\\\n1 & -1 & 1 & -1 \\\\\n1 & 1 & -1 & -1 \\\\\n1 & -1 & -1 & 1\n\\end{array}\n\\right)\n\\left(\n\\begin{array}{c}\nR'_{1,\\uparrow}\\\\\nR'_{1,\\downarrow}\\\\\nL_{0,\\uparrow}\\\\\nL_{0,\\downarrow}\n\\end{array}\n\\right)\n=\n\\left(\n\\begin{array}{c}\n{\\cal M}_{-,+}\\\\\n{\\cal M}_{-,-}\\\\\n{\\cal M}_{+,+}\\\\\n{\\cal M}_{+,-}\n\\end{array}\n\\right),\n\\label{eq:corrs'}\n\\end{equation}\nwhere we have introduced\n\\andy{calMpm}\n\\begin{equation}\n{\\cal M}_{+,\\pm}={1+(M_\\pm)_{21}\\over(M_\\pm)_{22}},\\quad\n{\\cal M}_{-,\\pm}={1-(M_\\pm)_{21}\\over(M_\\pm)_{22}}.\n\\label{eq:calMpm}\n\\end{equation}\nIt is important to realize that these quantities are just phase\nfactors. In fact, since\n\\andy{Melements}\n\\begin{equation}\n(M_\\pm)_{21}=i\\sinh\\eta_\\pm\\sin k_\\pm a\n\\quad\\mbox{and}\\quad\n(M_\\pm)_{22}=\\cos k_\\pm a-i\\cosh\\eta_\\pm\\sin k_\\pm a\n\\label{eq:Melements}\n\\end{equation}\nand\n\\andy{abs2}\n\\begin{equation}\n|1\\pm(M_\\pm)_{21}|^2=|(M_\\pm)_{22}|^2=1+\\sinh^2\\eta_\\pm\\sin^2k_\\pm a,\n\\label{eq:abs2}\n\\end{equation}\ntheir absolute values are unity. 
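This can be confirmed symbolically; in the sketch below (in Python, using sympy) the matrix elements (\ref{eq:Melements}) are taken as given and the two squared moduli appearing in (\ref{eq:abs2}) are compared.
\begin{verbatim}
# Symbolic check of Eq. (abs2): |1 +/- (M_pm)_{21}|^2 = |(M_pm)_{22}|^2.
import sympy as sp

eta, ka = sp.symbols('eta ka', real=True)       # eta_pm and k_pm a, kept generic
M21 = sp.I*sp.sinh(eta)*sp.sin(ka)
M22 = sp.cos(ka) - sp.I*sp.cosh(eta)*sp.sin(ka)
mod2 = lambda z: sp.expand(z*sp.conjugate(z))
print(sp.simplify(mod2(1 + M21) - mod2(M22)))   # 0
print(sp.simplify(mod2(1 - M21) - mod2(M22)))   # 0
\end{verbatim}
Being of unit modulus, the quantities ${\cal M}_{\pm,\pm}$ are pure phases.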
Thus we can rewrite them in the\nform\n\\andy{calMpp}\n\\begin{equation}\n{\\cal M}_{+,\\pm}=e^{i(\\xi_\\pm+\\phi_\\pm)},\\quad\n{\\cal M}_{-,\\pm}=e^{i(-\\xi_\\pm+\\phi_\\pm)},\n\\label{eq:calMpp}\n\\end{equation}\nwhere\n\\andy{phases}\n\\begin{equation}\n\\xi_\\pm=\\tan^{-1}(\\sinh\\eta_\\pm\\sin k_\\pm a)\\quad\\mbox{and}\\quad\n\\phi_\\pm=\\tan^{-1}(\\cosh\\eta_\\pm\\tan k_\\pm a).\n\\label{eq:phases}\n\\end{equation}\nObserve now that (\\ref{eq:corrs}), dynamically derived from the\nabstract Hamiltonian (\\ref{eq:modelH}), is equivalent to\n\\andy{trvsgT}\n\\begin{equation}\n\\left(\\matrix{t_\\uparrow\\cr\n t_\\downarrow\\cr\n r_\\uparrow\\cr\n r_\\downarrow\\cr}\n\\right)\n={1\\over4}\\left(\\matrix{1&1&1&1\\cr\n 1&-1&1&-1\\cr\n 1&1&-1&-1\\cr\n 1&-1&-1&1\\cr}\n \\right)\n \\left(\\matrix{e^{-iE_{++}T}\\cr\n e^{-iE_{+-}T}\\cr\n e^{-iE_{-+}T}\\cr\n e^{-iE_{--}T}\\cr}\n \\right).\n\\label{eq:trvsgT}\n\\end{equation}\nThe apparent similarity between the above relation and\n(\\ref{eq:corrs'}), valid in the stationary scattering setup,\ninduces us to look for a more definite connection between the two\ncases.\n\nIf we slightly generalize the abstract Hamiltonian \n(\\ref{eq:modelH}) \n\\andy{model2H}\n\\begin{equation}\nH_{\\rm dyn}=g[1+\\alpha\\tau_1+\\beta\\sigma_1+\\gamma\\tau_1\\sigma_1],\n\\label{eq:model2H}\n\\end{equation}\nby introducing the additional parameter $\\gamma$, we easily find\nthe correspondence existing between the parameters involved: The\nincident wave number $k$ of the neutron and the configuration of\nthe static potential (strength $B$ and width $a$) determine the\nscattering data, which are reproducible by an appropriate choice of\nparameters $\\alpha,\\beta,\\gamma$ and $gT$ in the dynamical process\ngoverned by the Hamiltonian (\\ref{eq:model2H}).\n\nFor definiteness, consider the case of narrow potential, that is,\n$a\\to0$ or $ka\\ll1$. Incidentally, notice that this is the case of\ninterest for the analysis of the QZE. The above $\\xi_\\pm$ and\n$\\phi_\\pm$ are then approximated as\n\\andy{xiphi}\n\\begin{equation}\n\\xi_\\pm\\sim\\pm\\zeta ka,\\qquad\n\\phi_\\pm\\sim(1\\mp\\zeta)ka,\n\\label{eq:xiphi}\n\\end{equation}\nwhere we set $\\zeta=\\mu B\/2E=m\\mu B\/k^2$, as in\nSec.~\\ref{sec-numan}. 
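These approximations follow from (\ref{eq:phases}) by a small-$a$ expansion, which can be checked symbolically. In the sketch below (in Python, using sympy) the relations $k_\pm^2=k^2(1\mp2\zeta)$ and $\sinh\eta_\pm=(k^2-k_\pm^2)/(2kk_\pm)$ are assumptions, chosen so as to match the small-$a$ expansions used in Sec.~\ref{sec-insensitive}.
\begin{verbatim}
# Series check of the narrow-potential approximations (xiphi):
#   xi ~ s*zeta*k*a   and   phi ~ (1 - s*zeta)*k*a   for the two channels s = +1, -1.
import sympy as sp

k, a, zeta = sp.symbols('k a zeta', positive=True)
for s in (+1, -1):
    kpm = k*sp.sqrt(1 - 2*s*zeta)                 # assumed k_pm
    sh  = (k**2 - kpm**2)/(2*k*kpm)               # assumed sinh(eta_pm)
    ch  = (k**2 + kpm**2)/(2*k*kpm)               # cosh(eta_pm)
    xi  = sp.atan(sh*sp.sin(kpm*a))
    phi = sp.atan(ch*sp.tan(kpm*a))
    print(sp.simplify(sp.series(xi,  a, 0, 2).removeO()),   # ->  s*zeta*k*a
          sp.simplify(sp.series(phi, a, 0, 2).removeO()))   # -> (1 - s*zeta)*k*a
\end{verbatim}
With these expansions at hand, the two descriptions can be compared in the narrow-potential limit.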
In the limit $a\\to0$, the evolution time $T$\nis also considered to be of the same order of $a$ and the\ntransmission and reflection coefficients are expressed, in terms of\nthe parameters $\\alpha,\\beta,\\gamma$ and $gT$, as\n\\andy{abgg'}\n\\begin{equation}\n\\left(\\matrix{t_\\uparrow\\cr\n t_\\downarrow\\cr\n r_\\uparrow\\cr\n r_\\downarrow\\cr}\\right)\n\\sim\\left(\\matrix{1\\cr\n -i\\beta gT\\cr\n -i\\alpha gT\\cr\n -i\\gamma gT\\cr}\\right).\n\\label{eq:abgg'}\n\\end{equation}\nIn the stationary scattering problem, the same quantities are calculated\nto be\n\\andy{kxiphi}\n\\begin{equation}\n\\left(\\matrix{t_\\uparrow\\cr\n t_\\downarrow\\cr\n r_\\uparrow\\cr\n r_\\downarrow\\cr}\\right)\n=\\left(\\matrix{e^{-ika}R'_{1,\\uparrow}\\cr\n e^{-ika}R'_{1,\\downarrow}\\cr\n L_{0,\\uparrow}\\cr\n L_{0,\\downarrow}\\cr}\\right)\n\\sim\\left(\\matrix{1-ika+i(\\phi_++\\phi_-)\/2\\cr\n i(\\phi_+-\\phi_-)\/2\\cr\n -i(\\xi_++\\xi_-)\/2\\cr\n -i(\\xi_+-\\xi_-)\/2\\cr}\\right)\n\\sim\\left(\\matrix{1\\cr\n -i\\zeta ka\\cr\n 0\\cr\n -i\\zeta ka\\cr}\\right).\n\\label{eq:kxiphi}\n\\end{equation}\nTherefore, the following abstract Hamiltonian\n\\andy{rH}\n\\begin{equation}\nH_{\\rm dyn}=\\mu B(1+\\tau_1)\\sigma_1\n\\label{eq:rH}\n\\end{equation}\ncan reproduce the desired scattering data when the system evolves under\nthis Hamiltonian for time $T=a\/v=ma\/k$.\n\nIt is also interesting to see how such a dynamical Hamiltonian\n$H_{\\rm dyn}$ may reproduce the transfer matrix $M_\\pm$\n(\\ref{eq:Mpm}), which further confirms the equivalence between the\ntwo formalisms, stationary and dynamical, governed by the\nHamiltonians $H_{\\rm Z}$ and $H_{\\rm dyn}$, respectively. For this\npurpose, consider first a neutron, initially prepared in state\n$|R\\pm\\rangle$, subject to the dynamical evolution engendered by\n$H_{\\rm dyn}$ for time $T=ma\/k$. 
By definition, the transfer matrix\nconnects the scattering products in the following way\n\\andy{Rinc}\n\\begin{equation}\n\\pmatrix{R_{1,\\pm}^\\prime \\cr 0 \\cr}\n=M_\\pm\\pmatrix{1 \\cr L_{0,\\pm} \\cr}.\n\\label{eq:Rinc}\n\\end{equation}\nThese scattering amplitudes are given by the corresponding\nmatrix elements of the evolution operator $e^{-iHT}$,\n\\andy{R'L}\n\\begin{equation}\ne^{-ika}R_{1,\\pm}'=\\langle R\\pm|e^{-iHT}|R\\pm\\rangle,\\qquad\nL_{0,\\pm}=\\langle L\\pm|e^{-iHT}|R\\pm\\rangle,\n\\label{eq:R'L}\n\\end{equation}\nwhich reduces, for small $T$, to\n\\andy{r'l}\n\\begin{equation}\nR_{1,\\pm}'\\sim1+ika\\mp i\\mu BT,\\qquad L_{0,\\pm}\\sim\\mp i\\mu BT.\n\\label{eq:r'l}\n\\end{equation}\nOn the other hand, if a neutron is prepared in $|L\\pm\\rangle$, we have\nthe relation\n\\andy{Linc}\n\\begin{equation}\n\\pmatrix{R_{1,\\pm}' \\cr e^{-ika} \\cr}\n=M_\\pm\\pmatrix{0 \\cr L_{0,\\pm} \\cr},\n\\label{eq:Linc}\n\\end{equation}\nwhere\n\\andy{R'Lr'l}\n\\begin{equation}\nR_{1,\\pm}'=e^{ika}\\langle R\\pm|e^{-iHT}|L\\pm\\rangle\\sim\\mp i\\mu BT,\\qquad\nL_{0,\\pm}=\\langle L\\pm|e^{-iHT}|L\\pm\\rangle\\sim1\\mp i\\mu BT.\n\\label{eq:R'Lr'l}\n\\end{equation}\nIt is now an easy task to determine the matrix elements of $M_\\pm$ from\nthe above relations (\\ref{eq:Rinc})--(\\ref{eq:R'Lr'l}).\nWe obtain\n\\andy{mpm}\n\\begin{equation}\nM_\\pm\\sim\\pmatrix{\n1+ika\\mp i\\mu BT &\\mp i\\mu BT\\cr\n\\pm i\\mu BT &1-ika\\pm i\\mu BT}\n=1-i[\\pm\\mu B(i\\tau_2+\\tau_3)-2E\\tau_3]T.\n\\label{eq:mpm}\n\\end{equation}\nBy defining a ``generator\" $G_{\\rm d}$\n\\andy{Gd}\n\\begin{equation}\nG_{\\rm d}=\\mu B(i\\tau_2+\\tau_3)\\sigma_1-2E\\tau_3,\n\\label{eq:Gd}\n\\end{equation}\nthe transfer matrix $M_\\pm$ for finite $a$ (or $T$) can be rewritten as\n\\andy{e-iGdT}\n\\begin{equation}\nM_\\pm\n=\\langle\\pm|e^{-iG_{\\rm d}T}|\\pm\\rangle,\n\\label{eq:e-iGdT}\n\\end{equation}\nwhich is nothing but the transfer matrix (\\ref{eq:Mpm}), obtained\nfor the stationary-state problem from the Hamiltonian $H_{\\rm Z}$.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}