diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzgyir" "b/data_all_eng_slimpj/shuffled/split2/finalzzgyir" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzgyir" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\ac{AutoML} promises to automate the synthesis of \\ac{ML} pipelines, handling \\ac{HPO}, algorithm selection and pipeline structure search. Many publications have proven the superiority of Bayesian optimization for \\ac{HPO} and algorithm selection, formulated as \\ac{CASH}, over classic approaches like grid or random search \\cite{Bergstra2013,Hutter2011}. More recently, methods for composing complete \\ac{ML} pipelines from a set of algorithms have been proposed. Those methods take a holistic view of pipeline synthesis: pipeline structure search is considered as an extension of \\ac{CASH} where, instead of a single algorithm, a combination of multiple algorithms is selected and optimized simultaneously. Due to the exponential growth in complexity, the pipeline structure space is usually not evaluated thoroughly.\n\nIn contrast to current \\ac{AutoML} approaches, data scientists often create an \\ac{ML} pipeline in several small distinct steps. Starting from an empty pipeline, data scientists add algorithms to an \\ac{ML} pipeline incrementally by taking a detailed look at how the data set evolves in a pipeline in combination with their profound experience. Only if a pipeline structure performs well enough is a fine-tuning of its hyperparameters performed.\n\nIn this paper, we propose an alternative data-centric view on pipeline structure synthesis inspired by human behavior that allows an adaptation of a pipeline to a specific data set. Through extensive use of meta-learning, we are able to dynamically prune the pipeline structure search space depending on meta-features of intermediate data sets. Intermediate data sets are the outputs of the execution of each individual step in a pipeline. 
Furthermore, the \\ac{HPO} of a pipeline candidate is boosted via knowledge-sharing between different pipelines. The main contributions of this paper are as follows:\n\\begin{itemize}\n \\item We reformulate the \\ac{CASH} and pipeline synthesis problem to enable efficient measures for reducing the pipeline search space and warm-starting \\ac{CASH}.\n\t\\item We present \\name{dswizard}, a data-centric approach inspired by human behavior for incremental pipeline synthesis and hyperparameter optimization without expert knowledge.\n\t\\item To ensure reproducibility of the results, we publish our meta-learning base consisting of \\(13.5\\) million unique \\ac{ML} pipelines on \\(28\\) data sets.\n\\end{itemize}\n\nIn Section~\\ref{sec:related_work}, related work regarding pipeline structure synthesis and meta-learning in \\ac{AutoML} is discussed. Section~\\ref{sec:methods} describes how we model the pipeline synthesis and the creation of the meta-learning base. The effectiveness of this approach is evaluated in Section~\\ref{sec:experiments}, followed by a short conclusion in Section~\\ref{sec:conclusion}.\n\n\n\n\n\\section{Preliminary and Related Work}\n\\label{sec:related_work}\n\nLet a classification task---containing a data set \\(D = \\{ (\\vec{x}_1,y_1), \\dots,(\\vec{x}_m,y_m) \\} \\) with \\(\\vec{x}_i \\in \\mathbb{X}^{d}\\) being the input domain and \\(y \\in \\mathbb{Y} \\subset \\mathbb{N}\\) the target domain and a loss function \\(\\mathcal{L}: \\mathbb{Y}^2 \\rightarrow \\mathbb{R} \\)---be given. Furthermore, let a fixed set of algorithms be given as \\(\\mathcal{A} = \\left\\{ A^{(1)}, A^{(2)}, \\dots, A^{(n)} \\right\\} \\). Each algorithm \\(A^{(i)}\\) is a transformation \\(\\phi: \\mathbb{Z} \\rightarrow \\mathbb{Z}'\\) between two arbitrary domains. In case of \\(\\mathbb{Z}' = \\mathbb{Y}\\) we denote the algorithm as a \\textit{classifier}, otherwise as a \\textit{preprocessor}. 
Usually, \\(A^{(i)}\\) is configured by hyperparameters \\(\\vec{\\lambda}^{(i)}\\) from a domain \\(\\Lambda_{A^{(i)}}\\). \\(A^{(i)}\\) transforming a data set \\(D\\) being configured by \\(\\vec{\\lambda}\\) is denoted as \\(\\phi^{(i)}_{\\vec{\\lambda}}(D)\\). An \\ac{ML} pipeline \\(\\mathcal{P}\\) is a sequential combination of algorithms mapping data from an input domain to a target domain \\(f_{\\mathcal{P}}: \\mathbb{X}^d \\rightarrow \\mathbb{Y}\\). It consists of a pipeline structure \\(g\\)---usually modeled as a \\ac{DAG}---with length \\(|g|\\), algorithms \\(\\vec{A} = [A_1, \\dots, A_{|g|}]\\) and the corresponding hyperparameters \\(\\vec{\\lambda} = [\\vec{\\lambda}_1, \\dots, \\vec{\\lambda}_{|g|}]\\). \\ac{AutoML} aims at generating a pipeline \\(\\mathcal{P}\\) that optimizes\n\\begin{equation*}\n\t(g, \\vec{A}, \\vec{\\lambda})^* \\in \\argmin_{g \\in G, \\vec{A} \\in \\mathcal{A}^{|g|}, \\vec{\\lambda} \\in \\Lambda_{A_1} \\times \\dots \\times \\Lambda_{A_{|g|}} } \\pi \\left(g, \\vec{A}, \\vec{\\lambda}, D \\right)\n\\end{equation*}\nwith\n\\begin{equation*}\n\t\\pi \\left(g, \\vec{A}, \\vec{\\lambda}, D \\right) = \\dfrac{1}{m} \\sum_{i = 1}^m \\mathcal{L}(\\hat{y}_{i}, y_{i})\n\\end{equation*}\nand \\(\\hat{y}_{i}\\) being the predicted output on the sample \\(\\vec{x}_i\\). We refer to this extension of the \\ac{CASH} problem as the \\ac{PSO} problem.\n\nThe \\ac{CASH} notation, as originally introduced by \\cite{Thornton2013}, extends \\ac{HPO} by introducing an additional categorical meta-hyperparameter that represents an algorithm choice. This approach does not scale well as the search space grows exponentially with the length of the pipeline \\cite{Kandasamy2015}. To counter this problem, many frameworks use a fixed pipeline structure based on best practices, reducing \\ac{PSO} to \\ac{CASH}, e.g.\\, \\cite{Komer2014,Swearingen2017}. 
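To make the objective defined above concrete: \(\pi\) is simply the empirical average loss of a pipeline's predictions over the data set. A minimal Python sketch (all function names here are ours and purely illustrative, not part of any cited framework):

```python
def pipeline_loss(predict, X, y, loss):
    """Average loss over a data set: pi = (1/m) * sum_i L(y_hat_i, y_i)."""
    m = len(y)
    return sum(loss(predict(x), y_i) for x, y_i in zip(X, y)) / m

# Example with a zero-one loss and a trivial constant "pipeline".
zero_one = lambda y_hat, y_true: 0.0 if y_hat == y_true else 1.0
constant_pipeline = lambda x: 1
print(pipeline_loss(constant_pipeline, [[0], [1], [2], [3]], [1, 1, 0, 1], zero_one))  # -> 0.25
```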
\\name{autosklearn} \\cite{Feurer2015} allows the omission of single steps in a fixed pipeline, effectively replacing a fixed pipeline structure with a small set of pipeline candidates. Similarly, \\name{P4ML} \\cite{Gil2018} uses a set of hand-crafted, best-practice pipelines for a wide variety of task instances. Appropriate pipeline candidates are selected based on data set meta-features, followed by a fine-tuning via \\ac{HPO}. Yet, even when selecting from a set of fixed structures, the pipeline structure cannot be freely adapted to a specific problem instance.\n\n\\name{TPOT} \\cite{Olson2016} uses genetic programming to solve the \\ac{PSO} problem. \\name{RECIPE} \\cite{DeSa2017} extends \\name{TPOT} by incorporating a context-free grammar to guide the construction of pipeline structures. Even though this approach is able to build flexible tree-shaped pipelines, experiments have shown that genetic programming approaches tend to build pipelines using only one or two algorithms \\cite{Zoller2019}.\n\nMultiple approaches that use a \\ac{MCTS} \\cite{Coulom2006} for pipeline synthesis have been proposed. \\name{ML-Plan} \\cite{Mohr2018} traverses a hierarchical task network with a \\ac{MCTS} to perform \\ac{PSO}. By design, the structure is determined first, followed by the \\ac{HPO}. To assess the score of incomplete pipelines, random path completion is used, which does not scale well to high dimensions \\cite{Hutter2018}. Similarly, \\name{AlphaD3M} \\cite{Drori2019} uses a combination of \\ac{MCTS} and neural networks to build pipeline structures based on a grammar while ignoring \\ac{HPO} completely. These approaches are more flexible in comparison to semi-fixed pipelines but still enforce specific pipeline patterns.\n\nMany \\ac{AutoML} approaches use meta-learning to warm-start \\ac{CASH} or find promising pairs of preprocessors and classifiers \\cite{Hutter2018}. 
\\name{AlphaD3M} uses meta-features and the algorithms in the current pipeline to predict the performance of a possible pipeline candidate. However, all those approaches only calculate meta-features of the initial data set.\n\n\n\\section{\\name{DSWIZARD} Methodology}\n\\label{sec:methods}\n\nOur approach, dubbed \\name{dswizard}, is inspired by the behavior of a human data scientist creating an \\ac{ML} pipeline. Starting from a very basic pipeline with default hyperparameters, the pipeline structure is extended gradually based on the characteristics of the intermediate data sets and the experience from previous tasks. After obtaining a combination of algorithms, a fine-tuning of the hyperparameters is performed. Fig.~\\ref{fig:procedure} contains an overview of our proposed approach.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{fig\/procedure_horizontal.pdf}\n\n\\caption{General pipeline synthesis procedure in \\name{dswizard}.}\n\\label{fig:procedure}\n\\end{figure}\n\nStarting from an empty pipeline and a given data set, the meta-features of the data set are extracted first. Based on these meta-features, a suitable algorithm is selected using information from a pre-computed meta-learning base. Next, the data set is transformed using the selected algorithm and its default hyperparameters. Whenever the pipeline ends in a classifier, a fine-tuning via \\ac{HPO} is performed to obtain a performance measure. This procedure is repeated until a predefined time budget is exhausted. 
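The loop just described can be sketched in a few lines. This is a hedged sketch under our own assumptions: every callable passed in is an illustrative stand-in, not the actual \name{dswizard} API.

```python
import time

def synthesize(D, extract_mf, suggest, apply_default, is_classifier, hpo, budget_s):
    """Sketch of the incremental synthesis loop: grow a pipeline step by step,
    guided by meta-features of each intermediate data set, and run HPO once the
    pipeline ends in a classifier.  All callables are illustrative stand-ins."""
    results = []
    deadline = time.time() + budget_s
    while time.time() < deadline:
        pipeline, data = [], D
        while not (pipeline and is_classifier(pipeline[-1])):
            mf = extract_mf(data)            # meta-features of the intermediate data set
            algo = suggest(mf)               # choice guided by the meta-learning base
            data = apply_default(algo, data) # transform with default hyperparameters
            pipeline.append(algo)
        results.append((pipeline, hpo(pipeline, D)))  # fine-tune via HPO
    return results  # ensemble selection would then pick the best of these
```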
Finally, ensemble selection \\cite{Caruana2004} is used to create an ensemble of the best-performing pipelines.\n\nMore formally, we reformulate the \\acl{PSO} problem as a bilevel optimization problem:\n\n\\begin{equation*}\n\\begin{split}\n\t(g, \\vec{A}, \\vec{\\lambda}^*)^* \\in \\argmin_{g \\in G, \\vec{A} \\in \\mathcal{A}^{|g|}}\t\\quad & \\pi \\left(g, \\vec{A}, \\vec{\\lambda}^*, D \\right) \\\\\n\t\\mathrm{s.t.} \t\t\t\t\\quad & \\vec{\\lambda}^* \\in \\argmin_{ \\vec{\\lambda} \\in \\Lambda_{A_1} \\times \\dots \\times \\Lambda_{A_{|g|}} } \\pi \\left(g, \\vec{A}, \\vec{\\lambda}, D \\right).\n\\end{split}\n\\end{equation*}\n\nThe outer optimization problem is used for pipeline structure search and algorithm selection. The inner optimization problem performs the \\ac{HPO} of the selected algorithms. The implementation is publicly available on GitHub.\\footnote{\n See \\url{https:\/\/github.com\/Ennosigaeon\/dswizard}.\n}\n\n\n\n\\subsection{Incremental Pipeline Structure Search}\nAs each pipeline must have a finite length and the set of algorithms \\(\\mathcal{A}\\) is finite, it is possible to enumerate the complete pipeline search space up to a depth \\(d\\). The resulting search space can be interpreted as a layered \\ac{DAG}. Each node\/state \\(s_t\\) represents a pipeline candidate \\(g_{s_t}\\), i.e.\\, a list of algorithms, and a vector of meta-features of the intermediate data set \\(D_{s_t}\\) obtained by applying the (incomplete) pipeline candidate to the input data set; each edge\/action represents an algorithm \\(A_t \\in \\mathcal{A}\\). All nodes in a layer have pipeline candidates with identical lengths. We use an adapted \\ac{MCTS} \\cite{Coulom2006} to efficiently traverse the search graph. 
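The enumerable, layered search space described above can be illustrated in a few lines; layer \(d\) holds all \(|\mathcal{A}|^d\) candidates of length \(d\), which is exactly why the graph is only expanded lazily in practice (an illustrative sketch, not the actual implementation):

```python
from itertools import product

def enumerate_pipelines(algorithms, max_depth):
    """Enumerate every pipeline candidate up to `max_depth` steps.
    Layer d of the resulting layered DAG holds all |A|**d candidates of length d."""
    return [[list(p) for p in product(algorithms, repeat=d)]
            for d in range(1, max_depth + 1)]

layers = enumerate_pipelines(["scaling", "pca", "classifier"], 2)
print(len(layers[0]), len(layers[1]))  # -> 3 9
```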
In contrast to all existing solutions, we explicitly calculate the meta-features of each intermediate data set and not only the initial data set.\n\nThe policy for \\ac{MCTS}'s \\textit{selection} and \\textit{expansion} phase---inspired by \\textit{polynomial upper confidence trees} \\cite{Auger2013}---is\n\\begin{equation*}\n\tA_t \\in \\argmax_{A \\in \\mathcal{A}} o(s_t) \\cdot \\big( Q(s_t, A) + c(t) \\cdot U(s_t, A) \\big)\n\\end{equation*}\nweighting the exploitation \\(Q\\) and exploration \\(U\\) by a function \\(c\\) for a given action \\(A \\in \\mathcal{A}\\) and state \\(s_t\\). Exploitation is defined as\n\\begin{equation*}\n\tQ(s_t, A) = \\dfrac{P(s_t, A)}{1 + N(s_t, A)} \\sum_{s' \\in s_t(A)} \\nu(s')\n\\end{equation*}\nwith \\(P(s_t, A)\\) being a prior performance estimate (see Section~\\ref{sec:meta-learnig}), \\(N(s_t, A)\\) being the number of times action \\(A\\) was selected in state \\(s_t\\), \\(s_t(A)\\) the state obtained after applying action \\(A\\) and \\(\\nu(s')\\) a previously observed performance in state \\(s'\\). Exploration, defined as\n\\begin{equation*}\n\tU(s_t, A) = \\dfrac{\\sqrt{\\sum_{b \\in \\mathcal{A}} N(s_t, b)}}{1 + N(s_t, A)}~,\n\\end{equation*}\ncalculates the multiplicative inverse of the relative number of selections of \\(A\\), giving a higher weight to less frequently selected actions. To account for overfitting we introduce an additional exponential term\n\\begin{equation*}\n\to(s_t) = 1 - \\dfrac{c^{|g_{s_t}|}}{c^{l_{\\mathrm{max}}}}\n\\end{equation*}\nthat reduces the node reward depending on the current pipeline length, a maximum pipeline length \\(l_{\\mathrm{max}}\\) and a scaling constant \\(c > 1\\).\n\nThe \\ac{MCTS} procedure is adapted such that selection can be aborted if the current node has a higher reward than all child nodes. Similarly, expansion can be skipped. During each expansion step the data set in \\(s_t\\) is transformed by \\(A_t\\) and stored. 
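Putting the three terms together, one step of the selection policy can be sketched as follows. This is a hedged sketch: the aggregate statistics passed in, the default scaling constant, and the function names are our own illustrative choices.

```python
import math

def policy_score(nu_sum, n_sa, n_s_total, prior, depth, l_max, c_t, c_len=2.0):
    """Score of one action A in state s_t: o(s_t) * (Q(s_t, A) + c(t) * U(s_t, A))."""
    Q = prior / (1 + n_sa) * nu_sum            # exploitation, weighted by the prior P(s_t, A)
    U = math.sqrt(n_s_total) / (1 + n_sa)      # exploration: favors rarely selected actions
    o = 1 - c_len ** depth / c_len ** l_max    # penalty growing with pipeline length
    return o * (Q + c_t * U)

def select_action(stats, depth, l_max, c_t):
    """Argmax over actions; `stats` maps action -> (sum of observed rewards, N(s,A), prior)."""
    total = sum(n for _, n, _ in stats.values())
    return max(stats, key=lambda a: policy_score(stats[a][0], stats[a][1], total,
                                                 stats[a][2], depth, l_max, c_t))
```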
Usually, \\ac{MCTS} uses a \\textit{simulation} to calculate the reward of \\(s_t\\). However, a few random simulations do not scale well in high-dimensional search spaces and many simulations are prohibitively expensive \\cite{Hutter2018}. Instead, expansion is repeated recursively until the pipeline candidate ends in a classifier. After \\(e\\) consecutive expansions, the selection of a classifier is enforced. Conceptually, this is similar to a random simulation. However, as we immediately know the meta-features of each intermediate data set, the simulation can be guided by the meta-learning base explained in Section~\\ref{sec:meta-learnig}. This approach explicitly does not restrict the pipeline structure via a grammar or similar measures.\n\nThe reward \\(\\nu(s_{t + 1})\\) is not directly obtained during the \\ac{MCTS}. Instead, it is computed via the \\ac{HPO} procedure described in Section~\\ref{sec:hpo}. Therefore, pipeline structure search and \\ac{HPO} can be solved independently of each other while still being tightly coupled.\n\nFinally, it remains to decide how many \\ac{HPO} samples are drawn to determine the reward of a state. To quickly discard unpromising structures, we wrap the complete structure search procedure in a multi-fidelity approach similar to \\name{Hyperband} \\cite{Li2018}. 
Yet, instead of deterministically calculated budgets---the number of \\ac{HPO} iterations---we adapt the greediness of the policy depending on the remaining optimization time\n\\begin{equation*}\n c(t) = w \\cdot \\left( \\exp\\left(\\dfrac{t_\\mathrm{max} - t}{t_\\mathrm{max}}\\right) - 1 \\right)~,\n\\end{equation*}\nwith \\(t_\\mathrm{max}\\) being the total optimization time and \\(w\\) a non-negative weighting constant.\nFor each pipeline candidate, a small fixed number of \\ac{HPO} iterations is performed; as well-performing candidates are selected repeatedly, they accumulate more \\ac{HPO} iterations over time.\n\nUsing this procedure, the pipeline structure search space is incrementally expanded whenever a new layer of the graph is visited. Simultaneously, ineffective transformations---those that do not modify the data set---as well as incompatible or poorly performing ones can be identified quickly. Consequently, the search graph is pruned efficiently, often even without any \\ac{HPO}.\n\n\n\n\\subsection{Hyperparameter Optimization}\n\\label{sec:hpo}\n\nAfter fixing the pipeline structure, its actual performance has to be assessed. In general, this step is computationally as expensive as optimizing the hyperparameters of a fixed pipeline. Consequently, an efficient \\ac{HPO} procedure is key to evaluating various pipeline structures.\n\nTraditional \\ac{CASH} procedures model the problem for a fixed structure \\(g\\) with---for simplicity---only one algorithm as\n\\begin{equation*}\n\t\\vec{\\lambda}^* \\in \\argmin_{ A^{(i)} \\in \\mathcal{A}, \\vec{\\lambda} \\in \\Lambda_{A^{(i)}} } \\pi_g \\left(A^{(i)}, \\vec{\\lambda}, D \\right)~,\n\\end{equation*}\nselecting the actual algorithm and its hyperparameters simultaneously, configuring all algorithms accordingly and finally evaluating the performance on the input data set. 
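As a concrete picture of this joint formulation, a naive grid-style enumeration of the \ac{CASH} space treats the algorithm choice as one extra categorical hyperparameter on top of each algorithm's own configuration space (an illustrative sketch under our own toy search space, not how any cited tool implements it):

```python
from itertools import product

def cash_candidates(search_space):
    """Enumerate the joint CASH space: the union over all algorithms of their
    full hyperparameter grids, with the algorithm itself as an extra choice."""
    for algorithm, grid in search_space.items():
        names = list(grid)
        for values in product(*(grid[n] for n in names)):
            yield algorithm, dict(zip(names, values))

# Toy space: the joint space grows with every algorithm and hyperparameter added.
space = {"svm": {"C": [0.1, 1.0], "kernel": ["rbf", "linear"]},
         "tree": {"max_depth": [3, 5, 7]}}
print(len(list(cash_candidates(space))))  # -> 7  (2*2 + 3)
```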
This approach has three major drawbacks:\n\\begin{enumerate*}\n\t\\item The transformation of the data set being processed is not considered.\n\t\\item The algorithms in a pipeline may be incompatible with each other due to implicit requirements of the used algorithms on the input data. Selecting and fitting all algorithms at once may lead to wasted optimization time as incompatibilities are only detected during fitting \\cite{Nguyen2020}.\n\t\\item Sharing knowledge about well-performing configurations between different pipeline structures using the same subset of algorithms is not possible.\n\\end{enumerate*}\nInstead, we propose to use a distinct optimization instance\n\\begin{equation*}\n\t\\vec{\\lambda}_i^* \\in \\argmin_{\\vec{\\lambda} \\in \\Lambda_{A^{(i)}}} \\pi \\left(A^{(i)}, \\vec{\\lambda}, D \\right)\n\\end{equation*}\nfor each algorithm, considering only \\ac{HPO}. To prevent an improper impact of previous algorithms and their hyperparameters on the optimization, we additionally require that all meta-features of the intermediate data set \\(D\\) be similar. Otherwise, a new \\ac{HPO} instance is created. This allows sharing knowledge about well-performing hyperparameters between identical algorithms in different pipeline candidates, given that the pipeline prefixes yielded similar data sets.\n\nThe hyperparameters of each algorithm can be selected ``on-the-fly'' in order of appearance of the algorithms. 
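The keying of per-algorithm optimizer instances by algorithm plus similar meta-features can be sketched as follows. This is a hedged sketch: the class name, the factory callable, and the coarse rounding used as a similarity test are all our own illustrative assumptions.

```python
class SharedHPO:
    """Per-algorithm HPO instances keyed by the algorithm and (discretized)
    meta-features of the intermediate data set.  Pipelines whose prefixes
    produce similar data reuse the same optimizer instance, sharing knowledge
    about well-performing hyperparameters."""

    def __init__(self, make_optimizer):
        self._instances = {}         # (algorithm, mf_key) -> optimizer instance
        self._make = make_optimizer  # factory, e.g. for a Bayesian optimizer

    @staticmethod
    def _key(meta_features, ndigits=1):
        # Coarse rounding stands in for a proper meta-feature similarity test.
        return tuple(round(m, ndigits) for m in meta_features)

    def optimizer_for(self, algorithm, meta_features):
        key = (algorithm, self._key(meta_features))
        if key not in self._instances:
            self._instances[key] = self._make(algorithm)
        return self._instances[key]
```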
After selecting the hyperparameters for each algorithm, the final performance is back-propagated to update each optimizer, leading to a formulation compatible with current \\ac{CASH} formulations\n\\begin{equation*}\n\t\\vec{\\lambda}^* \\in \\argmin_{\\vec{\\lambda}_1 \\in \\Lambda_{1}, \\dots , \\vec{\\lambda}_{|g|} \\in \\Lambda_{|g|} } \\pi \\left(\n\tA_{|g|}, \\vec{\\lambda}_{|g|}, \\phi_{\\vec{\\lambda}_{|g|}}^{(|g|)} \\left(\n\t\\phi_{\\vec{\\lambda}_{|g| - 1}}^{(|g| - 1)} \\left(\n\t\\dots\n\t\\phi_{\\vec{\\lambda}_{1}}^{(1)} (D) \\right) \\right)\\right)\n\\end{equation*}\nwith \\(\\vec{\\lambda}^* = \\vec{\\lambda}_{1} \\cup \\dots \\cup \\vec{\\lambda}_{|g|}\\) and \\(\\vec{A}\\) being provided via the previously described structure search. Consequently, the hyperparameter search space for a single algorithm is significantly smaller than the complete \\ac{CASH} search space. This provides two major benefits:\n\\begin{enumerate*}\n\t\\item Bayesian optimization methods have been proven to be more powerful for low-dimensional search spaces \\cite{Kandasamy2015}.\n\t\\item The small search space improves the applicability of warm-starting. Based on the meta-features of the intermediate data set, samples for warm-starting can be extracted from previously evaluated configurations on different pipeline structure candidates.\n\\end{enumerate*}\nEach individual optimization problem can be solved via standard procedures like \\name{SMAC} or \\name{hyperopt}. In the context of this work, tree-structured Parzen estimators \\cite{Bergstra2013} are used. Each instance of this procedure yields a new performance measure \\(\\nu(s_{t + 1})\\) for the \\ac{MCTS} procedure.\n\n\n\n\n\\subsection{Meta-Learning}\n\\label{sec:meta-learnig}\nTraditional \\ac{MCTS} uses \\textit{simulations} to determine the score of an unvisited node. 
As extensive simulations are prohibitively expensive in the context of \\ac{ML}, current \\ac{AutoML} tools use a small number of random completions, potentially restricted by a grammar, to estimate the reward of a state, e.g.\\, \\cite{Mohr2018}. We propose to guide the random completions by considering intermediate meta-features.\n\nTo get a diverse foundation for the meta-learning base, we collected \\(30\\) unique data sets from \\name{OpenML} \\cite{Vanschoren2014}. Starting from the input data set, each available algorithm is applied using its default hyperparameters. The transformed data set is added to the meta-learning base---in case of a classifier, the transformed data set consists of the input data set with the prediction added as a new feature. This procedure is repeated exhaustively until the maximum pipeline length of five algorithms is reached. For each data set in the meta-learning base, \\(40\\) meta-features are extracted. As the meta-feature extraction has to be applied in each stage of the \\ac{MCTS}, a fast calculation of the meta-features is important, which limits the available meta-features to general, statistical, information-theoretical and simple model-based meta-features.\\footnote{The complete list of all data sets, algorithms and used meta-features is available in the online Appendix alongside the source code at \\url{https:\/\/git.io\/JIOaJ}.}\n\nIf the applied algorithm is a classifier, the current performance is evaluated. For preprocessing algorithms, the performance is estimated using all subsequent classification algorithms. 
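The exhaustive construction of the meta-learning base can be sketched as a breadth-first expansion that records a (meta-features, algorithm, performance) observation at every step. A hedged sketch under our own assumptions: all callables and names are illustrative stand-ins.

```python
def build_meta_base(D, algorithms, evaluate, extract_mf, apply_default, max_len=5):
    """Exhaustively apply every algorithm to every intermediate data set up to
    `max_len` steps and record (meta-features, algorithm, performance)
    observations.  All callables are illustrative stand-ins."""
    records = []
    frontier = [([], D)]                       # (pipeline so far, intermediate data set)
    for _ in range(max_len):
        next_frontier = []
        for pipeline, data in frontier:
            mf = extract_mf(data)              # meta-features of the intermediate data set
            for algo in algorithms:
                new_data = apply_default(algo, data)
                records.append((mf, algo, evaluate(pipeline + [algo], D)))
                next_frontier.append((pipeline + [algo], new_data))
        frontier = next_frontier
    return records
```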
Using this approach, we extracted the performance of over \\(13.5\\) million unique pipelines on \\(30\\) diverse data sets.\n\nTo account for the stochastic environment, the performance prediction of an algorithm for a given state is modeled by a normal distribution\n\\begin{equation*}\n\tP(s_t, A) \\sim \\mathcal{N} \\left( RF_\\mu(s_t, A), RF_\\sigma(s_t, A) \\right)\n\\end{equation*}\nwith \\(RF_\\mu\\) and \\(RF_\\sigma\\) being two random forest regression models trained on the mean and standard deviation of the performance, respectively. The complete meta-learning base, namely the raw data and trained models, is publicly available alongside the source code, but we also plan to publish all pipelines on \\name{OpenML}.\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nTo prove the effectiveness and efficiency of our approach, \\name{dswizard} is compared with the two best-established \\ac{AutoML} tools: \\name{autosklearn} and \\name{TPOT}. Additionally, we perform an ablation study in which we test a variation of \\name{dswizard} without meta-learning, dubbed \\name{dswizard*}, to get a better impression of the impact of meta-learning during structure synthesis on the final performance.\n\n\n\\subsection{Experiment Setup}\n\nTo ensure fair and comparable results, the existing \\name{OpenML} \\ac{AutoML} benchmark framework \\cite{Gijsbers2019} is used for all experiments. We reuse the predefined constraint of a \\(60\\)-minute optimization timeout per fold. Experiments are conducted on \\textit{e2-standard-4} virtual machines on Google Cloud Platform equipped with Intel Xeon E5 processors with 4 cores and 16 GB memory.\n\nAll frameworks are evaluated on \\(28\\) publicly available binary and multiclass classification data sets from established and curated \\ac{AutoML} benchmark suites \\cite{Bischl2017,Gijsbers2019}. More specifically, \\name{OpenML} tasks are used for each data set. 
A task provides information about a data set, for example how train-test splits have to be done or which loss function to use, to enable comparable results. The performance of each final configuration is computed using a hold-out validation data set. For binary and multiclass tasks, \\textit{AUC} and \\textit{logloss} are used as metrics, respectively.\n\nTo eliminate the impact of different search spaces on the final performance, the existing \\name{TPOT} and \\name{autosklearn} adapters are adapted to use the same search space as \\name{dswizard}. This includes the available algorithms, hyperparameters per algorithm, as well as search ranges.\\footnote{\t\n At least if supported by the frameworks. For example, \\name{TPOT} can only handle discretized continuous hyperparameters.\n} The complete search space consists of \\(35\\) algorithms with \\(38\\) categorical and \\(62\\) numerical hyperparameters. A short summary of the configuration space is provided in the online Appendix.\n\nTo prevent leaking information via meta-learning in \\name{dswizard}, we construct an individual meta-learning base for each data set excluding the data set under evaluation. The \\name{OpenML} \\ac{AutoML} benchmark's \\name{autosklearn} adapter always performs label encoding before passing the data to the optimization routine. Similarly, the \\name{TPOT} adapter always performs label encoding and an imputation of missing data. As these algorithms are not obtained via an optimization procedure, they are not further considered.\n\n\n\\subsection{Experiment Results}\n\nTables~\\ref{tbl:auc_results_cash_solver} and \\ref{tbl:logloss_results_cash_solver} contain the final test performances of all evaluations. For each data set, the mean performance and standard deviation over \\(10\\) folds is reported. Bold face represents the best mean value for each data set. 
Results not significantly worse than the best result, according to a two-sided Wilcoxon signed-rank test with \\(p = 0.05\\), are underlined. If a framework consistently failed to yield results for at least three folds, the performance for that data set is not recorded.\n\n\\begin{table}\n\\center\n\n\\caption{\n\tFinal test performance for all tested binary classification data sets using \\textit{AUC} as metric. Larger values are better.\n}\n\\label{tbl:auc_results_cash_solver}\n\n\\newrobustcmd{\\B}{\\fontseries{b}\\selectfont}\n\n\\begin{tabular}{@{} l l l l l @{}}\n\t\\toprule\n\tData Set\t\t& \\name{Autosklearn}\t\t& \\name{TPOT}\t\t& \\name{dswizard} & \\name{dswizard*}\\\\\n\t\\midrule\n\nAustralian \t& 0.9337 \\(\\pm\\) 0.0186\t& 0.9380 \\(\\pm\\) 0.0180\t& \\B 0.9660 \\(\\pm\\) 0.0125\t& 0.9461 \\(\\pm\\) 0.0219\t\\\\\nada\\_agnostic \t& 0.9046 \\(\\pm\\) 0.0161\t& 0.9034 \\(\\pm\\) 0.0136\t& \\B 0.9138 \\(\\pm\\) 0.0140\t& \\ul{0.9121 \\(\\pm\\) 0.0171}\t\\\\\nadult \t& \\B 0.9279 \\(\\pm\\) 0.0052\t& --- \t& 0.8933 \\(\\pm\\) 0.0121\t& 0.8921 \\(\\pm\\) 0.0103\t\\\\\nbank-marketing \t& \\B 0.9350 \\(\\pm\\) 0.0068\t& --- \t& 0.9172 \\(\\pm\\) 0.0302\t& 0.9090 \\(\\pm\\) 0.0132\t\\\\\nblood-transfusion\t& 0.7288 \\(\\pm\\) 0.0544\t& \\ul{0.7601 \\(\\pm\\) 0.0664}\t& \\B 0.7885 \\(\\pm\\) 0.0649\t& \\ul{0.7310 \\(\\pm\\) 0.0632}\t\\\\\ncredit-g \t& 0.7719 \\(\\pm\\) 0.0318\t& 0.7731 \\(\\pm\\) 0.0517\t& \\B 0.8527 \\(\\pm\\) 0.0260\t& 0.8050 \\(\\pm\\) 0.0315\t\\\\\neeg-eye-state \t& \\B 0.9909 \\(\\pm\\) 0.0033\t& --- \t& \\ul{0.9903 \\(\\pm\\) 0.0041}\t& --- \t\\\\\nhiggs \t& \\B 0.8084 \\(\\pm\\) 0.0061\t& 0.7902 \\(\\pm\\) 0.0123\t& 0.7263 \\(\\pm\\) 0.0844\t& 0.7667 \\(\\pm\\) 0.0345\t\\\\\njasmine \t& 0.8814 \\(\\pm\\) 0.0167\t& 0.8880 \\(\\pm\\) 0.0167\t& \\B 0.9073 \\(\\pm\\) 0.0177\t& 0.8977 \\(\\pm\\) 0.0209\t\\\\\nkc2 \t& 0.8162 \\(\\pm\\) 0.0931\t& \\ul{0.8252 \\(\\pm\\) 0.1599}\t& \\B 0.8911 \\(\\pm\\) 0.0447\t& 0.7867 \\(\\pm\\) 
0.0522\t\\\\\nkr-vs-kp \t& \\ul{0.9992 \\(\\pm\\) 0.0014}\t& 0.9975 \\(\\pm\\) 0.0046\t& \\B 0.9995 \\(\\pm\\) 0.0009\t& \\ul{0.9979 \\(\\pm\\) 0.0057}\t\\\\\nnomao \t& \\B 0.9956 \\(\\pm\\) 0.0009\t& 0.9936 \\(\\pm\\) 0.0044\t& 0.9939 \\(\\pm\\) 0.0015\t& 0.9935 \\(\\pm\\) 0.0018\t\\\\\nnumerai28.6 \t& \\ul{0.5291 \\(\\pm\\) 0.0053}\t& \\ul{0.5267 \\(\\pm\\) 0.0042}\t& \\B 0.5311 \\(\\pm\\) 0.0116\t& \\ul{0.5286 \\(\\pm\\) 0.0081}\t\\\\\nphoneme \t& \\ul{0.9629 \\(\\pm\\) 0.0101}\t& \\ul{0.9656 \\(\\pm\\) 0.0082}\t& \\B 0.9662 \\(\\pm\\) 0.0086\t& \\ul{0.9638 \\(\\pm\\) 0.0087}\t\\\\\nsa-heart \t& 0.7586 \\(\\pm\\) 0.0485\t& \\ul{0.7551 \\(\\pm\\) 0.1196}\t& \\ul{0.7968 \\(\\pm\\) 0.1065}\t& \\B 0.8321 \\(\\pm\\) 0.0651\t\\\\\nsylvine \t& \\ul{0.9899 \\(\\pm\\) 0.0037}\t& \\ul{0.9855 \\(\\pm\\) 0.0075}\t& \\B 0.9903 \\(\\pm\\) 0.0032\t& 0.9865 \\(\\pm\\) 0.0042\t\\\\\n\n\t\\bottomrule\n\\end{tabular}\n\n\\end{table}\n\n\\begin{table}[ht!]\n\\center\n\n\\caption{\n\tFinal test performance for all tested multiclass classification data sets using \\textit{logloss} as metric. 
Smaller values are better.\n}\n\\label{tbl:logloss_results_cash_solver}\n\n\\newrobustcmd{\\B}{\\fontseries{b}\\selectfont}\n\n\\begin{tabular}{@{} l l l l l @{}}\n\t\\toprule\n\tData Set\t\t& \\name{Autosklearn}\t\t& \\name{TPOT}\t\t& \\name{dswizard} & \\name{dswizard*} \\\\\n\t\\midrule\n\nHelena \t& \\B 3.0091 \\(\\pm\\) 0.1153\t& --- \t& \\ul{3.0226 \\(\\pm\\) 0.0829}\t& 3.2283 \\(\\pm\\) 0.2200\t\\\\\nJannis \t& \\B 0.7016 \\(\\pm\\) 0.0235\t& 0.7297 \\(\\pm\\) 0.0417\t& 0.7582 \\(\\pm\\) 0.0524\t& 0.7861 \\(\\pm\\) 0.0856\t\\\\\nShuttle \t& \\B 0.0006 \\(\\pm\\) 0.0004\t& --- \t& \\ul{0.0011 \\(\\pm\\) 0.0013}\t& --- \t\\\\\nanalcatdata\\_auth\t& \\ul{0.0691 \\(\\pm\\) 0.1285}\t& \\ul{0.0123 \\(\\pm\\) 0.0210}\t& \\ul{0.0182 \\(\\pm\\) 0.0378}\t& \\B 0.0073 \\(\\pm\\) 0.0090\t\\\\\nanalcatdata\\_dmft\t& 1.7520 \\(\\pm\\) 0.0233\t& --- \t& \\B 1.7147 \\(\\pm\\) 0.0288\t& 1.7498 \\(\\pm\\) 0.0676\t\\\\\ncar \t& \\ul{0.0045 \\(\\pm\\) 0.0075}\t& \\B 0.0033 \\(\\pm\\) 0.0035\t& \\ul{0.0122 \\(\\pm\\) 0.0295}\t& \\ul{0.0450 \\(\\pm\\) 0.1028}\t\\\\\nconnect-4 \t& \\B 0.4181 \\(\\pm\\) 0.0685\t& --- \t& 0.6831 \\(\\pm\\) 0.1107\t& 0.6989 \\(\\pm\\) 0.0745\t\\\\\njungle\\_chess\\_2pc\t& \\B 0.1786 \\(\\pm\\) 0.0350\t& 0.2390 \\(\\pm\\) 0.0106\t& --- \t& --- \t\\\\\nmfeat-factors \t& 0.1061 \\(\\pm\\) 0.0388\t& 0.1160 \\(\\pm\\) 0.0488\t& \\B 0.0615 \\(\\pm\\) 0.0387\t& \\ul{0.1122 \\(\\pm\\) 0.0533}\t\\\\\nmfeat-morphologic\t& 0.6183 \\(\\pm\\) 0.0687\t& 0.6560 \\(\\pm\\) 0.1035\t& \\B 0.5599 \\(\\pm\\) 0.0780\t& 0.5916 \\(\\pm\\) 0.0821\t\\\\\nsegment \t& \\ul{0.0737 \\(\\pm\\) 0.0692}\t& \\ul{0.0596 \\(\\pm\\) 0.0379}\t& \\B 0.0511 \\(\\pm\\) 0.0293\t& \\ul{0.0696 \\(\\pm\\) 0.0362}\t\\\\\nvehicle \t& 0.4525 \\(\\pm\\) 0.0495\t& \\ul{0.4407 \\(\\pm\\) 0.0898}\t& \\B 0.3889 \\(\\pm\\) 0.0576\t& \\ul{0.4329 \\(\\pm\\) 0.0378}\t\\\\\n\n\t\\bottomrule\n\\end{tabular}\n\n\\end{table}\n\nIt is apparent that, on average, \\name{dswizard} outperforms the other 
frameworks. However, absolute performance differences are small and, especially for \\textit{logloss}, often not significant. \\name{TPOT} and \\name{dswizard}\/\\name{dswizard*} struggled with some data sets. For \\name{TPOT}, single configuration evaluations often exceeded the global timeout, leading to an aborted evaluation. In contrast, \\name{dswizard}\/\\name{dswizard*} exceeded the available memory, leading to a crash. The results for \\name{dswizard*} show that meta-learning is able to significantly boost the results of \\name{dswizard} for \\(16\\) of \\(28\\) data sets. Yet, even without meta-learning, the more thorough evaluation of the pipeline search space yielded configurations outperforming either \\name{TPOT} or \\name{autosklearn} on \\(13\\) data sets.\n\nFig.~\\ref{fig:structures} shows the structure of the final pipelines aggregated over all data sets and folds. For better readability, we substituted each used algorithm by an abstract algorithm class, namely \\textit{balancing}, \\textit{classifier}, \\textit{decomposition}, \\textit{discretization}, \\textit{encoding}, \\textit{filtering}, \\textit{generation}, \\textit{imputation}, \\textit{scaling} and \\textit{selection}. The assignment of algorithms to algorithm classes is available in the online Appendix. Additionally, we treat ensembles of pipelines as sets of individual pipelines. Possible pipeline starts are indicated by rounded corners. The frequency of node and edge visits is encoded using a color scheme. Darker colors represent more frequent usage. 
For better readability, edges and nodes that appear in less than \\(1\\%\\) of all pipelines are excluded.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/autosklearn_circo.pdf}\n \\caption{\\name{autosklearn}}\n \\label{fig:autosklearn_structure}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/tpot_circo.pdf}\n \\caption{\\name{TPOT}}\n \\label{fig:tpot_structure}\n \\end{subfigure}\n \n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/dswizard_circo.pdf}\n \\caption{\\name{dswizard}}\n \\label{fig:dswizard_structure}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/dswizard_star_circo.pdf}\n \\caption{\\name{dswizard*}}\n \\label{fig:dswizard_star_structure}\n \\end{subfigure}\n \n \n \\caption{Schematic representation of final structures}\n \\label{fig:structures}\n\\end{figure}\n\nIn Fig.~\\ref{fig:autosklearn_structure} it is clearly visible that \\name{autosklearn} uses a semi-fixed pipeline structure where single steps can be omitted. A strict order of algorithms exists, from the top left of the graph to the bottom. \\textit{Imputation}, \\textit{balancing} and \\textit{classifier} are present in each pipeline and the remaining algorithms are roughly selected with identical frequencies. On average, each pipeline contains \\(5.48\\) algorithms. Due to the semi-fixed structure of \\name{autosklearn}, often ineffective algorithms are present in pipelines, e.g.\\, imputation even though the data set does not contain missing values. \\name{TPOT} is able to construct pipelines with highly varying shapes. However, each pipeline, on average, only contains \\(1.66\\) algorithms, mostly classifiers. 
Even though more complex pipelines are constructed, those complex pipelines represent less than \\(15\\%\\) of all pipelines, leading to the heavily pruned graph depicted in Fig.~\\ref{fig:tpot_structure}. \\name{dswizard} constructs more diverse pipelines in terms of selected algorithms and transitions between algorithms. Without meta-learning (compare Fig.~\\ref{fig:dswizard_structure} and \\ref{fig:dswizard_star_structure}), structure search is less guided, leading to longer pipelines (\\(2.85\\) vs. \\(3.07\\) algorithms) and a more uniform distribution of the selected algorithms. Yet, as \\name{dswizard*} performs worse than \\name{dswizard}, the guidance appears to be helpful for finding well-performing pipeline structures.\n\nFinally, we take a more detailed look at \\textit{credit-g} and \\textit{higgs}, two data sets where \\name{dswizard} performs particularly well and particularly poorly, respectively. For \\textit{credit-g}, \\(71\\)\\% of the pipelines in the final ensemble have a long, complex structure that cannot be created by \\name{autosklearn}. Most pipelines combine stacked classifiers with multiple preprocessors. For \\textit{higgs}, the meta-learning directs the structure search in the wrong direction, leading to many ineffective transformations. As a result, only very basic pipelines containing combinations of the same four algorithms are constructed. Even with \\ac{HPO}, these simple pipelines do not perform well.\n\nIn summary, \\name{dswizard} significantly outperforms either \\name{autosklearn} or \\name{TPOT} on \\(\\sim 42\\)\\% of the data sets. Moreover, it has a similar performance to the state-of-the-art on \\(\\sim 32\\)\\% of the data sets. \n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe presented a data-centric approach for solving the \\ac{PSO} problem, inspired by human behavior, using \\ac{MCTS} in combination with Bayesian optimization. 
The ability to expand the search space incrementally allows a dynamic adaptation of the pipeline structure to a specific data set. Unpromising regions of the pipeline search graph can be identified and discarded quickly---often even without \\ac{HPO}---through the extensive use of meta-features, allowing an efficient traversal of the growing search space. Furthermore, knowledge sharing between pipeline structures is implemented to warm-start \\ac{HPO}. This allows a more thorough exploration of the pipeline structure search space, with a dynamic adaptation to a specific data set, while still obtaining competitive results. \n\n\n\n\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOf the many dark matter candidates to have been proposed, those within the classification of Weakly Interacting Massive Particles (WIMPs) are often considered to be the best motivated (for reviews, see Refs.~\\cite{jungmanrev,bertonerev}). In particular, if stable particles with electroweak-scale masses and electroweak-scale interactions exist, they would have annihilated among themselves in the early universe at a rate that naturally led them to freeze out with a relic density similar to the measured dark matter abundance. This process of dark matter annihilation is also predicted to be taking place in the present universe, providing the basis for dark matter indirect detection experiments. Such experiments search for the annihilation products of dark matter particles, including electrons and\/or positrons, antiprotons, photons, and neutrinos. Promising sites for the observation of dark matter annihilation products include the core of the Sun~\\cite{SOS}, the Earth~\\cite{earth}, our Galactic halo~\\cite{galaxy}, the Galactic center~\\cite{GC}, and dwarf satellite galaxies~\\cite{dsph}. 
Here, we examine the sensitivity of the IceCube neutrino detector to dark matter annihilations taking place in dwarf satellite galaxies in the Milky Way.\n\n\nIn the case of dark matter candidates that annihilate primarily to gauge bosons or hadronic final states, gamma-ray telescopes provide a more sensitive test of dark matter annihilations taking place in dwarf spheroidals than can be accomplished with existing or planned neutrino telescopes.\n This is not necessarily the case, however, if the dark matter annihilates largely to leptons. Dark matter annihilating to leptonic final states has become increasingly well motivated by the anomalous observations recently reported by several cosmic ray experiments, including \n PAMELA, which has observed an excess of cosmic ray positrons (relative to electrons) between 10 and 100 GeV~\\cite{adriani} (along with previous experiments, including HEAT~\\cite{heat} and AMS-01~\\cite{ams01}, which also reported evidence for such an excess). A surplus of cosmic ray electrons and\/or positrons has also been reported by the Fermi Gamma-Ray Space Telescope (FGST)~\\cite{fgstelectrons}, however this excess is less than that previously reported by ATIC~\\cite{chang}. \n\n\n Although an excess of cosmic ray positrons\/electrons may come from more conventional sources such as pulsars~\\cite{pulsars},\n a great deal of interest has been generated in the possibility that these signals might result from dark matter particles annihilating in the local halo of the Milky Way. Efforts to produce such signals with dark matter, however, have faced some model-building challenges. First, as there is no observed excess of cosmic ray antiprotons, the annihilation channels must be ``leptophilic,\" {\\it i.e.}~the WIMPs must annihilate preferentially to leptons. Many models have been proposed with this property~\\cite{leptophilic}. 
Second, the local halo density of dark matter is insufficient to explain the signals unless the annihilation rate is supplemented by a large factor $\\sim 10 - 10^3$ relative to that predicted for a typical smoothly distributed thermal relic. Such a boost may arise due to the properties of the dark matter candidate itself; for example, via a nonperturbative Sommerfeld enhancement to the low-velocity cross section resulting from the exchange of a light state~\\cite{sommerfeld}, or a Breit-Wigner enhancement~\\cite{Feldman,Ibe:2008ye}. Because of the smaller velocity dispersion of dark matter particles in smaller halos, the annihilation cross section in a dwarf galaxy may be even higher than that in the halo of the Milky Way~\\cite{sommerfeld,kuhlen, strigari1,zentner}. Alternatively, an enhancement to the annihilation rate over the standard prediction can result if the dark matter is not distributed smoothly. In this paper, we examine bounds on the observed WIMP annihilation cross section, $\\langle \\sigma v \\rangle$, including any relevant enhancements or boost factors, the origin of which we leave unspecified. \n\n\nThe prospects for IceCube observations of high energy neutrinos from dark matter annihilations or decays in the Galactic halo have been considered in~\\cite{yuksel,ICann,ICdec,Mandal:2009yk}. It was found that IceCube will be sensitive to WIMP annihilation cross sections of order $10^{-24}$ to $10^{-23}$ cm$^{3}$s$^{-1}$, depending on the WIMP mass. As Super-Kamiokande is located in the Northern Hemisphere, and therefore is subject to significantly reduced atmospheric backgrounds from the direction of the Galactic center, its sensitivity to the neutrino flux from dark matter annihilations in the inner Milky Way has been studied in~\\cite{meade,hisano} and found to be roughly an order of magnitude less sensitive than the IceCube projections. 
Nonetheless, Ref.~\\cite{meade} finds that Super-Kamiokande already constrains annihilations to $\\tau^+ \\tau^-$ as the explanation for the cosmic ray anomalies (though this channel is also disfavored by the large expected $\\gamma$ flux), while Ref.~\\cite{hisano} finds that Super-Kamiokande may also constrain the scenario in which neutrinos are directly produced in the annihilations as frequently as charged leptons. While these studies are concerned with the signature of annihilations occurring in the Galactic center, here we focus instead on the annihilations taking place in satellite dwarf spheroidal galaxies of the Milky Way. In particular, we estimate the neutrino fluxes from WIMP annihilation in the dwarf galaxies Draco, Willman 1, and Segue 1, and examine the prospects for their observation with the IceCube neutrino telescope located at the South Pole.\n\n\n\\section{Dark Matter Annihilation In Dwarf Spheroidal Galaxies}\n\n \nIn general, the differential flux of particles of type $j$ from the annihilation of dark matter particles of mass $m_X$ is given by\n \\begin{equation}\n \\label{dN_dE}\n {d \\Phi_j(\\Delta \\Omega, E_j) \\over dE_j} = {\\langle \\sigma v \\rangle \\over 8 \\pi m_X^2}\n \\sum_F f_F {dN_{j,F} \\over dE_{j,F}} \\times \\bar J(\\Delta \\Omega) \\Delta \\Omega \n \\end{equation}\nwhere $f_F$ is the fraction of annihilations which produce a final state $F$, and $dN_{j,F}\/dE_{j,F}$ is the differential spectrum of particles $j$ from an annihilation to final state $F$.\n$\\bar J(\\Delta \\Omega)$ is the square of the dark matter density integrated along the line-of-sight, averaged over the solid angle $\\Delta \\Omega$:\n \\begin{equation}\n \\label{j}\n \\bar{J}(\\Delta \\Omega) = {1 \\over \\Delta \\Omega} \\int_{\\Delta \\Omega} d\\Omega \\,\n \\int_{l.o.s.} \\rho_X^2(s) ds.\n \\end{equation}\n\n\n\nDwarf spheroidal galaxies are promising sources for indirect dark matter searches for a number of reasons. 
First, they contain relatively large dark matter densities, and thus may produce sizable fluxes of annihilation products. Furthermore, dwarf spheroidals contain relatively little in the way of baryonic material (stars, gas, etc.) and are astrophysically simple. The typical mass of known dwarf spheroidal galaxies is $\\sim 10^7 \\, {\\rm M_\\odot}$, which is distributed over a volume on the order of a cubic kiloparsec. Dwarfs are largely devoid of astrophysical activity, and have very large mass to light ratios, Segue 1 being a particularly extreme example~\\cite{geha}. The systems therefore lack astrophysical sources which could potentially mimic a signal of dark matter annihilation. Although simulations suggest that many more are likely to exist, approximately 25 dwarf galaxies within the local group have been discovered thus far.\n\nThe dark matter distribution $\\rho(r)$ within a dwarf spheroid can be fit with a five parameter density profile~\\cite{strigari3} \n\\begin{equation}\n\\label{genrho}\n\\rho(r) = \\frac{\\rho_0}{(\\frac{r}{r_s})^a(1+(\\frac{r}{r_s})^b)^\\frac{c-a}{b}},\n\\end{equation}\nwhere $r$ is the distance from the dynamical center of the dwarf galaxy, $r_s$ is the scale radius, and $\\rho_0$ is the central core density. Typical ranges for the parameters $a$, $c$, and $b$, which determine the inner slope, outer slope, and transition between the two, respectively, are: $a=[0.0-1.5]$, $b=[0.5-1.5]$, and $c=[2-5]$. $N$-body simulations find cusped inner profiles with $a=[1.0-1.5]$. The profile of each individual halo can vary, however, depending upon its own merger history. Given this uncertainty, we follow Ref.~\\cite{strigari3} which marginalized over these five parameters, including $r_s$ and $\\rho_0$, over the ranges specified above with flat priors. 
Although the parameter ranges are generous, Ref.~\\cite{strigari3} finds that the velocity dispersion data fix the line-of-sight integral, $\\bar{J}(\\Delta \\Omega)$, to lie in a relatively small range. For details, see Ref.~\\cite{haloprifiles}. \n\nPresently, more than half a dozen dwarf galaxies are known which could potentially provide an observable flux of neutrinos. These dwarfs are relatively near the Solar System (tens of kpc) and, importantly, are in the northern hemisphere. This enables IceCube's background of atmospheric muons to be largely avoided by looking through the Earth, towards the northern hemisphere. \nWe have selected three nearby dwarfs as sources of interest for IceCube; Draco, Willman 1, and Segue 1. In Table~\\ref{tab:dwarfs}, we display the relevant properties of these satellite galaxies~\\cite{ned}, with $\\bar{J}(\\Delta\\Omega)\\Delta\\Omega$ and errors representing the $2\\sigma$ range taken from Ref.~\\cite{sehgalTeVPA}.\n\\begin{table}[h]\n \\caption{Properties of selected Milky Way dwarf galaxies.\n \\label{tab:dwarfs}}\n \\begin{tabular}{l c c c}\n Galaxy & Distance (kpc) & Log$_{10}\\bar{J}(\\Delta\\Omega)\\Delta\\Omega$ & Declination \\\\\n \\hline\n Draco & $80$ & $18.63 \\pm 0.73$ & $+57^\\circ 54' 55\"$ \\\\\n Willman 1 & $38$ & $19.55 \\pm 1.19$ & $+51^\\circ 03' 00\"$ \\\\\n Segue 1 & $23$ & $19.88 \\pm 0.82$ & $+16^\\circ 04' 55\"$\n \\end{tabular}\n\\end{table}\nWe note that there is some uncertainty in the observational status of Willman 1; it is currently unclear whether it is a dwarf galaxy or a globular cluster. It follows that there is considerably more uncertainty in the value of $\\bar{J}(\\Delta\\Omega)\\Delta\\Omega$.\n\n\nIceCube is designed to have an angular resolution of approximately $1^\\circ$ for muon tracks~\\cite{resconi}. Given this resolution, the dwarf galaxies in Table~\\ref{tab:dwarfs} are sufficiently small and distant that we can treat them as point sources. 
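To make the role of the halo parameters concrete, the line-of-sight integral of Eq.~\\ref{j} can be evaluated numerically for a point-like source. The Python sketch below is illustrative only: the profile parameters and the integration range are hypothetical, and a cored profile (\(a=0\)) is used so that the central line of sight converges without performing the full solid-angle average (which is what regularizes cusped profiles).

```python
KPC_TO_CM = 3.086e21  # centimeters per kiloparsec


def rho(r_cm, rho0, rs_cm, a, b, c):
    """Generalized five-parameter density profile, as in the text."""
    x = r_cm / rs_cm
    return rho0 / (x**a * (1.0 + x**b) ** ((c - a) / b))


def jbar_point_source(dist_kpc, rho0, rs_kpc, a, b, c, rmax_kpc=2.0, n=4000):
    """Trapezoidal line-of-sight integral of rho^2 through the halo center.

    For a point-like source the solid-angle average reduces to a single
    integral along the central line of sight, s in [d - rmax, d + rmax].
    With rho0 in GeV/cm^3 the result is in GeV^2 cm^-5.
    """
    d = dist_kpc * KPC_TO_CM
    rs_cm = rs_kpc * KPC_TO_CM
    rmax = rmax_kpc * KPC_TO_CM
    ds = 2.0 * rmax / n
    total = 0.0
    for i in range(n + 1):
        s = (d - rmax) + i * ds
        weight = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        total += weight * rho(abs(d - s), rho0, rs_cm, a, b, c) ** 2 * ds
    return total
```

Since the integrand scales as \(\rho_0^2\), doubling the central density quadruples \(\bar{J}\), a quick consistency check for any such implementation.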
Atmospheric \\c{C}erenkov Telescopes (ACTs) and the Fermi Gamma-Ray Space Telescope have angular resolutions much smaller than that of IceCube. For example, VERITAS has observed Draco, Ursa Minor, and Willman 1 over regions within $0.15^\\circ$ of each galaxy's center~\\cite{hui}. \n\n\n\nSeveral dwarf galaxies have been observed by ACTs as\npotential sources of gamma-rays:\nDraco by MAGIC~\\cite{albert} and STACEE~\\cite{stacee}, Sagittarius (which is visible from\nthe Southern Hemisphere) by H.E.S.S.~\\cite{aharonian}, and Draco, Willman 1, and Ursa Minor by VERITAS~\\cite{hui}. None of these telescopes has observed any significant signal and, therefore, each has placed upper limits on the flux of gamma-rays coming from these sources. The strongest constraints on the annihilation rate of WIMPs in dwarf galaxies come from H.E.S.S. and VERITAS~\\cite{strigari3}. H.E.S.S. observations of the center of the Sagittarius dwarf yield an upper limit on the gamma-ray flux above 250 GeV of \n$\\Phi(E_{\\gamma}>{\\rm 250 \\,GeV}) \\mathrel{\\raise.3ex\\hbox{$<$\\kern-.75em\\lower1ex\\hbox{$\\sim$}}} 3.6 \\times 10^{-12} \\,{\\rm cm^{-2} s^{-1}}$\nat the 95\\% confidence level~\\cite{aharonian}. \nVERITAS observations of Draco, Ursa Minor, and Willman 1 bound the gamma-ray flux from each of these objects above 200~GeV to be $\\Phi(E_{\\gamma}>{\\rm 200 \\,GeV}) \\mathrel{\\raise.3ex\\hbox{$<$\\kern-.75em\\lower1ex\\hbox{$\\sim$}}} 2.4 \\times 10^{-12} \\,{\\rm cm^{-2} s^{-1}}$~\\cite{hui}. \n\n\n\n\n\n\n\n \n\n\n\\section{Muon Neutrinos From Dwarf Spheroidal Galaxies}\n\nIn this study, we consider three leptophilic scenarios: WIMPs which annihilate solely to $\\tau^+ \\tau^-$, solely to $\\mu^+ \\mu^-$, or to both $\\mu^+ \\mu^-$ and $\\nu_\\mu \\bar{\\nu}_\\mu$, each with branching fractions of 50\\%. 
\nThe differential flux of neutrinos from annihilations directly to neutrino-antineutrino pairs in a dwarf galaxy is given by Eq.~\\ref{dN_dE}, and the $\\nu_\\mu$ differential spectrum at production takes the very simple form\n\\begin{equation}\n\\label{nuspectrum}\n\\frac{dN_{\\nu_\\mu}}{dE_{\\nu_\\mu}}=\\frac{dN_{\\bar{\\nu}_\\mu}}{dE_{\\bar{\\nu}_\\mu}}=\\delta(E_{\\nu_\\mu}-m_X).\n\\end{equation}\nIn the cases in which the WIMPs annihilate to $\\tau^+\\tau^-$ or $\\mu^+\\mu^-$, we use PYTHIA~\\cite{pythia} to calculate the resulting neutrino spectra. In all cases, we include the effects of three-flavor vacuum oscillations~\\cite{pdg}.\n\n\n\nAtmospheric muon neutrinos constitute the most serious background for the detection of $\\nu_\\mu$ from dark matter annihilations in northern-hemisphere dwarf spheroidal galaxies. This background is a function of the observed zenith angle~\\cite{gaisser}. We use Ref.~\\cite{Honda:2006qj} to calculate the atmospheric neutrino background specific to the direction of each of the three source dwarfs. We note that the predicted background spectra are in agreement with the observations of AMANDA-II~\\cite{Collaboration:2009nf}. \n\n\n\n\nThe rate of muon tracks from charged current neutrino interactions observed at IceCube is calculated by combining the incoming spectrum of muon neutrinos with the probability of those neutrinos being converted to muons above the energy threshold of the telescope~\\cite{halzen}, which is given by\n\\begin{equation}\n\\label{nuprob}\nP(E_\\nu, E^{{\\rm thr}}_\\mu)_{\\mu}=\\rho \\,\\, {\\rm N_A} \\int^1_0 dy \\frac{d\\sigma_{\\nu N}}{dy}(E_\\nu,y) \\, [R_{\\mu}(E_\\nu (1-y),E^{{\\rm thr}}_\\mu)+L].\n\\end{equation}\nHere, $\\rho=\\rho_{\\rm ice}\\approx0.9$ g cm$^{-3}$ is the density of the target medium (ice)\nand $\\rm N_A=6.022\\times10^{23}$ is Avogadro's number. 
The total charged current\ncross section for neutrino nucleon scattering, $\\sigma_{\\nu N}$, can be taken as\n\\begin{equation}\n\\frac{\\sigma_{\\nu N}(E_\\nu)+\\sigma_{\\bar\\nu N}(E_\\nu)}{2}=3.06\\times10^{-36} \\bigg(\\frac{E_\\nu}{600 \\,{\\rm GeV}}\\bigg)^{0.96} {\\rm cm^{2}}\n\\end{equation}\nfor 100 GeV $< E_\\nu < 1$ TeV, and is approximately flatly distributed in $y$~\\cite{gandhi}. Finally, $R_{\\mu}+L$ is the effective detector size, the sum of the physical length of the detector, $L$, and the distance a muon \ntravels before its energy falls below the threshold of the experiment, $R_\\mu$. The muon range in ice is \n\\begin{equation}\nR_{\\mu} \\approx 2.4\\,{\\rm km} \\times \\ln\\bigg[\\frac{2.0+4.2 (E_{\\mu}\/{\\rm TeV})}{2.0+4.2 (E_\\mu^{{\\rm thr}}\/{\\rm TeV})}\\bigg].\n\\end{equation}\nThe event rate for both the background and the signal is given by the integral\n\\begin{equation}\n\\label{events}\n{N}=\\int A_{\\rm eff} \\, P(E_\\nu,E_\\mu^{{\\rm thr}})\\frac{d\\Phi}{dE_\\nu} dE_\\nu.\n\\end{equation}\nThe effective area, $A_{\\rm eff}$, in IceCube is approximately one square kilometer. The DeepCore supplement to the \nIceCube detector is designed to increase the sensitivity to low energy neutrino-induced muons with 10 GeV $\\lesssim \nE_\\mu \\lesssim$ \n100 GeV, with the improvement most substantial at the lowest energies~\\cite{nuArea}. However, the dark matter \nannihilation signal in IceCube and DeepCore is dominated by muons with energies close to the dark matter mass: for the lowest WIMP masses considered here, $m_X \n=100$ GeV, the signal comes primarily from neutrinos with 50 GeV $\\lesssim E_\\nu \\lesssim$ 100 GeV. In this \ncase, considering only the IceCube and DeepCore sensitivities reported in Fig. 4 of \nRef.~\\cite{nuArea}, we expect an improvement of roughly 20\\% over IceCube alone. \n\nWIMP masses below $\\sim 100$ GeV are not compatible with the anomalous cosmic ray excesses. 
However, the sensitivity \nto neutrinos with $E_\\nu \\sim 10$ GeV is improved by including the DeepCore addition by more than an order of \nmagnitude over that of IceCube alone~\\cite{nuArea}. At the same time, since the maximal angle between a \nmuon track and the primary neutrino increases at low neutrino energies as $1\/\\sqrt{E_\\nu}$, the angular resolution is \nsignificantly degraded at low energies, resulting in a decreased signal to background ratio for localized objects such \nas the dwarf satellite galaxies in Table~\\ref{tab:dwarfs}. We therefore estimate that the DeepCore addition would \nimprove the sensitivity to low-mass dark matter annihilating in dwarf satellite galaxies by, at most, a factor of a few. \nHowever, as the cosmic ray anomalies do not favor light leptophilic dark matter, we report only on the \nsensitivity of the IceCube neutrino detector to dark matter annihilations in dwarf galaxies for WIMPs with $m_X > 100$ \nGeV.\n\nIn comparing the event rate to that predicted from the background of atmospheric neutrinos, we include events with energy up to 25\\% greater than the WIMP mass, which conservatively accounts for the finite energy resolution of IceCube~\\cite{resconi}.\nWe can now calculate the exposure time, T, necessary to achieve a detection with a given level of statistical significance over the background of atmospheric neutrinos.\nSince the background is essentially isotropic over the angular window of interest we follow the analysis of Ref.~\\cite{bergstrom} \nto find the minimum exposure time,\n\\begin{equation}\n{\\rm T}= \\sigma_{stat}^2 \\frac{N_{\\rm atm}+N_{\\rm DMA}}{N_{\\rm DMA}^2},\n\\end{equation}\nwhere $N_{\\rm DMA}$ is the event rate from dark matter annihilations in \nthe dwarf and $N_{\\rm atm}$ is the event rate from the atmospheric background. 
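The formulas above can be combined into a compact numerical estimate. The following Python sketch is illustrative only: the detector length (\(L = 1\) km), the muon energy threshold, and the flat-in-\(y\) approximation for the differential cross section are the simplifying assumptions stated in the text, not IceCube's actual instrument response.

```python
import math

N_A = 6.022e23   # Avogadro's number [1/g]
RHO_ICE = 0.9    # density of ice [g/cm^3]


def sigma_nu_n(e_nu):
    """Spin-averaged charged-current nu-N cross section [cm^2],
    valid for 100 GeV < E_nu < 1 TeV (energies in GeV)."""
    return 3.06e-36 * (e_nu / 600.0) ** 0.96


def muon_range(e_mu, e_thr):
    """Distance [cm] a muon travels in ice before falling below e_thr."""
    if e_mu <= e_thr:
        return 0.0
    km = 1.0e5  # cm per km
    return 2.4 * km * math.log((2.0 + 4.2 * e_mu / 1.0e3) /
                               (2.0 + 4.2 * e_thr / 1.0e3))


def p_mu(e_nu, e_thr, det_length=1.0e5):
    """Neutrino-to-muon conversion probability as defined in the text,
    with d(sigma)/dy taken flat in y and effective size R_mu + L
    (midpoint rule over the inelasticity y)."""
    n = 200
    dy = 1.0 / n
    integral = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        integral += (muon_range(e_nu * (1.0 - y), e_thr) + det_length) * dy
    return RHO_ICE * N_A * sigma_nu_n(e_nu) * integral


def min_exposure(n_sig, n_bkg, sigma_stat=2.0):
    """Minimum exposure T = sigma_stat^2 (N_atm + N_DMA) / N_DMA^2,
    for event rates given per unit time."""
    return sigma_stat**2 * (n_bkg + n_sig) / n_sig**2
```

Note that the signal rate enters quadratically in the denominator, so halving \(N_{\rm DMA}\) at fixed background roughly quadruples the required exposure.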
\nIn the following section, we use this approach to determine the sensitivity of IceCube to dark matter annihilations taking place in the dwarf galaxies Draco, Willman 1, and Segue 1. \n\n\n\\section{Prospects For IceCube And Comparison To Gamma-Ray Searches}\n\nWe now compare the relative merits of gamma-ray and neutrino observations\nin their ability to place limits on the dark matter annihilation cross section in leptophilic models. \nIn Fig.~\\ref{fig:draco}, we present IceCube's sensitivity to neutrinos from dark matter annihilations in Draco, after 10 years of observation. In the left panel, we show results for the annihilation channels $XX \\rightarrow \\mu^+ \\mu^-$ in red (lower contours) and $XX \\rightarrow \\tau^+ \\tau^-$ in blue (upper contours). As a consequence of neutrino oscillations, red and blue contours are distinct only at low $m_X$. The thick dashed lines denote the projected 2$\\sigma$ upper limit on the annihilation cross section as a function of mass (using the $2\\sigma$ low value for $\\bar{J}(\\Delta\\Omega)\\Delta\\Omega$), whereas the shaded regions bounded by thin solid lines denote \nthe minimal cross sections necessary for a 5$\\sigma$ discovery, given the $\\pm 2\\sigma$ range of values for $\\bar{J}(\\Delta\\Omega)\\Delta\\Omega$. \nIn the right panel, results are shown for the case in which $XX \\rightarrow \\mu^+ \\mu^-$ and $XX \\rightarrow \\nu_\\mu \\bar{\\nu}_\\mu$ each occur with branching fractions of 50\\%. One can see that, due to the uncertainty in the dark matter halo properties, the potential discovery regions extend well below the exclusion contours. \n\nFor comparison, in each frame we also show the corresponding upper limits on the dark matter annihilation cross section from VERITAS observations of Draco, as found in Ref.~\\cite{strigari3}. 
In the left panel, the grey and black dotted curves represent the upper limits on $\\langle \\sigma v \\rangle$ assuming annihilations to $\\mu^+ \\mu^-$ and $\\tau^+ \\tau^-$, respectively, while in the right panel, the grey dotted curve is the VERITAS upper limit assuming $X X \\rightarrow \\mu^+ \\mu^-$ and $XX \\rightarrow \\nu_\\mu \\bar{\\nu}_\\mu$ with equal branching fractions. We note that Ref.~\\cite{strigari3} uses the 90\\% confidence level lower limit on $\\bar{J}(\\Delta\\Omega)\\Delta\\Omega$, while we consider the slightly more generous $2\\sigma$ lower limit. Our projected sensitivities are therefore slightly more conservative. As Draco is the least bright of the dwarf galaxies considered here, we find that for WIMP masses $\\gtrsim 200$ GeV, VERITAS has already constrained the annihilation cross section roughly as tightly as IceCube will with 10 years of data for annihilation purely to $\\mu^+ \\mu^-$ or $\\tau^+ \\tau^-$. In the case of annihilations to $\\mu^+ \\mu^-$, IceCube may place competitive limits on the WIMP annihilation cross section only for $m_X \\mathrel{\\raise.3ex\\hbox{$>$\\kern-.75em\\lower1ex\\hbox{$\\sim$}}} 5$ TeV or $m_X \\lesssim 200$ GeV. For annihilation to $\\tau^+ \\tau^-$ the current constraints on $\\langle \\sigma v \\rangle$ from VERITAS observations of Draco are stronger than any IceCube will be able to achieve in the next 10 years for $m_X > 200$ GeV. This stems from the fact that taus often decay hadronically, resulting in many more high energy photons than in the purely leptonic decays of muons. For annihilation to taus, a substantial portion of IceCube's discovery reach has already been excluded by VERITAS. \n\n\nInstead of presenting our results as limits (or discovery reach) on the dark matter annihilation cross section, we can alternatively describe them in terms of the boost factor to the annihilation rate in a given dwarf spheroidal galaxy. 
Along the right side of each frame, the constraints and discovery prospects are given in terms of this quantity, which we define as the annihilation rate divided by that predicted for dark matter with $\\langle \\sigma v \\rangle = 3 \\times 10^{-26}$ cm$^3$\/s and distributed smoothly over the halo. This definition allows us to leave the origin of the boost factor unspecified.\n\n\nTurning to the right panel, we find that if neutrinos and muons are each directly produced in 50\\% of annihilations, IceCube will have sensitivity comparable to or better than that of VERITAS for all WIMP masses.\nIn fact, in this case IceCube's discovery reach is largely untested by ACTs. \nWe note that if the branching fraction to neutrinos is larger than 50\\%, IceCube's sensitivity will be further increased, while that of ACTs and other gamma-ray telescopes will be reduced.\n\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\mbox{\\includegraphics[width=.47\\textwidth]{mutauDra.pdf}}\n\\hspace{0.03\\textwidth}\n\\mbox{\\includegraphics[width=.47\\textwidth]{nuDra.pdf}}\n\\end{center}\n\\vspace{-0.5cm}\n\\caption{\\it Projected upper limits on the dark matter annihilation cross section, $\\langle \\sigma v \\rangle$ (alternatively, boost factor), at 95\\% confidence level as a function of mass (thick dashed) from neutrino observations of the dwarf spheroidal galaxy Draco. Also shown are the 5$\\sigma$ discovery regions (shaded). In the left frame, the red (lower) region and lines denote the case in which dark matter annihilates to $\\mu^+\\mu^-$, whereas the blue (upper) regions and lines denote annihilations to $\\tau^+\\tau^-$. The right frame is for the case in which dark matter annihilates to both $\\mu^+\\mu^-$ and $\\nu_{\\mu}\\bar{\\nu}_{\\mu}$ with equal probability. In each frame, the dotted lines denote the current limits from gamma-ray observations by VERITAS. 
See text for more details.\n\\label{fig:draco}}\n\\end{figure}\n\n\\begin{figure}[t]\n\\begin{center}\n\\mbox{\\includegraphics[width=.47\\textwidth]{mutauWil.pdf}}\n\\hspace{0.03\\textwidth}\n\\mbox{\\includegraphics[width=.47\\textwidth]{nuWil.pdf}}\n\\end{center}\n\\vspace{-0.5cm}\n\\caption{\\it The same as Fig.~1, but for observations of the dwarf spheroidal galaxy Willman 1. \n\\label{fig:willman}}\n\\end{figure}\n\nIn Fig.~\\ref{fig:willman} we present the same information as Fig.~\\ref{fig:draco}, but for the dwarf galaxy Willman 1. Willman 1 is somewhat brighter than Draco, and therefore we expect all sensitivities to improve. However, there is also considerably more uncertainty in the halo profile, leading to large discovery regions and inflated prospective limits. For annihilations to charged leptons only, boost factors of ${\\cal O}(10^2)$ may be accessible at IceCube if $\\bar{J}(\\Delta\\Omega)\\Delta\\Omega$ is near its $2\\sigma$ upper limit. For low $m_X$, these annihilation cross sections have not yet been probed by ACTs. For annihilation to $\\mu^+\\mu^-$ and $\\nu_\\mu \\bar{\\nu}_\\mu$, shown in the right panel, boost factors as low as ${\\cal O}(10)$ may be accessible in the most optimistic case.\n\n\\begin{figure}[t]\n\\begin{center}\n\\mbox{\\includegraphics[width=.47\\textwidth]{mutauSeg.pdf}}\n\\hspace{0.03\\textwidth}\n\\mbox{\\includegraphics[width=.47\\textwidth]{nuSeg.pdf}}\n\\end{center}\n\\vspace{-0.5cm}\n\\caption{\\it The same as Fig.~1, but for observations of the dwarf spheroidal galaxy Segue 1. In this figure, the dotted lines denote the preliminary limits from the Fermi Gamma-Ray Space Telescope~\\cite{fgst}.\n\\label{fig:segue}}\n\\end{figure}\n\nFinally, we examine the closest of the three dwarfs we have considered, Segue 1. 
In Fig.~\\ref{fig:segue}, we present IceCube's projected sensitivity to dark matter annihilations in Segue 1, along with the current upper limits on the annihilation cross section from FGST measurements~\\cite{fgst}. For annihilation to muons (taus), IceCube may eventually set stronger exclusion limits for $m_X > 450 (750)$ GeV. If there is a 50\\% branching fraction to neutrinos, IceCube will do better for all WIMP masses shown. As Segue 1 is newly discovered, we expect that ACTs will observe this galaxy in the near future.\n\n\n\n\\section{Discussion and Conclusions}\n\nIn this article, we have considered the prospects for the kilometer-scale neutrino telescope IceCube to detect neutrinos from dark matter annihilating in dwarf spheroidal galaxies, and have compared these to the constraints placed by gamma-ray telescopes. We find that if dark matter annihilates primarily to muons, taus, and\/or neutrinos, IceCube can potentially provide constraints comparable to or even stronger than those obtained by current gamma-ray telescopes.\n\n\nOther dwarf spheroidal galaxies, such as Ursa Minor, which yield weaker gamma-ray limits on the dark matter annihilation cross section, will also be less promising targets for observation by IceCube. An analysis of the neutrino flux from all dwarf galaxies accessible to IceCube could potentially improve the sensitivity to the annihilation cross section; however, the maximal improvement, assuming all dwarfs are equal, is proportional to $1\/\\sqrt{N}$, where $N$ is the number of dwarfs in the sample. Of course, not all dwarfs are equal. In order to carry out such an analysis, one must assume that the mechanism responsible for the boosted (relative to thermal) annihilation cross section results in the same observed cross section in each dwarf galaxy. 
As the velocity dispersion in each dwarf is independent and the clumpiness of each dwarf is unknown, there is no reason to expect that the observed annihilation cross section in any two dwarfs should be the same; thus, at this point, the only appropriately model-independent interpretation is to view the annihilation signal from each dwarf independently. Given the velocity dispersion in each dwarf, one could derive limits on particular dark matter models assuming a specific velocity-dependent enhancement to the annihilation cross section. Such a study may be useful in the near future.\n\n\n\\bigskip \n\nKF is supported by the US Department of Energy and MCTP via the Univ.~of Michigan and the National Science Foundation under Grant No.~PHY-0455649; DS and DH are supported by the US Department of Energy, including grant DE-FG02-95ER40896; DH is also supported by NASA grant NAG5-10842; PS is supported by the National Science Foundation under Grant No.~PHY-0455649. MB is supported by the US Department of Energy, under grant DE-FG03-92-ER40701. K. F. would like to thank the Aspen\nCenter for Physics and the Texas Cosmology Center, and P. S. would like to thank MCTP.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction: \\label{INTRODUCCION}}\n\nRecurrent neural networks are used to model different areas of the brain, such as the cortex and prefrontal cortex, areas with high recurrence in their connections; this recurrence is present even in early sensory areas that receive stimuli from subcortical areas \\cite{Murphy2009}. Different approaches, topologies, and training methods have been proposed using such networks \\cite{SUSSILLO2014156, BARAK20171, NIPS2019_9694}. The advances made have been guided by results obtained in different experiments, such as multiple single-unit recordings or neuroimaging data \\cite{10.1371\/journal.pcbi.1005542, Pandarinath2018}. 
For example, some models, such as ORGANICs \\cite{Heeger22783}, have been inspired by progress in the field of machine learning \\cite{Hassabis2017}, where configurations such as LSTM and GRU are widespread and have been used to process temporal sequences, since they do not share the limitations of simple RNNs in processing long time dependencies \\cite{69e088c8129341ac89810907fe6b1bfe, NIPS2015_5955, pascanu2013, 279181}.\n\n\nHowever, the dynamics of the simple RNN still constitutes a vast field of study. It is essential to understand the dynamics behind such models because they are used to construct different hypotheses about the functioning of brain areas and to explain the observed experimental results \\cite{BARAK20171, KAO2019122}. \n\nIt has long been known that network dynamics is strongly influenced by the eigenvalue spectrum of the weight matrix that describes the synaptic connections. This spectrum has been the subject of study under different connectivity model hypotheses \\cite{PhysRevLett.97.188104, PhysRevE.88.042824, PhysRevE.100.052315, doi:10.1162\/neco.2009.12-07-671, Goldman2009}.\n\n\nIn the present work, the focus is the study of the dynamics of recurrent neural networks trained to perform Boolean-type operations on temporal stimuli at their input that mimic sensory signals. In particular, the networks' recurrent weights were initialized from a normal distribution with zero mean and variance $\\frac{1}{N}$ and trained using backpropagation through time with the Adam method \\cite{DBLP:journals\/corr\/KingmaB14}. \n\nIn our previous work, whose preliminary version can be found in \\cite{jarne2019detailed}, we have illustrated a set of properties of these networks. 
In the present work, the different aspects of the dynamics are studied in depth, and an interpretation is provided for the results of the numerical simulations corresponding to networks trained for the AND, OR, and XOR tasks, and for a Flip Flop.\n\nThe motivation for the selection of these tasks is twofold. On the one hand, they simulate flow-control processes that can occur in the cortex when receiving stimuli from subcortical areas \cite{10.3389\/fncom.2011.00001}. On the other hand, these tasks are the basic, lowest-level operations of any digital system. The Flip Flop, in particular, is the simplest sequential system that one can build \cite{floyd2003digital}. \n\nIt has been previously proposed that some sets of neurons in the brain could roughly function as gates \cite{10.3389\/fncom.2011.00001}. The dynamics of networks trained for the Flip Flop task is also interesting in itself. It has been studied previously in \cite{DBLP:journals\/neco\/SussilloB13, SUSSILLO2014156}, although for a more complex task involving a 3-bit register, called in that work a 3-bit Flip Flop.\n\nSo far, there are few detailed studies of the eigenvalues of the recurrent weight matrix of trained networks. The work of Rivkind and Barak \cite{PhysRevLett.118.258101} stands out, although its framework is Reservoir Computing; the present work shares some of the observations made by those authors. Other works considered matrices with partially random and partially structured connectivity, such as \cite{PhysRevE.88.042824, PhysRevE.91.012820, 10.1371\/journal.pcbi.1006309}. The results of these works were also considered in the present analysis.\n\nMost of the existing literature on eigenvalues and dynamics concerns networks with random connections \cite{doi:10.1137\/1129095, PhysRevLett.97.188104, PhysRevE.97.062314}. 
Besides, older works on dynamics consider, for example, other constraints such as symmetric matrices \cite{VIBERT1994589}.\n\nFor these reasons, the present analysis represents a significant contribution through the study of eigenvalues when considering non-normal matrices and trained networks. The richness of the dynamics that can be observed in a minimal model of trained networks performing these tasks is surprising.\n\nThe model is presented in Section \ref{MODELO}. In Section \ref{RESULTADOS NUMERICOS} the results are shown, together with how to classify the realizations obtained after training (network simulations). In Section \ref{DISCUSION} the different aspects that arise from the realizations are discussed. Finally, in Section \ref{CONCLUSIONS}, some remarks and further directions are presented.\n\n\section{Description of the model \label{MODELO}}\n\nEquation \ref{eq-01} rules the dynamics of the $n$ interconnected units in the neural network, with $i=1,2,\dots,n$ \cite{Hopfield3088}.\n\n\begin{center}\n\begin{equation}\n\frac{dh_i(t)}{dt}=-\frac{h_i(t)}{\tau}+\sigma \left( \sum_{j}w^{Rec}_{ij}h_j(t)+\sum_{j}w^{in}_{ij} x_j \right)\n\label{eq-01}\n\end{equation} \n\end{center}\n\nHere $\tau$ represents the time constant of the system and $\sigma$ is a non-linear activation function. The $x_j$ are the components of the input signal vector $\mathbf{X}$. The matrix elements $w^{Rec}_{ij}$ are the synaptic connection strengths of the matrix $\mathbf{W^{Rec}}$, and $w^{in}_{ij}$ are the matrix elements of $\mathbf{W^{in}}$ from the input units. 
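To make the model concrete, Equation \ref{eq-01} can be integrated with a simple Euler scheme. The following minimal sketch (illustrative sizes, step size, and random weights; not one of the trained networks) simulates the response to a constant pulse on one input:

```python
import numpy as np

def rnn_step(h, x, w_rec, w_in, dt=0.1, tau=1.0):
    """One Euler step of dh/dt = -h/tau + tanh(W_rec h + W_in x)."""
    return h + dt * (-h / tau + np.tanh(w_rec @ h + w_in @ x))

rng = np.random.default_rng(0)
n, n_in = 50, 2
# Recurrent weights: zero mean, variance 1/N (i.e. std 1/sqrt(N))
w_rec = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
w_in = rng.normal(0.0, 1.0, size=(n, n_in))

h = np.zeros(n)
x = np.array([1.0, 0.0])  # a rectangular pulse held on input A
for _ in range(200):
    h = rnn_step(h, x, w_rec, w_in)
```

Because the activation is bounded and the leak term pulls $h$ toward zero, the simulated activities remain inside $(-1, 1)$.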
As already mentioned in Section \ref{INTRODUCCION}, the recurrent weights are initialized from a normal distribution with zero mean and variance $\frac{1}{N}$.\n\nThe readout in terms of the matrix elements $w^{out}_{ij}$ of $\mathbf{W^{out}}$ is:\n\n\begin{center}\n\begin{equation}\n\mathbf{Z(t)}= \sum_{j}w^{out}_{ij}h_j(t)\n\label{eq-02}\n\end{equation}\n\end{center}\n\nFor this study, $\sigma(\cdot) = \tanh(\cdot)$ and $\tau = 1$ were considered, without loss of generality. The model is discretized through the Euler method for implementation. It was implemented in Python using Keras and TensorFlow \cite{chollet2015keras, tensorflow2015-whitepaper}, which allows making use of the current algorithms and optimizations developed and maintained by a large research community. The procedure has been used previously in \cite{jarne2019detailed}, where it is described in detail. \n\nNetworks were trained using backpropagation through time with the adaptive minimization method Adam. Although the training method is not intended to be biologically plausible, a recent publication presents arguments that, under certain approaches, such learning could be plausible \cite{Lillicrap2020}.\n\nThe stimuli presented at the input of the networks, corresponding to the training sets, are time series containing rectangular pulses with random noise of 10\% of the pulse amplitude. The possible combinations presented are: two simultaneous pulses (one at each input), one pulse at either input, or no pulse, constituting the four possible binary combinations, as seen on the right side of Figure \ref{fig_01}. The target output completes the set and depends on which function the network is being taught (AND, OR, XOR, or Flip Flop).\n\n\begin{figure*}[htb!]\n\includegraphics[width=16cm]{figs\/01_modelo_.png}\n\caption{Model. 
In the training stage, the time series are fed into the network in the 4 possible combinations, constituting a set of 15000 noisy samples. The training algorithm adjusts the weights, according to the target function, to obtain the trained matrices $ \mathbf{W^{in}}$ and $ \mathbf{W^{Rec}}$ of each realization. \label{fig_01}}\n\end{figure*}\n\nNetworks of two different sizes were considered for the study: 50 and 100 units, the latter as a control case. With 50 units, the tasks can be learned in reasonable computational time and with good accuracy. Two types of initial conditions were considered for the recurrent matrices: random normal distribution and random orthogonal, the second being an additional constraint: the network is initialized with an orthogonal matrix obtained from the decomposition of a matrix of random numbers drawn from a normal distribution. \n\nAlthough successfully trained networks can also be obtained using the identity matrix for initialization, this initial condition is far from the random connectivity paradigm previously used.\n\n\section{Results \label{RESULTADOS NUMERICOS}}\n\nNetworks were trained to carry out all the mentioned tasks (AND, OR, XOR, and Flip Flop). Two different initial conditions were considered for the matrix of recurrent weights, as done in \cite{jarne2019detailed} and mentioned in the previous section. More than 20 networks were trained for each task and initial condition. The realizations obtained were studied and classified one by one. \n\nTo do this, a noise-free testing set, corresponding to the four possible binary options, was used to study the response of each network. First, the behavior of some units $k$ was plotted as a function of time ($h_k(t)$) for each of the possible stimulus combinations. The lower-left panel of Figure \ref{fig_02} shows the response of the set of $h_k(t)$ corresponding to a network trained for the AND task with a stimulus at one of its inputs. 
In this case, input A is elicited. After a stimulus at one input only, as expected, the network's output must remain in a \"Low\" state, since in the AND task the output only goes to a \"High\" state when both inputs receive a stimulus. \n\n\begin{figure*}[bth!]\n\includegraphics[width=14cm]{figs\/02_metodos.png}\n\caption{Methods. Upper left panel: weight distribution of $\mathbf{W^{Rec}}$. Upper right panel: eigenvalue distribution in the complex plane corresponding to the decomposition of the $\mathbf{W^{Rec}}$ matrix. Lower left panel: a possible combination of stimuli (High-Low) presented to the network and the temporal response of some units and the output. Lower right panel: singular value decomposition into the 3 main components or axes for the 50 units' states $h_k(t)$ and the considered period.\label{fig_02}}\n\end{figure*}\n\nA singular value decomposition was performed on the entire set of unit activities $h_i(t)$. The behavior of the system was plotted along the 3 axes of greatest variance. This is shown in the lower-right panel of Figure \ref{fig_02}.\n\nFor each realization, the distribution of the recurrent weights pre- and post-training was plotted, and the distribution moments were estimated in each case. Then, the decomposition of $\mathbf{W^{Rec}}$ into its eigenvectors and eigenvalues was obtained. An example for one network is presented in the upper part of Figure \ref{fig_02}. The left panel shows the distribution of the weight matrix with its moments. The right panel presents the distribution of the eigenvalues in the complex plane. The behavior is described in detail in Section \ref{dist_autovalores}.\n\nFrom inspecting the different realizations \textbf{[See Supplementary Information]}, some general observations associated with these systems emerge. 
These are explained below.\n\nThe first observation is that the recurrent weight distributions of the trained networks do not differ much from the pre-training ones. It is possible to compare the differences by studying the pre- and post-training moments of the distributions. The changes between the initial and final states of the distributions were studied through a linear regression, comparing it to the identity function and then considering the percentage variations.\n\nIt was observed that the variation of the post-training mean is less than 6\% for all the tasks, with a tendency to decrease with respect to the initial condition. Regarding the standard deviation, the variations are less than 0.5\%. The skewness and kurtosis increase slightly, by a maximum of 15\% in the worst case; in the case that varies least, the variation is less than 0.5\%. For full tables and details see \textbf{[Appendix \ref{A}]}. \n\nThe second observation is that when training the networks for the AND, OR, XOR, and Flip Flop tasks, similar configurations arise for the distributions of the eigenvalues, which will be described in more detail in Section \ref{dist_autovalores}.\n\nIf we carefully examine the realizations obtained and think in terms of the response of the network to the stimuli, it is possible to group AND and XOR as similar tasks, OR as a simpler one, and the Flip Flop as a slightly more sophisticated task related to AND and XOR.\n\nFirst, let us consider the case of the AND and XOR tasks. When certain combinations of stimuli appear at the input, the output must be activated. When other combinations appear, the activity of the output must be passivated. The appropriate combination of stimuli, for one case or the other, is given by the Boolean rule. \n\nBoth tasks have in common that, for no stimulus, the response must be zero. 
For AND, the combination that activates the output is High-High, and those that passivate it are High-Low and Low-High. The XOR function works in exactly the opposite way. \n\nThere are 3 general states of the system for both tasks: the resting state, a second state in which the stimuli produce a high-level output, and another one in which the elicited activities combine in such a way that the output is in a passive state, despite the stimulus at one input.\n\nThe OR task is simpler, in the sense that for any combination of stimuli presented at the input, the state of the output must be active (high-level). In the case of no stimulus, the output must be zero. For this task, there is no combination of stimuli that leads the output to be passivated, as in the previous case of the AND and XOR functions. There are only two possible general states for the system: the resting state and the state that activates the output.\n\nIn the case of the 1-bit Flip Flop, a stimulus at the input called \"S\" brings the system to the high-level state, while a stimulus at \"R\" takes the system to the passivated state. Two consecutive stimuli at \"S\" or at \"R\" should not generate changes in the system. \n\nThis task is more complex, since the changes depend on one specific input. One also has to consider that the system has to remain in the same state when applying two consecutive stimuli, meaning that the system must ignore the consecutive activation of the same input.\n\nIt is possible to summarize these ideas by saying: AND and XOR need to have at least two general modes associated with the possible states of the system, plus the rest state. The same holds for the Flip Flop, which also needs to remain unchanged under consecutive stimuli. 
OR needs to have at least one mode associated with the high-level state, plus the rest state.\n\nFrom the realizations, it is observed that the way each network manages to maintain the output state for which it was trained is not unique, as we have previously indicated in \cite{jarne2019detailed}. There are different ways to combine the network weights to obtain different internal states that result in the same required output rule. These lead to different behaviors in the dynamics.\n\n\subsection{Classification of the realizations} \label{classification}\n\nThe considerations of the previous section allow classifying all the realizations obtained in the simulations into different groups, starting from the observation of the behavior of $h_i(t)$ when each network is elicited with the four possible input combinations.\n\nLet us start with the case of the AND and XOR functions. Since there are at least two general modes associated with system states, consider first the passivated output mode. The following situations may occur:\n\n\begin{enumerate}\n\n\item When the stimulus arrives, the activities $h_i(t)$ start a transitory regime that ends in a sustained oscillation, each with a different phase and amplitude. The superposition is given by $\mathbf{W^{out}}$ and allows passivating the output. This is the example shown in Figure \ref{fig_02}.\n\n\item When the stimulus arrives, the $h_i(t)$ start a transitory regime that leads to a fixed level other than zero for each one, and whose superposition, given by $\mathbf{W^{out}}$, allows passivating the output.\n\n\item When the stimulus arrives, the $h_i(t)$ pass to a transitory regime that attenuates to zero, and the output is zero as a result of the attenuation of the $h_i(t)$.\n\n\end{enumerate}\n\nIf we now consider the mode of the excited output state, it is possible to have situations 1) and 2), but not 3). 
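The three situations enumerated above can be reproduced qualitatively with a toy discrete-time tanh map whose $2\times 2$ recurrent matrix has hand-picked eigenvalues (a minimal illustrative sketch; the trained 50-unit networks are of course not this simple):

```python
import numpy as np

def simulate(w, h0, steps=300):
    """Iterate the discrete-time tanh map h(t+1) = tanh(W h(t))."""
    h = np.array(h0, dtype=float)
    traj = [h.copy()]
    for _ in range(steps):
        h = np.tanh(w @ h)
        traj.append(h.copy())
    return np.array(traj)

def spiral(r, theta):
    """2x2 matrix with complex-conjugate eigenvalues r e^{+/- i theta}."""
    return r * np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

h0 = [0.1, 0.0]
osc   = simulate(spiral(1.5, 0.3), h0)  # 1) complex pair outside the unit circle: sustained oscillation
fixed = simulate(1.5 * np.eye(2), h0)   # 2) real eigenvalue outside the unit circle: nonzero fixed level
decay = simulate(spiral(0.8, 0.3), h0)  # 3) all eigenvalues inside the unit circle: attenuation to zero
```

In the first case the linear growth is saturated by the nonlinearity while the rotation keeps the phase advancing, yielding a sustained oscillation; in the second the activity settles at a nonzero fixed point; in the third it decays to the origin.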
In general, it is observed in the numerical simulations that the sustained oscillatory mode is more often associated with the passivated state of the output, as shown in Figure \ref{fig_02}.\n\nLet us illustrate this situation with the realization labeled $XOR\ \#id10$, represented in Figure \ref{fig_03}, where the excited output state appears as a fixed-point final state, while the passivated output appears as an oscillatory state.\n\n\begin{figure*}[tbh!]\n\includegraphics[width=8.85cm]{figs\/03_xor_10_a.png}\n\includegraphics[width=8.85cm]{figs\/03_xor_10_c.png}\n\includegraphics[width=8.85cm]{figs\/03_xor_10_b.png}\n\includegraphics[width=8.85cm]{figs\/03_xor_10_d.png}\n\n\caption{Upper panels: excited output states (fixed-point state) for either input stimulus for the XOR function. Lower panels: rest state (left) and passivated oscillatory state (right) of the output in response to two simultaneous stimuli. Realization with label $XOR\ \#id10$.\label{fig_03}}\n\end{figure*}\n\nThe possible combinations listed above correspond to what is observed across the different realizations: it is possible to have either an excited state with oscillatory behavior of the $h_i(t)$ or an excited state with a fixed point, and the same is true for the passivated state of the output.\n\nNow let us consider the OR task, as described before. In this case, there is at least one mode corresponding to any combination of stimuli. The situations that can occur are:\n\n\begin{enumerate}\n\n\item With any stimulus at the inputs, $h_i(t)$ passes from a transient to a fixed point.\n\n\item With any stimulus at the inputs, $h_i(t)$ goes from a transient to a sustained oscillation regime.\n\n\end{enumerate}\n\nThe case of zero output corresponds only to zero stimuli at the inputs. 
Figure \\ref{fig_05} shows the description of the situation corresponding to case 1.\n\n\\begin{figure*}[tbh!]\n\\includegraphics[width=8.85cm]{figs\/04_or_01_a.png}\n\\includegraphics[width=8.85cm]{figs\/04_or_01_b.png}\n\\includegraphics[width=8.85cm]{figs\/04_or_01_c.png}\n\\includegraphics[width=8.85cm]{figs\/04_or_01_d.png}\n\\caption{Excited output states (fixed-point state) for either input stimulus for the OR function. The case of zero output corresponds only to zero stimuli at the inputs. The network shown corresponds to the simulation with the label: $OR\\ \\#id01$\\label{fig_05}}\n\\end{figure*}\n\n\\subsection{A second stimulus}\\label{one second}\n\nIf, after a certain time, the network receives a second stimulus equal to the previous one (in one or both inputs), it is possible to classify the response of the system according to which was the previous input state and what is the task for which it was trained. \n\nFor example, let's consider the situation where the network is trained for the AND task and presents the passivated output state: In the case of receiving a second stimulus at both inputs, the network migrates to a new state, so the output goes to a high-level state (as seen in panel \\textbf{a)} of Figure \\ref{fig_07}). If it receives a single second stimulus, the system is disturbed, but it returns to the passivated condition (generally an oscillatory state) so that the output is set at zero, as seen in panel \\textbf{b)} of Figure \\ref {fig_07}.\n \nNow let's consider the case where the output is in a high-level state, and the system receives two simultaneous stimuli. In this case, the system is disturbed, but it remains at the high-level state, as shown in panel \\textbf{c)} of Figure \\ref{fig_07}. If the network receives a new stimulus (in one of the inputs only), the state to which it migrates depends on each particular realization, and it is not possible to classify the response in a general way. 
For the realization shown in panel \textbf{d)} of Figure \ref{fig_07}, the system goes to the passivated state.\n\n\begin{figure*}[tbh!]\n\includegraphics[width=17.7cm]{figs\/07_segundo_in_pos.png}\n\caption{A trained network for the AND task (corresponding to the realization with label $AND\ \#\ id15 $) is subjected to a second stimulus, identical to the first one, in one or two inputs. Each panel of the Figure shows one of the relevant situations described in Section \ref{one second}.\label{fig_07}}\n\end{figure*}\n\nIf the network receives a second stimulus with the opposite level of the first one (in one or two of its inputs), it is again possible to classify the response of the system according to the previous state. This is illustrated in Figure \ref{fig_08}. \n\nLet us consider the AND task. One possible state is to have the output at a low level, corresponding to the passivated state produced by a single previous stimulus (panels \textbf{a)} and \textbf{b)} of Figure \ref{fig_08}). As shown in panel \textbf{a)}, if the network receives two stimuli, the output migrates to a negative level. If it receives a single negative stimulus, the system is disturbed but remains in the passivated state, as shown in panel \textbf{b)} of Figure \ref{fig_08}.\n\nNow consider the system output being in a high-level state and receiving one (panels \textbf{e)} and \textbf{f)}) or two negative stimuli (panel \textbf{c)}). In both cases, the state of the output depends on the realization, and it is not possible to classify the response in a general way.\n\nIf the network is at a low level and receives two negative stimuli, it migrates to a negative state. This case is shown in panel \textbf{d)}. 
If the network receives a single negative stimulus, it migrates to the passivated state, as shown in the lower central panel of Figure \ref{fig_08}.\n\n\begin{figure*}[tbh!]\n\includegraphics[width=17.7cm]{figs\/07_segundo_in_neg.png}\n\includegraphics[width=8.85cm]{figs\/cero_pasivada_.png}\n\caption{A trained network for the AND task (corresponding to the realization with label $AND \#\ id15$) subjected to a second negative stimulus in one or two inputs. Each panel of the Figure shows one of the relevant situations described in Section \ref{one second} and the corresponding behavior of the system.\label{fig_08}}\n\end{figure*}\n\n\subsection{The Flip Flop}\label{FF}\n\nThe Flip Flop case is more difficult to analyze. However, when observing the response of the networks to a second positive stimulus, it is possible to detect the different situations that could arise in favor of having a Flip Flop. The high level of the output, which corresponds to a transient, could migrate either to a fixed point or to a sustained oscillation until the stimulus on the other input changes its value; alternatively, a stimulus on the same input disturbs it slightly with noise but allows the system to keep a sustained state. \n\nThis situation is shown in Figure \ref{fig_10}, where the output is shown for one of the realizations obtained, corresponding to the network with label $FF\ id \# 05 $. Two consecutive stimuli at the Set input are shown, followed by another at the Reset input.\n\nIn a Flip Flop it is necessary that, when stimulating the \"R\" input, the system migrates to a fixed-point or an oscillatory state, corresponding to the passivated output state. By stimulating the \"S\" input, the system must similarly migrate to an active state. 
The system must also have a mechanism that allows ignoring consecutive stimuli at the same input.\n\n\begin{figure*}[tbh!]\n\includegraphics[width=14cm]{figs\/05_ff_16.png}\n\caption{Example of one simulation performed with a trained network for the Flip Flop task. This case corresponds to the network with label $FF\ id \# 05 $. The states of the SET and RESET inputs are shown as a function of time. The outputs and the temporal evolution of the activity of some units are also shown. On the right panel, the singular value decomposition is presented.\label{fig_10}}\n\end{figure*}\n\n\subsection{The eigenvalue distributions of the realizations}\label{dist_autovalores}\n\nFrom the analysis of the trained networks, a third observation emerges: there are regular patterns in the distribution of the eigenvalues of the recurrent matrix. This happens for trained networks initialized with the random normal condition as well as for those trained starting from the random orthogonal condition.\n\nThese patterns can be characterized. Trained networks have eigenvalue distributions very similar to the initial (pre-training) condition, but with some eigenvalues outside the unit circle. \nLet us consider, for example, the initial condition of the example previously presented in Figure \ref{fig_03} for the XOR function, and compare it with the trained network. This is shown in Figure \ref{fig_11}.\n\n\begin{figure*}[tbh!]\n\includegraphics[width=7.5cm]{figs\/06_xor_10_ini.png}\n\includegraphics[width=7.5cm]{figs\/06_xor_10_trained.png}\n\caption{Comparison between the eigenvalue distributions corresponding to the pre-training and post-training conditions for the network previously considered in the example shown in Figure \ref{fig_03}. The left panel shows the orthogonal condition reflected in the distribution of the eigenvalues. The right panel shows the result after training. 
A few eigenvalues migrate out of the unit circle. \label{fig_11}}\n\end{figure*}\n\nThe figure shows that, except for a small group of eigenvalues that migrated out of the unit circle, the rest remain on it. This situation is repeated in all the simulations obtained \textbf{[See Supplementary Information]}. From these observations, it is proposed that the eigenvalues outside the unit circle are directly related to the modes of $h_i(t)$ that configure the possible states of the output, as we have suggested in \cite{jarne2019detailed}, which is also compatible with the observations made in \cite{doi:10.1162\/neco.2009.12-07-671}.\n\nThe location of the eigenvalues outside the unit circle seems to be related to the behavior (or mode) observed for the different stimuli discussed in the previous section. Indeed, for all the realizations obtained for the different tasks, it was possible to link the position of the eigenvalues with the approximate behavior of the unit activities $h_i(t)$.\n\nFigure \ref{fig_12} shows the distributions of the eigenvalues for all the realizations presented in previous sections as illustrative examples.\n\n\begin{figure*}[tbh!]\n\includegraphics[width=18cm]{figs\/eigen__.png}\n\caption{Distributions of eigenvalues in the complex plane for the realizations used to exemplify the different modes obtained as a result of training and initial conditions. It is observed that the dominant eigenvalues outside the unit circle can be real or complex. In Section \ref{DISCUSION}, the different situations are discussed and the link with the behavior is explained.\label{fig_12}}\n\end{figure*}\n\nIn Section \ref{DISCUSION}, it is argued why the analysis of the recurrent weight matrix allows a good approximate description of the different modes obtained for each realization and stimulus type. 
But first, let us classify the different eigenvalue distributions of the realizations and relate them to the results presented in Section \ref{classification}.\n\nLet us consider the AND and XOR tasks. For these tasks it is mostly observed that the $\mathbf{W^{Rec}}$ matrices present 3 eigenvalues outside the unit circle. One is usually a real eigenvalue, and the other two constitute a complex conjugate pair. Different cases can occur in this frequent situation; these are described below.\n\nThe fixed level of activity $h_i(t)$ is usually associated with the excited level of the output, while the complex conjugate pair is usually associated with the passivated level. Exceptionally, it is possible to observe a few cases where it is the other way around. It is also observed that the oscillation frequency of $h_i(t)$ always correlates with the angle in the complex plane defined by Equation \ref{ang}.\n\n\begin{center}\n\begin{equation}\n \theta= \arctan \left(\frac{Im(\lambda_L)}{Re(\lambda_L)}\right)\n\label{ang}\n\end{equation}\n\end{center}\n \nHere $\theta$ is measured with respect to the positive real semi-axis, and $\lambda_L$ is the dominant complex eigenvalue outside the unit circle (non-zero imaginary part). Small angles correspond to slower oscillations of the activity $h_i(t)$, while larger angles correspond to faster oscillations, as is also shown in \textbf{[Supplementary Information]}. \n\nWhen the eigenvalues outside the unit circle are purely real (a rare situation, in which there are usually 2 or 3 eigenvalues outside the unit circle), the states of the $h_i(t)$ correspond to non-zero sustained fixed levels. 
This happens for both the passivated and the excited output.\n\nWhen there are 2 purely real eigenvalues outside the unit circle, but one lies on the negative semi-axis, a fixed-level mode appears for the $h_i(t)$ together with another mode with very fast oscillations \textbf{[See Supplementary Information]}.\n\nExceptionally, some trained networks have more than one complex conjugate pair. In this case, the oscillatory behavior is usually more complex, but it seems to be dominated by the eigenvalue most distant from the unit circle. In cases of high-frequency oscillations, modulations of the levels of $h_i(t)$ can also be observed \textbf{[See Supplementary Information]}.\n\nLet us consider the results obtained for the OR task. In this case, as mentioned in Section \ref{classification}, it is enough to have one general mode for the activity of the units, since the system only needs the rest state and the excited state of the output; there is no passivated state in this task. In the case of matrices with the orthogonal initial condition, most configurations have 3 eigenvalues outside the unit circle: the complex conjugate pair and the purely real eigenvalue. In the case of random normal matrices, it is most common to have only one purely real eigenvalue. \n\nThis difference between the two conditions appears because, when the eigenvalues are located on the edge of the circle (orthogonal initial condition), it is less difficult for the training algorithm to move a complex conjugate pair outside the unit circle. Whereas, if the initial condition is random normal, it is somewhat more computationally expensive to push more than one eigenvalue out, since they are more likely located further from the edge.\n\nDepending on the proximity to the edge, it is possible to have configurations with a single mode or two. 
In the case of having two, the stimuli generally elicit the mode corresponding to the purely real eigenvalue, since the $h_i(t)$ go from the transitory state to a fixed level. This is consistent with the previous observation in the AND and XOR tasks, where the oscillatory state usually corresponds to the passivated output level, a state that does not occur for any combination of stimuli in the OR task.\n\nLet us consider the Flip Flop task. For this task, the minimal situation for the system to fulfill the task is analogous to what happens in networks that learned the AND or XOR tasks: for a given combination, the network must be able to reach the passivated state of the output.\n\nThe cases obtained in this work can be classified into similar categories as before. Nevertheless, this task has an additional complexity related to the distance between consecutive stimuli and the capacity of the system to pass from the transient to the steady state between stimuli.\n\nIn most situations, a fixed-point state corresponding to the real eigenvalue appears together with a complex conjugate pair, the latter generally related to the passivated state of the output.\n\n\section{Discussion \label{DISCUSION}}\n\n\nTo interpret the results obtained in the realizations classified in previous sections, let us begin by making some approximations that will allow us to understand the behavior of the $h_i(t)$.\n\nIf the units operate away from the saturation regime, we can linearize the system, which allows an approximate description of the long-term dynamics. 
This will allow us to associate our observations with some well-known results.\n\nFrom Equation \ref{eq-01} we can consider the linear model given by Equation \ref{eq-lin}, using the first-order Taylor expansion of $\tanh()$.\n\n\begin{equation}\n\frac{dh_i(t)}{dt}=-h_i(t)+\sum^N_{j=1}w^{Rec}_{ij}h_j(t)+\mathbf{I(t)} h_{0,i}\n\label{eq-lin}\n\end{equation}\n\nIn the absence of external input, the system has a single fixed point that corresponds to $h_i = 0$ for all units $i$. We can write the external input as a time-dependent component $\mathbf{I(t)}$ and a term $h_{0,i}$ that corresponds to the activation of each unit. Let us then consider an N-dimensional vector $h_0$ and approximate the input pulse $\mathbf{I(t)}$ by a delta function. This means that the duration of the pulse is short with respect to the length of the considered time series, as is our case. In addition, the norm of $h_0$ is 1, which is equivalent to saying $h(0) = h_0$.\n\nThe solution of the system given by Equation \ref{eq-lin}, following \cite{9910223127702121,10.5555\/1205781,10.1371\/journal.pcbi.1007655}, is obtained by diagonalizing the system and making a change of basis of the vector $\mathbf{h}$ such that:\n\n\begin{equation}\n\mathbf{h}=\mathbf{V} \mathbf{\tilde{h}} \n\end{equation}\n\nThen, it is possible to write the connectivity matrix $\mathbf{W^{Rec}}$ in a diagonal basis: $\mathbf{V}$ contains the eigenvectors $v_i$ as columns, and the matrix $\mathbf{\Lambda}$ has the eigenvalues $\lambda_i$ on the diagonal, as shown in Equation \ref{eq-lin-2}.\n\n\begin{equation}\n\mathbf{W^{Rec}} \rightarrow \mathbf{\Lambda}=\mathbf{V^{-1}W^{Rec}V}\n\label{eq-lin-2}\n\end{equation}\n\nThis is used to decouple the equations. 
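The change of basis in Equation \ref{eq-lin-2} can be checked numerically; a quick sketch with an arbitrary random matrix (not a trained one):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
w = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))

lam, v = np.linalg.eig(w)               # eigenvalues and eigenvectors (columns of V)
Lam = np.linalg.inv(v) @ w @ v          # Equation (eq-lin-2)
off_diag = Lam - np.diag(np.diag(Lam))  # should vanish up to round-off
```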
We now write the decoupled equations for the components $\\tilde{h}_i$ of the vector in the new basis, as in Equation \\ref{eq-lin-3}:\n\n\\begin{equation}\n\\frac{d\\tilde{h}_i(t)}{dt}=-\\tilde{h}_i(t)+\\lambda_i\\tilde{h}_i(t)+\\delta(t)\\cdot\\tilde{h}_{0,i}\n\\label{eq-lin-3}\n\\end{equation}\n\nIn this way we obtain the solution for $\\mathbf{h}$ in terms of the $\\tilde{h}_i$\n\n\\begin{equation}\n\\mathbf{h}(t)=\\sum^N_{i=1}\\tilde{h}_i(t)\\mathbf{v_i}\n\\label{eq-lin-4}\n\\end{equation}\n\nwith\n\\begin{equation}\n\\tilde{h}_i(t)=\\tilde{h}_{0,i}\\,e^{t(\\lambda_i-1)}\n\\label{eq-lin-5}\n\\end{equation}\n\nThus, the long-term dynamics is governed by the eigenmodes whose eigenvalue (or eigenvalues) has the largest real part. This is observed to hold for all the realizations obtained in this work, since this state always corresponds to one of the responses to the combinations of stimuli, be it the active or passivated, oscillatory or fixed-point output. In fact, for the realizations that have complex dominant eigenvalues, if we numerically estimate the frequency of oscillation of the activity $h_i$ in the oscillatory states, it is approximately:\n\n\\begin{equation}\nf= \\frac{Im(\\lambda_{max})}{2 \\pi}\n\\label{eq-lin-6}\n\\end{equation}\n\nwhich follows from Equation \\ref{eq-lin-5} and is consistent with estimates made in \\cite{10.1371\/journal.pcbi.1006309, PhysRevE.88.042824}.\n\nThe connectivity matrix of a trained network is not normal, so the previous analysis is not fully complete. Although the matrices obtained in the simulations are approximately normal, since they do not deviate much from the orthogonal initial condition after training (see Appendix \\ref{B}), they are sufficiently non-normal for an amplified transient to appear that leads the system from the initial condition to the long-term dynamics observed. This happens for all realizations (see Appendix \\ref{B} for more details).
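The quantities involved in this analysis are easy to compute numerically. The sketch below (Python with NumPy, using a hypothetical recurrent matrix, not one of the trained networks) extracts the dominant eigenvalue, the oscillation frequency implied by Equation \\ref{eq-lin-5} for a complex dominant mode, Henrici's departure from normality and the largest eigenvalue of the symmetric part, both discussed below:

```python
import numpy as np

# Sketch with a hypothetical recurrent matrix (not a trained network):
# long-term dynamics of the linearized system dh/dt = -h + W h.
rng = np.random.default_rng(0)
N = 50
W = 0.01 * rng.standard_normal((N, N))     # weak random background
# Embed a 2x2 rotation scaled by 1.2 so that the dominant
# eigenvalues are the complex pair 1.2*exp(+/- i*0.3).
theta = 0.3
W[:2, :2] = 1.2 * np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])

lam = np.linalg.eigvals(W)
lam_max = lam[np.argmax(lam.real)]

# Each mode evolves as exp(t*(lambda_i - 1)), so the eigenvalue with
# the largest real part dominates at long times; if it is complex,
# the activity oscillates with frequency |Im(lambda_max)| / (2*pi).
f_osc = abs(lam_max.imag) / (2.0 * np.pi)

# Henrici's departure from normality, normalized by the Frobenius norm.
fro = np.linalg.norm(W, 'fro')
d_F = np.sqrt(max(fro**2 - np.sum(np.abs(lam)**2), 0.0)) / fro

# Amplified-transient condition: largest eigenvalue of the
# symmetric part (W + W^T)/2 greater than 1.
lam_sym_max = np.linalg.eigvalsh(0.5 * (W + W.T)).max()
print(round(f_osc, 3), round(d_F, 3), lam_sym_max > 1.0)
```

For this toy matrix the dominant pair has real part above 1, so the amplification condition holds by construction; for the trained networks the same quantities are obtained from the learned $\\mathbf{W^{Rec}}$.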
\n\nThe departure of the matrix from normality can be estimated through Henrici's departure from normality, obtained as in Equation \\ref{eq-lin-7}.\n\n\\begin{equation}\nd_F(\\mathbf{W^{Rec}})=\\frac{\\sqrt{||\\mathbf{W^{Rec}}||^2-\\sum^N_{i=1}|\\lambda_i|^2}}{||\\mathbf{W^{Rec}}||}\n\\label{eq-lin-7}\n\\end{equation}\n\nwhere $||\\cdot||$ denotes the Frobenius norm and, for normalization, we divide by the norm of the matrix.\n\nThe long-term dynamics was obtained above through linearization. It is the departure from normality that leads the system from equilibrium to the final state and makes more complex activity patterns appear \\cite{Asllanieaau9403}. In some realizations, for example, high-frequency oscillations appear that sometimes include modulations.\n\nThe observed transient can also be related to the norm of $\\mathbf{h}(t)$. This norm is the Euclidean distance between the equilibrium point of the system and the activity at time $t$. It is estimated as:\n\n\n\\begin{equation}\n||\\mathbf{h}(t)||=\\sqrt{\\sum^N_{i=1} \\tilde{h}_i(t)^2+2\\sum^N_{i>j}\\tilde{h}_i(t)\\tilde{h}_j(t)\\,\\mathbf{v_i}\\cdot\\mathbf{v_j}}\n\\label{eq-lin-8}\n\\end{equation}\n\nThis magnitude has been previously studied as an amplification mechanism in neural signals \\cite{10.1371\/journal.pcbi.1007655}, where the authors study the change, or the slope, of the norm of $\\mathbf{h}(t)$ and the conditions for the appearance of amplified trajectories like the ones observed in the present work. They state that the necessary condition for having amplified trajectories involves the eigenvalues of the symmetric part of the matrix $\\mathbf{W^{Rec}}$, estimated as in Equation \\ref{eq-lin-9}.
This condition is that the maximum eigenvalue of the symmetric part of the matrix must be greater than 1.\n\n\\begin{equation}\n \\mathbf{W^{Rec}_{sym}}=\\frac{1}{2}(\\mathbf{W^{Rec}+{W^{Rec}}^T})\n\\label{eq-lin-9}\n\\end{equation}\n\nLet us remember that symmetric matrices have all their eigenvalues real. For all the realizations in the simulations, the maximum eigenvalue of the symmetric matrix is always greater than 1, therefore the condition for the existence of transients is guaranteed. According to \\cite{10.1371\/journal.pcbi.1007655}, only some specific initial conditions $h_{0,i}$ will be amplified, which is consistent with the observation that, when networks are elicited with different amplitude values for the input pulse, there is an amplitude limit beyond which the trajectories are no longer amplified.\n\nIn the realizations obtained, a transient ending in a sustained oscillation, or one going to a fixed point different from zero, is always observed. The exceptions are tasks with a passivated output state, for which attenuation is observed.\n\nIn general, when there are eigenvalues lying outside the unit circle, either with real part less than 1 or with negative real part, the behavior of the system is to present rapid oscillations. In those cases, the system also seems to be governed by the set of eigenvalues outside the unit circle, since the modes that remain within it tend to attenuate the transients.\n\n\n\\section{Conclusions \\label{CONCLUSIONS} }\n\nConsidering the analysis made above, we can highlight some aspects of the results obtained in this study. First, networks trained for these four tasks (AND, XOR, OR and Flip Flop) show consistent patterns and are not stable systems, which in principle is not an unexpected situation.
The classification of the set of tasks proposed here and of their dynamics is also interesting because these tasks could constitute possible mechanisms for controlling the flow of information in the cortex.\n\nOn the other hand, backpropagation through time without any regularization term allows networks to be trained for the same task in a non-unique way. Different realizations for the same task are obtained with different dynamical behaviors, and the networks obtained are generally non-normal \\cite{sengupta2018robust}.\n\nLinearization was a useful tool to understand the first-order behavior of the system, so that the eigenvalue decomposition of the recurrent weight matrix is an observable that characterizes the behavior of the networks in these tasks.\n\nThe results obtained support the hypothesis that the trained network represents the information of the tasks in a low-dimensional dynamics implemented in a high-dimensional network or structure \\cite{BARAK20171}, as also reported in \\cite{Kuroki2018}.\n\nThe neural network model studied in this work, as described in Section \\ref{MODELO}, is widely used to describe experimental results in different experiments and areas of neuroscience, for example in motor control \\cite{doi:10.1152\/jn.00467.2018}. In particular, analyses of the cerebral cortex show complex temporal dynamics \\cite{SUSSILLO2014156, DBLP:journals\/neco\/SussilloB13, nature_01, nature_com}, where different mechanisms of control of the information flow could be present and coexist. For this reason, knowing the details of the model's dynamics is important to understand the observed experimental results with greater precision.\n\nFuture extensions of the present work will include the distinction between excitatory and inhibitory neurons.\n\n\\section{Introduction}\n\nOpen clusters (OCs) are very important components of our Galaxy.
A detailed knowledge of their properties, such as distance, age, metallicity or reddening, is necessary for a proper understanding of the structure and evolution of the Milky Way \\citep[see e.g. the review by][]{Kru19}. With the advent of ESA's {\\it Gaia} space mission \\citep{Gai16}, its second data release \\citep[DR2,][]{Gai18} and, more recently, the Early Data Release 3 \\citep[EDR3,][]{Gai20}, astronomers have a homogeneous source of data with unprecedented astrometric precision and accuracy that has made it possible to increase the census of known OCs and to improve the determinations of their properties. In recent years, the number of papers using data from {\\it Gaia} and different techniques (from simple visual inspection to supervised or unsupervised machine learning techniques) and reporting the discovery of new OCs has increased markedly \\citep{Cas18,Can19,Fer19,Liu19,Sim19,Cas20,Casado21,He21,Fer21,Hun21,Xia21}.\nAn important requirement to be able to carry out reliable studies of the Galactic cluster population is that OC parameters are derived in a homogeneous manner. Inhomogeneous analyses combining different data sets and\/or methods may lead to discrepant or biased results, as has been noted by \\citet{Net15} and \\citet{Car17}. In this sense, the high quality of the {\\it Gaia} data allows the systematic and homogeneous derivation of OC parameters with very good precision \\citep[e.g.][]{Sou18,Bos19,Can20a,Can20b,Aga21,Tar21}.\n\nThere are, however, two important issues that must be taken into account when deriving cluster parameters, especially when performing massive data processing. The first one refers to the size of the sample used to extract the data to work with. This is a very important first step that is usually decided prior to the data processing.
As discussed in \\citet{San10}, if the radius of the circular area around the cluster used to extract the data (the sampling radius, $R_s$) is much smaller than the actual cluster radius ($R_c$), then the cluster is obviously subsampled. In order to avoid this situation, usually a ``large enough\" $R_s$ is selected. The problem is that if $R_s \\gg R_c$ then the contamination by field stars affects the determination of memberships and, consequently, the determination of the cluster properties, including the radius itself \\citep{San10}. Ideally, a relatively high $R_s$ value should not affect the membership determination but, depending on the membership assignment criteria and\/or the measurement errors, the contamination by field stars can become significant and, in any case, it tends to be higher as $R_s$ increases. Without previous knowledge of an approximate value of the cluster radius it is difficult to choose the optimal sampling radius $R_s \\simeq R_c$. This is probably one of the reasons for the large discrepancies in $R_c$ reported in the literature because, independently of the membership assignment method used, selected members tend to be spread over the sampled area and the a posteriori determination of $R_c$ based, for instance, on a radial star density profile can be biased by the previous selection of $R_s$ \\citep[see discussions in][]{San10,San18,San20}. For the OC presented in this work (Alessi-Teutsch~9), previously calculated radius values range from $25.7$~arcmin \\citep{Sam17} to $31.8$~arcmin \\citep{Kha13}, but more discrepant values can be found for other clusters \\citep[see for instance Fig.~5 in][]{San20}. \n\nThe second relevant consideration relates to the limitation of using only {\\it Gaia} photometry, which may be motivated in part by the high quality (and also homogeneity) of the {\\it Gaia} data.
However, using only the relatively broad {\\it Gaia} bands can result in degeneracy problems and\/or lead to strong systematics when inferring OC parameters \\citep{And18}. The obvious solution is to complement the {\\it Gaia} photometry with multiband photometry from other surveys such as SDSS \\citep{Yor00} or 2MASS \\citep{Skr06}, as done, for example, in \\citet{Tad18}. The larger the set of photometric bands used, the more accurate the estimated stellar parameters, without the need to resort to spectroscopy; but, as mentioned above, some caution is required when combining several passbands from different photometric systems.\n\nBased on these considerations, we have started a project to observe and characterize OCs using the 80~cm JAST80 telescope at the Observatorio Astrof\\'isico de Javalambre (OAJ) in Teruel, Spain. The telescope has a panoramic camera (T80Cam) with a large-format CCD of $9200 \\times 9200$ pixels providing an effective field of view of $1.4 \\times 1.4$ deg$^2$, which is particularly useful for wide-field OCs. Moreover, it hosts a set of $12$ optical filters, originally defined for the J-PLUS survey \\citep{Cen19}, spanning the entire optical range. Here we report the results of the pilot campaign for the cluster Alessi-Teutsch~9\\ \\citep{Dia02,Kha05}, also known as ASCC~10 and MWSC~275 \\citep{Bic19}. Alessi-Teutsch~9, located in the second Galactic quadrant ($l = 155.61$ deg, $b=-17.79$ deg) \\citep{Kha05}, is a relatively large cluster for which apparent radii in the range $\\sim 25-32$ arcmin were reported in works prior to the {\\it Gaia} era \\citep{Kha05,Kha13,Dia14,Sam17}.
The mean cluster proper motion, based on less precise catalogues such as PPMXL \\citep{Roe10} or UCAC4 \\citep{Zac13}, showed discrepant values in the ranges $\\sim (-3.4)\\ -\\ (+1.7)$ mas~yr$^{-1}$ in right ascension and $\\sim (-2.2)\\ -\\ (-1.0)$ mas~yr$^{-1}$ in declination \\citep{Kha05,Kha13,Dia14,Sam17,Can18a} and, consequently, the number of assigned members has varied in the range $\\sim 10^2-10^3$. More recently, using data from {\\it Gaia} DR2, \\citet{Can18b} \\citep[see also][]{Can20a,Can20b} located the cluster proper motion centroid at $(-1.737,-1.368)$ mas~yr$^{-1}$ with a total of $71$ probable members. The method used by \\citet{Can20a} applies $k$-means clustering to search for groupings in proper motions and parallaxes. As other authors have done, a relatively high initial sampling radius was intentionally assumed and, even though they reported that the radius containing half the members is $33.5$ arcmin, the assigned cluster members spread over a radius of $\\sim 70$ arcmin.\n\nIn this work we combine $12$-band photometry from OAJ for Alessi-Teutsch~9\\ with {\\it Gaia} EDR3 data to show (1) the importance of a {\\it previous} identification of the optimal sampling radius and (2) the advantage of incorporating multiband photometry for a precise and accurate estimation of OC properties. Section~\\ref{sec_data} describes the observations and the data sets used.
In Section~\\ref{sec_sampling} we determine the optimal sampling radius, which is used in Section~\\ref{sec_members} to define reliable membership criteria and to select the final list of cluster members. Spectral energy distributions for the members, based on OAJ photometry, are derived in Section~\\ref{sec_seds}. The cluster properties are presented and discussed in Section~\\ref{sec_properties} and, finally, the main conclusions are summarized in Section~\\ref{sec_conclusions}.\n\n\\section{Observations and data}\n\\label{sec_data}\n\n\\subsection{OAJ}\n\nObservations were made on February 23 and 24, 2020, with the T80Cam panoramic camera attached to the JAST80 telescope at OAJ \\citep{oaj}. The total field of view is $1.4 \\times 1.4$ deg$^2$ and the pixel scale is $0.55$ arcsec\/pixel. The set of filters covers the optical range, from 3500 to 10000 \\AA\\ \\citep[see][]{Cen19}; they are listed in Table~\\ref{tabOAJ}, which indicates the central wavelength and the full width at half maximum ($FWHM$). The log of the observations (number of exposures $N_{exp}$ and total exposure time $T_{exp}$) is also provided in Table~\\ref{tabOAJ}.\n\\begin{table}[t]\n\\caption{List of filters and log of observations for Alessi-Teutsch~9.}\n\\label{tabOAJ}\n\\begin{tabular}{@{}ccccc}\n\\hline\nName & $\\lambda$ (\\AA) & $FWHM$ (\\AA) & $N_{exp}$ & $T_{exp}$ (s) \\\\\n\\hline\nuJAVA & 3485 & 508 & 12 & 603 \\\\\nJ0378 & 3785 & 168 & 12 & 813 \\\\\nJ0395 & 3950 & 100 & 12 & 843 \\\\\nJ0410 & 4100 & 200 & 12 & 408 \\\\\nJ0430 & 4300 & 200 & 12 & 378 \\\\\ngSDSS & 4803 & 1409 & 12 & 93 \\\\\nJ0515 & 5150 & 200 & 22 & 631 \\\\\nrSDSS & 6254 & 1388 & 18 & 204 \\\\\nJ0660 & 6600 & 138 & 12 & 588 \\\\\niSDSS & 7668 & 1535 & 12 & 138 \\\\\nJ0861 & 8610 & 400 & 12 & 978 \\\\\nzSDSS & 9114 & 1409 & 12 & 528 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\nIn total there were $160$ images of the cluster Alessi-Teutsch~9\\ in the $12$ available bands.
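As a quick consistency check, the total number of images quoted above follows from the per-filter exposure counts in Table~\\ref{tabOAJ}:

```python
# Per-filter exposure counts from Table 1 (uJAVA ... zSDSS).
n_exp = [12, 12, 12, 12, 12, 12, 22, 18, 12, 12, 12, 12]
t_exp = [603, 813, 843, 408, 378, 93, 631, 204, 588, 138, 978, 528]
print(sum(n_exp))  # -> 160, the total number of images quoted in the text
```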
Data reduction and photometric calibration were performed by the Data Processing and Archiving Unit (UPAD) team \\citep{upad,Cen19}.\n\n\\subsection{\\textbf{\\textit{Gaia}} EDR3 data}\n\nWe queried the {\\it Gaia} EDR3 database around the cluster center\\footnote{The exact center is irrelevant because it is recalculated from the final member selection.} at RA(J2000)=51.870 deg and Dec=34.981 deg \\citep{Can20a}. All the available astrometric and photometric information was extracted using different sampling radii according to the proposed strategy (see Section~\\ref{sec_sampling}). It is worth pointing out that we did not apply any magnitude cut or filtering on data quality, unlike \\citet{Can20a}, who were limited to stars brighter than $G=18$.\n\n\\section{Optimal sampling radius}\n\\label{sec_sampling}\n\nIn previous works \\citep{San10,San18,San20} we have been developing and improving a method for objectively calculating the optimal $R_s$ value for OCs. The method is based on the behavior of the stars in the proper motion space as $R_s$ increases and it is not affected by how the cluster stars are spatially distributed. For a given $R_s$ value, the first step is to identify the cluster overdensity in the proper motion space. For this, radial density profiles are derived for all the available stars in the proper motion space, that is, assuming each star in turn as the centre of the overdensity. In each case, an overdensity ``edge\" is also determined as the point at which the profile changes from a steep to a shallow slope. At this point, average densities are calculated both for the overdensity (the circular region from the starting point to the edge) and for the ``local field\" (a ring adjacent to the overdensity region). The ring width is chosen such that its area equals the overdensity area, with the condition that the ring contains a minimum of $100$ stars.
These calculations are done for all the starting points (stars) and the cluster overdensity is identified as the one having the highest density contrast between the overdense region and the local field. The core of the method, in its latest version, consists of measuring how much the density of stars in the cluster overdensity increases ($\\Delta D_{od}$) as $R_s$ increases. This change is compared with the change in the local field density ($\\Delta D_{lf}$), for which a ring surrounding the overdensity is used. The key point is the assumption that $\\Delta D_{od} > \\Delta D_{lf}$ when $R_s < R_c$, because both field stars and cluster stars are being included inside the overdensity. However, when $R_s \\gtrsim R_c$ we expect that $\\Delta D_{od} \\simeq \\Delta D_{lf}$ \\citep[see details in][]{San20}.\n\nThe application of this method to Alessi-Teutsch~9\\ using data from the {\\it Gaia} EDR3 catalogue gives the result shown in Figure~\\ref{fig_eta}.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig1.eps}\n\\caption{Ratio between the density variation in the proper motion overdensity ($\\Delta D_{od}$) and the density variation in the local field region ($\\Delta D_{lf}$) as a function of the sampling radius ($R_s$) around the open cluster Alessi-Teutsch~9. Red symbols refer to the full sample and blue symbols to stars having proper motion errors smaller than $0.3$ mas~yr$^{-1}$ in right ascension or declination. Blue error bars are estimated by assuming Poisson statistics whereas the grey area indicates the uncertainty associated with local field variations (errors for the red symbols are not shown for clarity).
\nThe black horizontal line, shown as reference, corresponds to the case $\\Delta D_{od}=\\Delta D_{lf}$ expected when $R_s$ reaches the actual cluster radius $R_c$.}\n\\label{fig_eta}\n\\end{figure}\nDirect application to the full data set (red symbols) does not yield a clear result, probably because of the relatively low density of stars in Alessi-Teutsch~9\\ compared to the field. Normally, this kind of problem can be solved by applying some kind of data filtering. At this point it is not necessary to use the full sample of stars because the goal is not yet to find members but only to make a reliable estimate of the cluster radius. If we rule out stars with errors in proper motion higher than $0.3$ mas~yr$^{-1}$ (the median of the error in the {\\it Gaia} EDR3 catalogue for this sample), then the result improves markedly (blue symbols in Figure~\\ref{fig_eta}) and the point at which $\\Delta D_{od} \\simeq \\Delta D_{lf}$ becomes apparent. The valley in the curve observed around $R_s \\sim 20$ arcmin could be related to variations in the spatial distribution of cluster stars, but this cannot be confirmed until the final selection of members has been completed. According to the previous discussion, it is clear that the cluster radius is close to $\\sim 40$ arcmin and, therefore, this would be the optimal sampling radius.\n\n\\section{Cluster member selection}\n\\label{sec_members}\n\nFrom this point on, we remove any filtering and retrieve all the stars available in the {\\it Gaia} EDR3 catalogue. Even though the cluster radius seems to be slightly smaller than $40$ arcmin (Figure~\\ref{fig_eta}), here we use a conservative sampling radius of $R_s= 40$ arcmin.
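The $\\Delta D_{od}$ versus $\\Delta D_{lf}$ criterion applied above can be sketched as follows. This is a simplified reimplementation on synthetic data: the field, the cluster and the overdensity definition are hypothetical stand-ins, not the actual pipeline of \\citet{San20}.

```python
import numpy as np

rng = np.random.default_rng(1)

def density_changes(xy, pm, rs1, rs2, pm_centre, pm_radius):
    """Change of the star density in the proper-motion overdensity and
    in the local-field ring when the sampling radius grows rs1 -> rs2."""
    def densities(rs):
        sel = np.linalg.norm(xy, axis=1) < rs          # spatial sampling
        d = np.linalg.norm(pm[sel] - pm_centre, axis=1)
        area = np.pi * pm_radius**2                    # overdensity area
        n_od = np.sum(d < pm_radius)
        # adjacent ring of equal area: pm_radius <= d < pm_radius*sqrt(2)
        n_lf = np.sum((d >= pm_radius) & (d < pm_radius * np.sqrt(2.0)))
        return n_od / area, n_lf / area
    od1, lf1 = densities(rs1)
    od2, lf2 = densities(rs2)
    return od2 - od1, lf2 - lf1

# Field stars: uniform positions (arcmin), broad proper motions (mas/yr).
xy_f = rng.uniform(-60, 60, (4000, 2))
pm_f = rng.normal(0.0, 3.0, (4000, 2))
# Cluster: uniform disc of radius 35 arcmin, tightly clustered proper motions.
r = 35.0 * np.sqrt(rng.uniform(0, 1, 150))
phi = rng.uniform(0, 2 * np.pi, 150)
xy_c = np.column_stack([r * np.cos(phi), r * np.sin(phi)])
pm_c = rng.normal([-1.8, -1.4], 0.1, (150, 2))

xy = np.vstack([xy_f, xy_c])
pm = np.vstack([pm_f, pm_c])
centre = np.array([-1.8, -1.4])

d_od_in, d_lf_in = density_changes(xy, pm, 20.0, 35.0, centre, 0.3)
d_od_out, d_lf_out = density_changes(xy, pm, 35.0, 50.0, centre, 0.3)
# While Rs < Rc the overdensity grows much faster than the local field;
# beyond Rc both grow at a similar, much slower rate.
print(d_od_in > d_lf_in, d_od_out < d_od_in)
```

With these synthetic parameters the density change in the overdensity drops sharply once the sampling radius exceeds the true cluster radius, which is the behaviour used to read $R_c$ off Figure~\\ref{fig_eta}.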
With this sampling we make a first iteration to determine candidate cluster members based on their proper motions, parallaxes and photometric properties.\n\n\\subsection{Kinematic candidates}\n\nApart from determining the optimal $R_s$, the algorithm also yields the position of the overdensity in the proper motion space. In this first iteration the overdensity was found at $(-1.76,-1.44)$ mas~yr$^{-1}$ and its radial star density profile in the proper motion space is shown in Figure~\\ref{fig_perfilMP}.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig2.eps}\n\\caption{Radial density profile of the overdensity in the proper motion space centered at $(-1.76,-1.44)$ mas~yr$^{-1}$. The bin size used is $0.05$ mas~yr$^{-1}$ and error bars are from Poisson statistics. The smooth solid line is the best fit to a Gaussian function, which yields a standard deviation of $0.10$ mas~yr$^{-1}$.}\n\\label{fig_perfilMP}\n\\end{figure}\nA Gaussian function with standard deviation $\\sigma=0.10$ mas~yr$^{-1}$ fitted the overdensity profile very well, so we select as kinematic candidate members the $125$ stars falling inside the $3\\sigma = 0.30$ mas~yr$^{-1}$ neighbourhood.\n\n\\subsection{Parallactic candidates}\n\nFigure~\\ref{fig_paralajes} shows the parallax distribution for the kinematically selected stars.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig3.eps}\n\\caption{Distribution of parallax values for the $125$ stars selected by kinematic criteria. Solid lines are two Gaussian functions whose sum was fitted to the distribution, representing field stars (red line) and cluster stars (blue line).
The Gaussian corresponding to the cluster is centered at $1.512$ mas and has a standard deviation of $0.090$ mas.}\n\\label{fig_paralajes}\n\\end{figure}\nWe performed a least-squares fit to the distribution using two (field and cluster) Gaussian functions, which yielded a cluster mean parallax of $plx=1.512$ mas with a standard deviation of $\\sigma_{plx} = 0.090$ mas (blue line). From here, the parallax selection criterion $plx \\pm 3\\sigma_{plx}$ yields $67$ candidates.\n\n\\subsection{Photometric candidates}\n\nThe colour-magnitude diagram for the $67$ previously selected stars is shown in Figure~\\ref{fig_cmd}.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig4.eps}\n\\caption{Colour-magnitude diagram using {\\it Gaia} bands for the stars selected by kinematic and parallax criteria (blue circles). Solid lines are PARSEC isochrones \\citep{Bre12} at a distance of 654~pc, extinction $A_G=0.6$ mag and ages of $\\log T = 8.4$ (black line) and $\\log T = 8.6$ (blue line). Red lines represent an envelope defined by taking the isochrone of age $\\log T = 8.5$ and shifting it $\\pm 0.2$ mag along the $(G_{BP}-G_{RP})$ axis, which is used to select photometric candidates.}\n\\label{fig_cmd}\n\\end{figure}\nThe cluster main sequence is clearly visible. In order to select photometric candidates we have also plotted isochrones generated using PARSEC v3.4\\footnote{http:\/\/stev.oapd.inaf.it\/cgi-bin\/cmd} \\citep{Bre12}. The distance can be estimated by adding $+0.017$ mas to the mean cluster parallax $plx=1.512$ mas to take into account the global parallax zero point of {\\it Gaia} EDR3 \\citep{Gai20}, which gives a distance of 654~pc. Additionally, we assume solar metallicity and the {\\it Gaia} band extinction coefficients given by \\citet{Wan19}.
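The two-Gaussian parallax decomposition and the zero-point-corrected distance described above can be sketched as follows (synthetic parallaxes, drawn with SciPy and NumPy, stand in for the real sample of kinematic candidates; the $+0.017$ mas offset is the EDR3 global zero point quoted in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
# Synthetic stand-in for the kinematic candidates: a narrow cluster
# peak at 1.512 mas plus a broad field component.
plx = np.concatenate([rng.normal(1.512, 0.090, 70),
                      rng.normal(1.0, 0.6, 55)])

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

counts, edges = np.histogram(plx, bins=30, range=(0.0, 3.0))
centres = 0.5 * (edges[:-1] + edges[1:])
p0 = [counts.max(), 1.5, 0.1, 5.0, 1.0, 0.6]     # initial guesses
popt, _ = curve_fit(two_gauss, centres, counts, p0=p0, maxfev=5000)
mu_cl = popt[1]                                   # cluster mean parallax

# Distance: add the EDR3 global parallax zero point and invert.
d_pc = 1000.0 / (mu_cl + 0.017)
print(round(1000.0 / (1.512 + 0.017)))  # -> 654 pc, as quoted in the text
```

The fitted cluster component recovers the input mean parallax to within a few hundredths of a mas, and inverting the zero-point-corrected parallax reproduces the 654 pc distance used for the isochrones.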
\nThe best-fitting isochrones, determined mainly by eye, were those with extinction around $A_G=0.6$ mag and ages in the range $\\log T = 8.4-8.6$ (black and blue lines in Figure~\\ref{fig_cmd}). Given that, at this point, we are working with only one cluster and a few photometric bands, it is not necessary to use a more sophisticated fitting procedure, which would probably lead to similar results \\citep[e.g.][]{Jef16}. The obtained age range is smaller than the age given by \\citet{Kha13} ($8.72$) and higher than the recent estimation by \\citet{Dia21} ($8.19$), but it is in agreement with the values reported by \\citet{Bos19} ($8.60$), \\citet{Zha19} ($8.45$) and \\citet{Can20b} ($8.42$). As photometric selection criterion we use an isochrone with $\\log T = 8.5$ shifted $\\pm 0.2$ mag in $(G_{BP}-G_{RP})$ (red lines in Figure~\\ref{fig_cmd}), which yields $60$ stars.\n\n\\subsection{Cluster member selection}\n\nOnce the best kinematic, parallactic and photometric membership criteria are established, we repeat the strategy of spanning the sampling radius $R_s$. This is the procedure originally suggested in \\citet{San10} for a reliable estimation of the actual cluster radius regardless of the membership assignment method. The expected behaviour is that the number of members $N_m$ increases as $R_s$ increases until the point $R_s = R_c$, and then $N_m$ remains nearly constant (in the ideal case of a ``perfect\" membership assignment) or increases at a slower rate (in more realistic situations with some contamination by field stars). Actually, we do not just determine the member stars but also estimate the number of spurious members, i.e. the expected number of field stars fulfilling the membership criteria. For this, we use the ``local\" field stars, defined as those stars falling in a concentric ring around the proper motion overdensity.
The calculated local field density of stars allows us to estimate the proportion of field star contamination in the overdensity and in the final number of assigned members. Then, we estimate $N_m$ as the number of stars fulfilling the membership criteria minus the expected number of spurious members. Figure~\\ref{fig_Rsam} shows $N_m$ versus $R_s$ for Alessi-Teutsch~9\\ using data from {\\it Gaia} EDR3.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig5.eps}\n\\caption{Number of estimated members $N_m$ for Alessi-Teutsch~9\\ as a function of the sampling radius $R_s$. Solid circles connected by continuous lines refer to members selected by using the kinematic, parallactic and photometric criteria mentioned in Section~\\ref{sec_members}, whereas open circles with dashed lines are for members based exclusively on kinematic criteria. The vertical dashed line indicates the radius $R_s=35$ arcmin at which a slope change is observed.}\n\\label{fig_Rsam}\n\\end{figure}\nOpen circles refer to the number of members estimated by using exclusively kinematic criteria (proper motions), which is the strategy previously used in \\citet{San10,San18,San20}. Although the curve is somewhat noisy, the expected change of slope is clearly observed around $R_s=35$ arcmin. If the full set of membership criteria is used (solid circles), the change of slope is less pronounced but still visible at the same point. This result is consistent with Figure~\\ref{fig_eta}, but now we can unambiguously assign the value $R_c=35$ arcmin to the cluster radius. According to our results, the number of stars fulfilling the membership criteria is $55$, of which we expect that $\\sim 10$ are spurious ($N_m=45$).\n\n\\section{Spectral energy distributions}\n\\label{sec_seds}\n\nThe physical properties of the stars can be derived by fitting theoretical spectra to their Spectral Energy Distributions (SEDs).
In order to create a well-sampled SED (over a wide wavelength range), it is usually assembled or complemented with photometric information from different catalogues. In these cases, great care must be taken when cross-matching different catalogues and combining different photometric systems. An advantage of using OAJ data rather than cross-matching different catalogues is that we have homogeneous multiband photometry in $12$ filters spanning the entire optical range. The SEDs of the $55$ selected stars were analyzed using the Virtual Observatory SED Analyzer (VOSA) tool developed by the Spanish Virtual Observatory \\citep{Bay08}. VOSA has a user-friendly interface that allows the user to choose different models and parameter ranges to find the spectrum that best reproduces the observed data.\n\nFor the fitting procedure we used the ATLAS9 Kurucz ODFNEW\/NOVER models \\citep{Cas97}. Given that the fitting process is quite insensitive to some parameters, such as surface gravity \\citep{Bay08}, we fixed both metallicity and surface gravity at the expected values, i.e. solar metallicity and $\\log g = 4$.\\footnote{We carried out some tests keeping metallicity and surface gravity as free parameters and the results did not differ significantly from those reported here.} Additionally, we adopted the previously obtained distance ($654$ pc) for all the stars. Both the effective temperature ($T_{eff}$) and the visual extinction ($A_V$) were considered as free parameters. Even though not all the stars had valid magnitudes in the $12$ photometric bands, their SEDs were well fitted by the models and converged to valid (physically reasonable) solutions.
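As an illustration of this kind of fit, the sketch below performs a chi-square grid search with $T_{eff}$ and $A_V$ free, in the spirit of the procedure described above. Blackbody spectra and a toy $1/\\lambda$ reddening law are hypothetical stand-ins for the ATLAS9 models and the real extinction curve, so this is not the VOSA implementation:

```python
import numpy as np

# Filter effective wavelengths in Angstrom, from Table 1.
wl = np.array([3485., 3785., 3950., 4100., 4300., 4803., 5150.,
               6254., 6600., 7668., 8610., 9114.])

def model_flux(teff, av):
    """Blackbody f_lambda reddened with a toy A_lambda = A_V*(5500/lambda)."""
    h, c, k = 6.626e-27, 2.998e10, 1.381e-16        # cgs constants
    lam = wl * 1e-8                                  # Angstrom -> cm
    bb = (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * teff))
    a_lam = av * (5500.0 / wl)                       # hypothetical law
    return bb * 10.0 ** (-0.4 * a_lam)

def fit_sed(obs):
    """Grid search over (Teff, A_V); the overall scale (dilution
    factor) is solved analytically at each grid point."""
    best, best_chi2 = None, np.inf
    for teff in np.arange(3500.0, 13001.0, 250.0):
        for av in np.arange(0.0, 1.501, 0.05):
            m = model_flux(teff, av)
            scale = np.dot(obs, m) / np.dot(m, m)    # least-squares scale
            chi2 = np.sum(((obs - scale * m) / obs) ** 2)
            if chi2 < best_chi2:
                best, best_chi2 = (teff, av), chi2
    return best

# Sanity check: recover the parameters of a synthetic star.
teff_fit, av_fit = fit_sed(1e-3 * model_flux(6000.0, 0.6))
print(teff_fit, round(av_fit, 2))  # -> 6000.0 0.6
```

Solving the scale factor analytically at each grid point is what makes the distance (or stellar radius) degenerate with the flux normalization rather than with $T_{eff}$ and $A_V$.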
Figure~\\ref{fig_seds} shows the results for two examples: the best and the worst fits obtained.\n\\begin{figure*}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig6a.eps}\n\\includegraphics[width=\\columnwidth]{sanchez_fig6b.eps}\n\\caption{Two examples of fitted SEDs. Dashed lines are the original observed photometric data, solid circles with error bars indicate dereddened data points and solid lines are the best-fitting Kurucz model. The left panel corresponds to the best result, for which $T_{eff} = 13000$~K and $A_V=0.6$ mag, whereas the right panel is the worst case, with $T_{eff} = 6000$~K and $A_V=0.6$ mag.}\n\\label{fig_seds}\n\\end{figure*}\n$T_{eff}$ and $A_V$ values were inferred for the $55$ stars from the fits of their SEDs. Effective temperatures are all in the range $T_{eff} = 3500-13000$~K. The mean $A_V$ value is $0.65$ mag (the median is $0.60$ mag) with a standard deviation of $0.19$ mag. Our mean visual extinction is higher than the value reported by \\citet{Bos19} ($0.443$) and smaller than that of \\citet{Dia21} ($0.82$), but it agrees with the extinction estimated by \\citet{Can20b} ($0.63$). There are $41$ stars with extinctions and temperatures calculated using only {\\it Gaia} DR2 photometry by \\citet{And18}. For these $41$ stars, the mean extinction reported in {\\it Gaia}'s $G$ band is $A_G=0.66$ mag (standard deviation $0.23$ mag). However, the effective temperatures reported in {\\it Gaia} DR2 are systematically smaller than the ones derived in this work using multiband photometry from OAJ, as can be seen in Figure~\\ref{fig_Teff}.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig7.eps}\n\\caption{Comparison between effective temperatures reported in {\\it Gaia} DR2 and those derived in this work for the $41$ stars with data in common.
Solid line is the 1:1 relation.}\n\\label{fig_Teff}\n\\end{figure}\nThe data points that deviate the most from the 1:1 line\nin Figure~\\ref{fig_Teff}\ncorrespond to the hottest stars ($T_{eff} > 10^4$ K) and \nthis is because \\citet{And18}'s calculations were restricted \nto the range $3000-10000$ K.\n\n\\section{Properties of the open cluster Alessi-Teutsch~9}\n\\label{sec_properties}\n\nThe full list of cluster members, including properties \nresulting from SED fitting, is provided in \nTable~\\ref{tab_members} (fully available online).\n\\begin{table*}[h]\n\\centering\n\\caption{List of star members of the open cluster Alessi-Teutsch~9\\ and\ntheir derived properties including equatorial coordinates \n(J2000), proper motions, parallaxes, magnitudes and colors \nin the {\\it Gaia} bands, visual extinctions, effective \ntemperatures, bolometric luminosities, and stellar masses \nand radii.}\n\\label{tab_members}\n\\begin{tabular}{cccccccccccc}\n\\hline\nRA\\_J2000 & DE\\_J2000 & $\\mu_\\alpha\\cos\\delta$ & $\\mu_\\delta$ &\n$plx$ & $G$ & $BP-RP$ & $A_V$ & $T_{eff}$ & $L_{bol}$ &\n$M$ & $R$ \\\\\n(deg) & (deg) & (mas\/yr) & (mas\/yr) & (mas) &\n(mag) & (mag) & (mag) & ($K$) & ($L_{\\sun}$) & ($M_{\\sun}$) &\n($R_{\\sun}$) \\\\\n\\hline\n 51.870058 & 34.983293 & -1.720 & -1.398 & 1.4843 & 13.910 & 1.007 & 0.60 & 6000 & 1.35 & 0.42 & 1.07 \\\\\n 51.915978 & 35.023862 & -1.776 & -1.462 & 1.4968 & 11.943 & 0.501 & 0.60 & 8000 & 8.66 & 0.84 & 1.52 \\\\\n 51.806890 & 34.944837 & -1.857 & -1.639 & 1.5207 & 12.362 & 0.681 & 0.60 & 7250 & 5.72 & 0.83 & 1.51 \\\\\n 51.788046 & 34.984700 & -1.757 & -1.351 & 1.6150 & 19.350 & 2.727 & 0.75 & 3500 & 0.01 & 0.04 & 0.35 \\\\\n 51.909316 & 34.914086 & -1.702 & -1.435 & 1.5637 & 12.470 & 0.871 & 0.60 & 6750 & 5.05 & 0.97 & 1.63 \\\\\n...\\\\\n\\hline\n\\multicolumn{12}{l}{Note: here we show only\na portion for guidance regarding table form and content.\nThe full version is available online.}\n\\end{tabular}\n\\end{table*}\nThe 
mean properties of Alessi-Teutsch~9\\ derived directly from the \nselected members are summarized in Table~\\ref{tabAT9}, \nwhere a comparison is also made with other works, \nespecially the most recent studies using data from \n{\\it Gaia} DR2 \\citep{Bos19,Can20a,Can20b}.\n\\begin{table}[t]\n\\caption{Mean parameters of Alessi-Teutsch~9}\n\\label{tabAT9}\n\\resizebox{\\columnwidth}{!}{\n\\begin{tabular}{@{}lccc}\n\\hline\nParameter & This work & Other works & Ref.\\\\\n\\hline\nRight ascension (deg) & $51.8261$ & $51.8700$ & (1) \\\\\nDeclination (deg) & $34.9361$ & $34.9810$ & (1) \\\\\n$\\mu_\\alpha\\cos\\delta$ (mas~yr$^{-1}$) & $-1.801$ & $-1.737$ & (1) \\\\\n$\\mu_\\delta$ (mas~yr$^{-1}$) & $-1.427$ & $-1.368$ & (1)\\\\\nNumber of members & $55$ & $71$ & (1) \\\\\nDistance (pc) & $654$ & $672$ & (1) \\\\\n & & $660$ & (2) \\\\\n & & $606$ & (3) \\\\\n & & $700$ & (4) \\\\\n & & $643$ & (5) \\\\\nlog(age) (yr) & $8.5$ & $8.42$ & (2) \\\\\n & & $8.60$ & (3) \\\\\n & & $8.72$ & (4) \\\\\n & & $8.18$ & (5) \\\\\n & & $8.45$ & (6) \\\\\n$A_V$ (mag) & $0.65$ & $0.63$ & (2) \\\\\n & & $0.443$ & (3) \\\\\n & & $0.820$ & (5) \\\\\nRadius (arcmin) & $35$ & $31.8$ & (4) \\\\\n & & $29.8$ & (7) \\\\\n & & $25.7$ & (8) \\\\\nTotal mass ($M_{\\sun}$) & $35.8$ \\\\\nTotal luminosity ($L_{\\sun}$) & $992$ \\\\\n\\hline\n\\multicolumn{4}{p{\\columnwidth}}{\nReferences:\n(1) \\citet{Can20a};\n(2) \\citet{Can20b};\n(3) \\citet{Bos19};\n(4) \\citet{Kha13};\n(5) \\citet{Dia21};\n(6) \\citet{Zha19};\n(7) \\citet{Dia14};\n(8) \\citet{Sam17}.}\n\\end{tabular}}\n\\end{table}\nOut of these $55$ members, $45$ are listed in the catalogue \nof \\citet{Can20a}, although only $37$ were assigned as \nprobable members ($p > 0.5$). \nIf we restrict the sample to stars brighter than\n$G = 18$ mag, the constraint imposed by \\citet{Can20a},\nthen we get $49$ members of which $45$ ($92$\\%) are\nin common with their catalogue. 
The $4$ remaining stars\nare fainter than $G = 17$ mag but show no particularities\nin spatial positions, proper motions, parallaxes or their\nerrors; they likely result from the different methods used\nor from the differences in proper motions and parallaxes \nbetween {\\it Gaia} DR2 and EDR3. However,\nthe sampling \nradius used by \\citet{Can20a} is around twice ours. If we \nuse $R_s=70$ arcmin instead of $R_s=35$ arcmin we obtain a \ntotal of $79$ stars fulfilling the membership criteria, a number \nvery similar to the $71$ members reported by \\citet{Can20a}.\nIn this case, the estimated number of spurious members is $19$, \nwhich means a field star contamination of $24$\\%. Then, it \nis reasonable to assume that about a quarter of \\citet{Can20a}'s \nmembers are actually spurious. Using an unsuitable sampling \nradius may obviously affect the estimate of some important \nparameters, such as for instance the total cluster mass. \nOther parameters could be less affected. For Alessi-Teutsch~9, using \n$R_s=70$ arcmin, the cluster's center shifted by $\\sim 4$ arcmin, \nthe proper motion centroid by $\\sim 0.01-0.02$ mas~yr$^{-1}$ and \nthe distance remained unchanged. However, depending on the nature \nof the data and on the details of the method used, these \ndifferences could be more significant, hence the \nimportance of a reliable {\\it previous} determination \nof the optimal sampling radius.\n\nFigure~\\ref{fig_XY} shows the spatial distribution of \nthe $55$ member stars.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig8.eps}\n\\caption{Spatial distribution of Alessi-Teutsch~9\\ members relative \nto the cluster's center.}\n\\label{fig_XY}\n\\end{figure}\nIt appears quite irregular although a central concentration\nof stars can be discerned. 
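That central concentration can be quantified with a radial density profile: member counts in $5$-arcmin-wide annuli divided by the annulus area, with Poisson error bars, as in Figure~\ref{fig_perfilXY}. A minimal Python sketch (the member positions here are synthetic stand-ins, not the actual coordinates of Table~\ref{tab_members}):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for the 55 member offsets from the cluster centre
# (arcmin); the real coordinates are those of Table tab_members.
r_members = 35.0 * rng.random(55) ** 2   # centrally concentrated radii

def radial_profile(r, r_max=35.0, width=5.0):
    """Surface density in annuli of the given width, with Poisson errors."""
    edges = np.arange(0.0, r_max + width, width)
    counts, _ = np.histogram(r, bins=edges)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)  # annulus areas, arcmin^2
    return edges, counts / area, np.sqrt(counts) / area  # Poisson: sigma_N = sqrt(N)

edges, density, error = radial_profile(r_members)
```

The same $5$-arcmin binning and $\sqrt{N}$ error bars are those quoted in the caption of Figure~\ref{fig_perfilXY}.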
The corresponding radial density\nprofile is plotted in Figure~\\ref{fig_perfilXY}.\n\\begin{figure}[t]\n\\includegraphics[width=\\columnwidth]{sanchez_fig9.eps}\n\\caption{Radial spatial density profile for the cluster Alessi-Teutsch~9. \nThe bins are $5$ arcmin wide and the vertical error bars are \nestimated by assuming Poisson statistics.}\n\\label{fig_perfilXY}\n\\end{figure}\nThe small ``valley'' seen around $r \\sim 15$~arcmin roughly \nagrees with that observed in Figure~\\ref{fig_eta} and \nindicates an apparent lack of stars in this region. Given \nthe relatively low number of members, this feature is likely \na random fluctuation. What does seem clear is a two-component \nstructure in the radial profile: a more concentrated core of \nradius $R_{core} \\sim 12-13$ arcmin and an outer halo extending \nout to $r = R_c$. These values are consistent with the \nrelation $R_c \\sim 3 R_{core}$ found by \\citet{Mac07}.\nWe analysed both populations (core and halo) separately but \ndid not find patterns or significant differences between them.\n\nThe cluster mean proper motion derived in this work\ndiffers from that calculated by \\citet{Can20a} by less than\n$\\sim 0.1$ mas~yr$^{-1}$, which is the typical proper motion\nuncertainty in the sample.\nThe differences \nwith earlier works \\citep{Kha13,Dia14,Sam17} are much larger \n($> 1$ mas~yr$^{-1}$) but this is not surprising since these \nstudies were carried out with less precise data from the PPMXL \nor UCAC4 catalogues. The number of members assigned in these \npre-{\\it Gaia} catalogues is unrealistically high ($\\sim 10^3$), \nlikely for the same reason: larger proper motion errors \nimply larger overdensity areas and a higher number of kinematic \ncandidates.\n\nRegarding the cluster distance, our result ($654$ pc) lies in \nbetween the estimates by \\citet{Kha13} and \\citet{Bos19} and \nclose to the result of \\citet{Can20a} (see Table~\\ref{tabAT9}). 
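Converting the $35$ arcmin sampling radius at the adopted distance of $654$ pc into a linear radius is a one-line calculation:

```python
import math

distance_pc = 654.0    # adopted cluster distance
radius_arcmin = 35.0   # optimal sampling radius R_s
linear_radius_pc = distance_pc * math.tan(math.radians(radius_arcmin / 60.0))
print(round(linear_radius_pc, 1))   # -> 6.7
```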
\nAt this distance, the cluster linear radius is $6.7$~pc and the \ncorresponding number density of stars would be $\\Sigma_c \\simeq \n0.06$ star\/pc$^3$. This makes Alessi-Teutsch~9\\ one of the least dense known \nOCs. Star density is a parameter difficult to determine accurately \nbecause it depends on the actual cluster radius, which is an \nuncertain parameter as already discussed in this work \\citep[see \nalso][]{San18,San20}. We cross-matched \\citet{Can20a}'s catalogue \nwith the \\citet{San20} catalogue of reliable OC radii and we \nfound $334$ clusters in common, of which only one had \na star density smaller than Alessi-Teutsch~9\\ (Ruprecht~45 with $\\sim 0.04$ \nstar\/pc$^3$). The corresponding relaxation time, the time \nnecessary for the cluster to reach a Maxwellian velocity \ndistribution, can be calculated as\\footnote{This equation \nactually refers to the relaxation time {\\it at the half-mass \nradius}, because $T_E$ varies through the cluster.} \\citep{Spi71}\n\\begin{equation}\nT_E = \\frac{0.06 ~ N_m^{1\/2} ~ R_h^{3\/2}}{G^{1\/2}\n~ \\overline{m}^{1\/2} ~ \\log (0.4 N_m)}\n\\end{equation}\nwhere $N_m$ is the number of cluster members, $R_h$ the half-mass \nradius and $\\overline{m}$ the mean mass of the cluster stars \n($G$ is the universal gravitational constant). Taking \n$\\overline{m} = 0.5 M_{\\sun}$ and assuming that $R_h \n\\simeq R_c\/2$ we get $T_E \\simeq 40$ Myr, which means \nthat Alessi-Teutsch~9\\ is a (very) relaxed system. Additionally, \nbased on the diagnosis criterion proposed by \\citet{Gie11}, \nwe can say that Alessi-Teutsch~9\\ is gravitationally bound because its \ncrossing time, which is of the same order of magnitude as \n$T_E$ \\citep{Spi71}, is significantly smaller than its age.\nNote that choosing an unsuitable value of $R_s=70$ arcmin \n(and $N_m=79$) yields $T_E \\simeq 130$ Myr. \nReliable estimates of important characteristic times (e.g. 
\ndynamical relaxation time, crossing time or evaporation time) \ncan be obtained only if reliable measurements for the radius, \nnumber of members or total mass are available.\n\n\\section{Conclusions}\n\\label{sec_conclusions}\n\nThis paper describes a pilot study that implements a novel\napproach to determine the properties of an OC in a reliable\nway. For a given set of membership criteria, the number of\nestimated cluster members should be plotted as a function\nof the sampling radius. The point at which the slope of the\ncurve flattens indicates the actual cluster radius, and both\nmember stars and cluster properties should be derived at\nthis point. By using this strategy, we used data from\n{\\it Gaia} EDR3 to study the cluster Alessi-Teutsch~9\\ and we obtained\na radius of $35$ arcmin and $55$ member stars, of which\nwe expect that $\\sim 10$ are spurious (field stars\nfulfilling the membership criteria). We complemented these\ndata with observed $12$-band photometry to determine the\nfinal cluster properties (Table~\\ref{tabAT9}).\nOur results show that Alessi-Teutsch~9\\ is a bound, dynamically relaxed\ncluster having a very low number density ($\\simeq 0.06$\nstar\/pc$^3$) and a two-component structure with an outer\nhalo and a central core of radius $\\sim 12-13$ arcmin.\n\nCurrently, there is a prevailing trend to determine OC\nproperties with totally automated techniques \n\\citep[see][for recent examples]{Gao18,Yon19,Gao20,Aga21}.\nFor these cases, we highlight the importance of choosing\nan optimal sampling radius {\\it previous} to any calculation.\nAdditionally, it is important to complement the broad\n{\\it Gaia} bands with photometry in multiple narrow or\nmedium bands such as the $12$ filters used in this work.\nHowever, addressing this last issue is not\nstraightforward because it involves dedicated observations\nin multiple filters for each cluster.\nIn this sense, the forthcoming data from the\nJavalambre-Physics of the 
Accelerated Universe Astrophysical\nSurvey \\citep[J-PAS,][]{Ben14} \ncould be very useful. J-PAS will cover at least\n$8000$ deg$^2$ using a unique set of 56 optical filters\nand these data can be used to apply the\nproposed approach to OCs falling within the survey area.\n\n\\acknowledgments\nWe are very grateful to the referee for his\/her careful\nreading of the manuscript and helpful comments and\nsuggestions, which improved this paper.\nNS acknowledges support from the Spanish Ministerio de Ciencia,\nInnovaci\\'on y Universidades through grant PGC2018-095049-B-C21.\nF.~L.-M. acknowledges partial support by the Fondos de \nInversiones de Teruel (FITE).\nBased on observations made with the JAST80 telescope at the \nObservatorio Astrof\\'isico de Javalambre (OAJ), in Teruel, \nowned, managed, and operated by the Centro de Estudios de \nF\\'isica del Cosmos de Arag\\'on. We thank the OAJ Data \nProcessing and Archiving Unit (UPAD) for reducing and \ncalibrating the OAJ data used in this work.\nThis work has made use of data from the European Space Agency\n(ESA) mission {\\it Gaia} (\\url{https:\/\/www.cosmos.esa.int\/gaia}),\nprocessed by the {\\it Gaia} Data Processing and Analysis Consortium\n(DPAC, \\url{https:\/\/www.cosmos.esa.int\/web\/gaia\/dpac\/consortium}).\nFunding for the DPAC has been provided by national institutions,\nin particular the institutions participating in the {\\it Gaia} \nMultilateral Agreement.\n\n\\section*{Funding}\nFunding for OAJ and UPAD has been provided by the \nGovernments of Spain and Arag\\'on through the Fondo \nde Inversiones de Teruel; the Spanish Ministry of Science, \nInnovation and Universities (MCIU\/AEI\/FEDER, UE) with grant \nPGC2018-097585-B-C21; the Spanish Ministry of Economy and \nCompetitiveness (MINECO\/FEDER, UE) under AYA2015-66211-C2-1-P, \nAYA2015-66211-C2-2, AYA2012-30789, and ICTS-2009-14; and \nEuropean FEDER funding (FCDD10-4E-867, FCDD13-4E-2685).\n\n\\section*{Conflicts of interests}\nThe authors have 
no conflicts of interest to declare\nthat are relevant to the content of this article.\n\n\\bibliographystyle{spr-mp-nameyear-cnd}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nSilicon quantum dots have proven to be a promising candidate system for realizing quantum information technologies due to their long coherence times, fast gate times, potential for scalability, and integration within the current semiconductor industry. Significant progress has been made in the study of Si-based quantum-dot type qubits using either Si-MOS or Si-Ge devices, including these representative (but by no means exhaustive) recent experimental publications \\cite{HuangNat2019,MaurandNC2016,ZhaoNC2019,YangNat2020,PetitNat2020,FogartyNC2018,VeldhorstNat2015,ZajacSci2018,WatsonNat2018,ZajacPRApp2016,MillsNC2019,BorjansNat2020,SigillitoNPJQI2019,XuePRX2020,KawakamiNN2014,XuePRX2019}. An eventual large-scale quantum computer will solve problems which no classical digital computer can. One such problem, which ushered in the modern era of quantum computing, is prime factorization via the Shor algorithm \\cite{ShorSIAMJC1997}. This is a problem of great technological interest, as its hardness underpins all modern cryptography. Factorizing a 1000-digit number, which classical computers cannot do since the computational cost is exponential in the number of digits, may require roughly a million logical qubits. Since quantum error correction is essential for quantum computing to actually work, each logical qubit may easily require several thousand physical qubits for its realization, so in the end a hypothetical quantum CPU trying to decisively beat classical computers in doing prime factorization may require $\\sim 10^{10}$ physical qubits. Although this sounds like a huge number of qubits, an ordinary CPU chip today may host $10^{10}$ transistors or bits. 
Each of these transistors or classical bits is made of silicon, giving Si a huge materials and technological advantage in building a quantum computer. In this context, it is understandable why there are multiple large groups all over the world (e.g. INTEL, Princeton, Wisconsin, Sandia, Delft, Sydney, Hefei...) involved in developing Si-based qubits, although in terms of the number of working qubits today, the Si system is well behind ion trap or superconducting transmon qubits. So far, only 1-4 Si qubits have been successfully demonstrated in the laboratory.\n\nThe physical property being used to create the Si qubit is the electron spin localized within an effective Si quantum dot near a Si surface or interface with another material, such as the Si-SiO$_2$ MOS system or the Si-Ge 2D electron system. Since electron spin is by definition a quantum 2-level system, an isolated localized electron spin, if it can be manipulated without much decoherence, is an ideal qubit. Silicon has an enormous advantage because electron spins in Si are long-lived, with long spin coherence times even for natural Si, which can be enhanced greatly by isotopic purification \\cite{WitzelPRL2010}. In addition, 2-qubit gates can be implemented by exchange coupling neighboring localized spins by electrically controlling the tunnel coupling between neighboring quantum dots, allowing very fast gate operations. Thus, long spin coherence time, fast electrostatic gating, and the existing Si chip technology allowing scaling up in principle make Si-based quantum information processing extremely attractive. There is, however, one serious fundamental problem: the valley degree of freedom. Bulk Si has six equivalent conduction band minima, with the ground state of Si quantum dots having two valleys which are energetically degenerate in the ideal limit. Thus, the ground-state electron spin in a Si quantum dot can in principle be in any of the four degenerate spin-valley states. 
Typically, there is always some valley splitting associated with the surface\/interface \\cite{SaraivaPRB2009,SaraivaPRB2011}, but the magnitude of this valley splitting is uncontrollable and varies from dot to dot in an essentially random manner. The valley problem in silicon quantum dots, which has received little attention in the literature so far although most researchers in the field recognize its importance \\cite{WitzelPRL2010,SaraivaPRB2009,SaraivaPRB2011,CulcerPRL2012,DodsonARXIV2021,CorriganARXIV2020}, is the topic of the current theoretical work.\n\nThe Si valley problem has mostly been considered in the context of the valley splitting in the quantum dot qubit being large compared with the qubit temperature ($\\sim$ 25-100 mK) so that the thermal occupancy of the higher valley states remains negligible, enabling a valid 2-level quantum description of the system in terms of only the electron spin states. This is indeed a serious potential problem, as the qubit can no longer be defined if higher valley states are occupied. But this may not be the only problem when two-qubit gate operations are carried out using inter-qubit exchange coupling. We find that the relative value of the valley splitting with respect to the exchange coupling becomes an important limiting factor even at $T=0$, where any valley splitting is by definition much larger than the temperature. In particular, we find that the valley splitting must be much larger than the inter-qubit exchange coupling to avoid leakage (i.e. quantum decoherence) for the Si system to operate as a multi-qubit quantum computing platform, and we also find that even when the valley splitting is large, the initialization of the valley states becomes a crucial consideration in multiqubit gate operations. 
This problem of valley splitting in the context of multiqubit gate operations as determined by the exchange coupling between quantum dots has not yet attracted attention in the literature, perhaps for two reasons: (1) There are very few reports of two-qubit exchange gate operations in Si quantum dot qubits; (2) the currently achieved values for the exchange coupling are very small so that the condition of a valley splitting being larger than the exchange energy is automatically satisfied when the valley splitting is larger than temperature. Since the speed of the 2-qubit gate operations is determined by the exchange coupling strength, a higher exchange coupling strength is desirable in the future for progress in Si-based quantum computing, and we want to alert the community that the valley splitting issue is fundamental to 2-qubit gate operations as a totally distinct problem from the one involving thermal occupancies of higher valley states.\n\nThe problem with having multiple valley states is that their presence can lead to leakage out of the computational space if the valley degeneracy is not adequately broken. Of the six valley states present in bulk Silicon, four decouple when strain is applied to the sample, but the degeneracy of the final two valley states is broken only by a small valley splitting term that is dependent on the microscopics of the system \\cite{BorjansPRXQ2021}. We emphasize that there is no known way to control this valley splitting in specific qubits, and in fact, one can figure out the size of the valley splitting only a posteriori. There are no existing in situ sample diagnostics providing the valley splitting information for the working qubits beforehand. In the current work, we analyze the effect of valleys on the coupled qubit dynamics, finding that valleys are a much bigger problem for coupled qubits than has been realized so far, and the valley problem worsens radically as the number of qubits goes beyond two. 
In fact, we believe that the Si quantum computing community should worry about the valley problem now before building circuits with tens of coupled quantum dots and finding out that they do not work because of the valley problem. The problem of principle we have uncovered here can be `fixed' by having small exchange coupling ($\\ll$ valley splitting), but this means that the 2-qubit gate operations will remain bounded by the valley splitting energies.\n\nIn this paper, we use a Hubbard model to determine the dynamics of a system of two exchange-coupled quantum dots. We show that spin-valley entanglement can easily arise from time evolution of the system, which is detrimental to the use of the system for quantum information applications. This can be avoided if the valley splitting is large and electrons can be initialized to valley eigenstates, and in fact we show that under these conditions the system maintains coherence and is unaffected to leading order by the valley degree of freedom. If any electron begins in a superposition of valley states then spin-valley entanglement will result, but if the valley splitting is sufficiently large, electrons can be properly initialized to the valley ground state avoiding this situation. However, if the valley splitting is small compared to the exchange interaction strength, then an undesirable spin-valley entanglement is more difficult to avoid, as it will be present unless all electrons are initialized to the same valley state, which is generally difficult to achieve without a large valley splitting.\n\nAdditionally, we show that when the valley splitting is small, the measurement probabilities of a two-qubit system are unaffected and are identical to the corresponding measurement probabilities in an ideal system without valley degrees of freedom. 
However, this is not the case in systems with more than two qubits, and we give examples of gate sequences which give different measurement probabilities in a system with valley states than in an ideal system. Because valley degeneracy affects larger systems but not two-qubit systems, two-qubit gate fidelities measured in two-qubit systems may not accurately account for the effects of valley states, as these effects are only observable in larger systems containing more than just two qubits. This last property we discover has not been mentioned in the literature at all, and there has been a feeling that if the 2-qubit gates work, the valley degeneracy problem is irrelevant. We show that this is false: one could have perfectly working 2-qubit gates, but the system will lose quantum information through spin-valley entanglement as one scales up to more qubits. This is a very serious issue requiring a resolution before more qubits are added to the circuit. We work at $T=0$ throughout so that the well-understood problem of the thermal occupancy of higher valley states is a non-issue. We consider the valley degeneracy question only in the context of gate operations driven by the inter-dot exchange coupling. Just to avoid any misunderstanding, our definition of `large' and `small' valley splitting is relative to the inter-dot exchange coupling, not to temperature, as we are at $T=0$.\n\nThis paper is organized as follows: in Sec. II, we present our model and give the Hamiltonian that we use. In Sec. III, we diagonalize the Hamiltonian for a system of two electrons in two quantum dots, for both triplet and singlet spin configurations. In Sec. IV, we discuss the dynamics of the two-qubit system, first for the case where the valley splitting is large, then also for the case of small or zero valley splitting. In Sec. 
V, we give examples of gate sequences in four-qubit systems where valley effects are observable even when they are not detectable in the 2-qubit situation, and we give our conclusions in Sec. VI.\n\n\\section{Model and Hamiltonian}\n\nThe Fermi surface of Silicon contains six electron pockets, leading to a six-fold degeneracy in the band structure minima. By applying tensile strain to the sample, two of these valleys are energetically separated from the other four, but remain nearly degenerate to each other. Therefore, in addition to its spin, an electron in a Silicon quantum dot will contain a valley quantum number denoting whether it fills the $\\ket{+z}$ or $\\ket{-z}$ valley state. Microscopic features of the system introduce a small position-dependent valley splitting. We consider a double quantum dot in Silicon, which yields the following effective Hubbard Hamiltonian \\cite{BorjansPRXQ2021,DasSarmaPRB2011,YangPRB2011}:\n\n\\begin{align*}\n&H=\\!\\!\\sum_{s\\in\\{\\uparrow,\\downarrow\\}}\\sum_{i=1}^2\\Big(\\tilde{\\Delta}_ic^\\dagger_{i,+z,s}c_{i,-z,s}+\\tilde{\\Delta}_i^*c^\\dagger_{i,-z,s}c_{i,+z,s}\\Big)\\\\&+\\frac{\\epsilon}{2}(n_1-n_2)+t_c\\!\\!\\!\\sum_{s\\in\\{\\uparrow,\\downarrow\\}}\\sum_{v=\\pm z}\\Big(c^\\dagger_{1,v,s}c_{2,v,s}+c^\\dagger_{2,v,s}c_{1,v,s}\\Big)\\\\&+\\sum_{i=1}^2\\frac{U}{2}n_i(n_i-1)\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\end{align*}\n\nwhere $c_{i,v,s}$ is the second quantized annihilation operator for dot $i$, valley state $v$, and spin $s$, and where $n_i=n_{i,+z,\\uparrow}+n_{i,+z,\\downarrow}+n_{i,-z,\\uparrow}+n_{i,-z,\\downarrow}$. Here $\\epsilon$ is the detuning between the two quantum dots, $t_c$ is the tunneling constant between the two dots, and $\\tilde{\\Delta}_i$ determines the valley splitting of dot $i$. We define $U$ to be the Coulomb energy difference between a dot occupied by two electrons, and a state with one electron occupying each dot. 
Thus we use a short-range Coulomb interaction term which is independent of valley states (including an explicit valley dependence in the interaction complicates the calculation, but does not affect our conclusion). Let $\\tilde{\\Delta}_i=\\Delta_ie^{-i\\phi_i}$, where $\\Delta_i=|\\tilde{\\Delta}_i|$. Define $\\ket{i_\\pm}=(\\ket{i,+z}\\pm e^{i\\phi_i}\\ket{i,-z})\/\\sqrt{2}$. Here $\\ket{i_\\pm}$ are the eigenstates of a single electron in a single dot $i$. Then for a single electron in a double-quantum-dot, transforming $H$ into the basis $\\{\\ket{1_+},\\ket{1_-},\\ket{2_+},\\ket{2_-}\\}$ gives the following:\n\n\\begin{equation}\nH=\\begin{pmatrix}\n\\epsilon\/2+\\Delta_1&0&t_+&t_-\\\\\n0&\\epsilon\/2-\\Delta_1&t_-&t_+\\\\\nt_+^*&t_-^*&-\\epsilon\/2+\\Delta_2&0\\\\\nt_-^*&t_+^*&0&-\\epsilon\/2-\\Delta_2\n\\end{pmatrix}\n\\end{equation}\n\nwhere $t_\\pm=(1\\pm e^{i(\\phi_2-\\phi_1)})t_c\/2$. This is equivalent to the two-dot Hamiltonian given in Ref. \\onlinecite{BorjansPRXQ2021} up to a constant energy shift. We emphasize that this Hubbard model Hamiltonian is in fact an accurate description of quantum dot qubit coupling in reasonably realistic situations \\cite{DasSarmaPRB2011,YangPRB2011,StaffordPRL1994}.\n\n\\section{Diagonalization of Hamiltonian}\n\nIn order to determine the effects of valley degeneracy and splitting on the exchange coupling between two dots, we consider two electrons in this two-dot system, diagonalizing the resulting Hamiltonian. We do this separately for the case when the electrons form a spin triplet and when they form a spin singlet.\n\n\\subsection{Triplet Spin Configuration}\n\nConsider two electrons with a triplet spin configuration. Then due to the Pauli exclusion principle, the electrons must occupy different orbital or valley states. We will assume that $U\\gg t_c,\\Delta_j,\\epsilon$, as the short-range Coulomb interaction energy is the largest energy scale in the system. 
Then there are four low energy states: $\\ket{1_+2_+}$, $\\ket{1_+2_-}$, $\\ket{1_-2_+}$, and $\\ket{1_-2_-}$. These states couple via tunneling to the two high energy states $\\ket{1_+1_-}$ and $\\ket{2_+2_-}$, where the electrons occupy both valley states in a single dot. Perturbation theory to first order in $U^{-1}$ gives the following Hamiltonian for the four lowest-energy states after including the effects of coupling to the two high energy states:\n\n\\begin{widetext}\n\\begin{equation}\nH_T=\\begin{pmatrix}\n\\Delta_1+\\Delta_2-\\frac{2|t_-|^2}{U}&\\frac{t_-t_+^*-t_+t_-^*}{U}&\\frac{t_+t_-^*-t_-t_+^*}{U}&\\frac{2|t_-|^2}{U}\\\\\n\\frac{t_+t_-^*-t_-t_+^*}{U}&\\Delta_1-\\Delta_2-\\frac{2|t_+|^2}{U}&\\frac{2|t_+|^2}{U}&\\frac{t_-t_+^*-t_+t_-^*}{U}\\\\\n\\frac{t_-t_+^*-t_+t_-^*}{U}&\\frac{2|t_+|^2}{U}&-\\Delta_1+\\Delta_2-\\frac{2|t_+|^2}{U}&\\frac{t_+t_-^*-t_-t_+^*}{U}\\\\\n\\frac{2|t_-|^2}{U}&\\frac{t_+t_-^*-t_-t_+^*}{U}&\\frac{t_-t_+^*-t_+t_-^*}{U}&-\\Delta_1-\\Delta_2-\\frac{2|t_-|^2}{U}\n\\end{pmatrix}\n\\label{eqn:ht4}\n\\end{equation}\n\\end{widetext}\n\nNote that to leading order in $\\epsilon\/U$, the four low energy states are independent of $\\epsilon$. This is because $\\epsilon$ only affects the energies of states where both electrons occupy the same dot, and these are already energetically separated from the other states by the large onsite Coulomb interaction strength $U$ and hence do not contribute in the leading order. If $\\epsilon$ is allowed to be of the same scale as $U$, as is done in some experiments to control the exchange interaction, then these results must be adjusted accordingly. Specifically, where $U$ appears in the denominator of terms in Eq. (\\ref{eqn:ht4}) it must be replaced with $U\\pm\\epsilon$ depending on which state introduces each term. However, for the purposes of this paper we will focus on the situation where $\\epsilon\\ll U$, as is the case in experiments which use barrier control of the exchange interaction. 
In order to further study the system dynamics, Eq. (\\ref{eqn:ht4}) must be fully diagonalized. Since this cannot be easily analytically done for arbitrary $\\Delta_i$ and $t_c$, we instead consider two different limits for small and large $\\Delta_i$. In the limit where $\\Delta_i\\gg t_c^2\/U$, the matrix in eq. (\\ref{eqn:ht4}) is already diagonalized to leading order in $t_c^2\/U\\Delta_i$, and the energies are given by its diagonal entries. In the limit where $\\Delta_i\\ll t_c^2\/U$, diagonalizing Eq. (\\ref{eqn:ht4}) yields the following energies:\n\n\\begin{align*}\n&E_1=-\\frac{4t_c^2}{U}\\\\\n&E_2=-|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|\\\\\n&E_3=0\\\\\n&E_4=|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|=\\sqrt{\\Delta_1^2+\\Delta_2^2+2\\Delta_1\\Delta_2\\cos(\\phi_2-\\phi_1)}\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:et}\n\\end{align*}\n\nThe corresponding eigenstates are given by:\n\n\\begin{widetext}\n\\begin{align*}\n&\\ket{\\psi_1}=\\frac{1}{\\sqrt{2}t_c}\\Big(t_-\\ket{1_+2_+}-t_+\\ket{1_+2_-}+t_+\\ket{1_-2_+}-t_-\\ket{1_-2_-}\\Big)\\\\\n&\\ket{\\psi_2}=\\Big[\\frac{(\\Delta_2+\\Delta_1)t_+}{2t_c|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|}\\big(\\ket{1_+2_+}+\\ket{1_-2_-}\\big)+\\frac{(\\Delta_2-\\Delta_1)t_-}{2t_c|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|}\\big(\\ket{1_+2_-}+\\ket{1_-2_+}\\big)\\\\&\\qquad\\qquad-\\frac{1}{2t_c}\\Big(t_+\\ket{1_+2_+}-t_-\\ket{1_+2_-}+t_-\\ket{1_-2_+}-t_+\\ket{1_-2_-}\\Big)\\Big]\\\\\n&\\ket{\\psi_3}=\\frac{1}{\\sqrt{2}t_c|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|}\\Big[(\\Delta_2-\\Delta_1)t_-\\big(\\ket{1_+2_+}+\\ket{1_-2_-}\\big)+(\\Delta_2+\\Delta_1)t_+\\big(\\ket{1_+2_-}+\\ket{1_-2_+}\\big)\\Big]\\\\\n&\\ket{\\psi_4}=\\Big[\\frac{(\\Delta_2+\\Delta_1)t_+}{2t_c|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|}\\big(\\ket{1_+2_+}+\\ket{1_-2_-}\\big)+\\frac{(\\Delta_2-\\Delta_1)t_-}{2t_c|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|}\\big(\\ket{1_+2_-}+\\ket{1_-2_+}\\big)\\\\&\\qquad\\qquad+\\frac{1}{2t_c}\\Big(t_+\\ket{1_+2_+}-
t_-\\ket{1_+2_-}+t_-\\ket{1_-2_+}-t_+\\ket{1_-2_-}\\Big)\\Big]\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:eigenstates}\n\\end{align*}\n\\end{widetext}\n\n\\subsection{Singlet Spin Configuration}\n\nWe now repeat the same calculation for a pair of electrons in a singlet spin configuration. In this case, the same six orbital states as in the triplet case are present, with the addition of four doubly occupied states $\\ket{1_{+\\upharpoonleft\\!\\downharpoonright}}$, $\\ket{1_{-\\upharpoonleft\\!\\downharpoonright}}$, $\\ket{2_{+\\upharpoonleft\\!\\downharpoonright}}$, and $\\ket{2_{-\\upharpoonleft\\!\\downharpoonright}}$, since both electrons can occupy the same valley and orbital state. Like above, we apply perturbation theory in $U^{-1}$, and calculate the Hamiltonian for the four lowest energy states, yielding:\n\n\\begin{widetext}\n\\begin{equation}\nH_S=\\begin{pmatrix}\n\\Delta_1+\\Delta_2-\\frac{2|t_-|^2+4|t_+|^2}{U}&-\\frac{2t_-t_+^*}{U}&-\\frac{2t_+t_-^*}{U}&-\\frac{2|t_-|^2}{U}\\\\\n-\\frac{2t_+t_-^*}{U}&\\Delta_1-\\Delta_2-\\frac{2|t_+|^2+4|t_-|^2}{U}&-\\frac{2|t_+|^2}{U}&-\\frac{2t_-t_+^*}{U}\\\\\n-\\frac{2t_-t_+^*}{U}&-\\frac{2|t_+|^2}{U}&-\\Delta_1+\\Delta_2-\\frac{2|t_+|^2+4|t_-|^2}{U}&-\\frac{2t_+t_-^*}{U}\\\\\n-\\frac{2|t_-|^2}{U}&-\\frac{2t_+t_-^*}{U}&-\\frac{2t_-t_+^*}{U}&-\\Delta_1-\\Delta_2-\\frac{2|t_-|^2+4|t_+|^2}{U}\n\\end{pmatrix}\n\\label{eqn:hs4}\n\\end{equation}\n\\end{widetext}\n\nDiagonalizing this matrix in the limit where $\\Delta_i\\ll t_c^2\/U$ gives the following energies:\n\n\\begin{align*}\n&E_1=0\\\\\n&E_2=-\\frac{4t_c^2}{U}-|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|\\\\\n&E_3=-\\frac{4t_c^2}{U}\\\\\n&E_4=-\\frac{4t_c^2}{U}+|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:es}\n\\end{align*}\n\nThe corresponding singlet case eigenstates are precisely the same as for the triplet case given by Eq. (\\ref{eqn:eigenstates}). 
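As a numerical consistency check on Eqs. (\\ref{eqn:hs4}) and (\\ref{eqn:es}), the singlet Hamiltonian can also be diagonalized directly. The sketch below is an illustration rather than part of the derivation: it assumes the parametrization $t_\\pm=t_c(1\\pm e^{i(\\phi_2-\\phi_1)})\/2$, which reproduces the relations $|t_+|^2+|t_-|^2=t_c^2$ and $2|t_+|\/t_c=\\sqrt{2+2\\cos(\\phi_2-\\phi_1)}$ used in the text, and arbitrary example values of $t_c$ and $U$. In the degenerate case $\\Delta_1=\\Delta_2=0$ the spectrum reduces to $\\{0,-J_0,-J_0,-J_0\\}$ with $J_0=4t_c^2\/U$, independent of the phase difference.

```python
import numpy as np

def singlet_hamiltonian(tc, U, phi, d1, d2):
    """4x4 singlet-sector Hamiltonian of Eq. (hs4) in the valley basis
    {|1+2+>, |1+2->, |1-2+>, |1-2->}.  Assumes the parametrization
    t_pm = tc*(1 +/- exp(i*phi))/2 with phi = phi_2 - phi_1."""
    tp = tc * (1 + np.exp(1j * phi)) / 2   # assumed form of t_+
    tm = tc * (1 - np.exp(1j * phi)) / 2   # assumed form of t_-
    ap, am = abs(tp) ** 2, abs(tm) ** 2
    ppm = tp * np.conj(tm)                 # t_+ t_-^*
    pmp = tm * np.conj(tp)                 # t_- t_+^*
    return np.array([
        [d1 + d2 - (2*am + 4*ap)/U, -2*pmp/U, -2*ppm/U, -2*am/U],
        [-2*ppm/U, d1 - d2 - (2*ap + 4*am)/U, -2*ap/U, -2*pmp/U],
        [-2*pmp/U, -2*ap/U, -d1 + d2 - (2*ap + 4*am)/U, -2*ppm/U],
        [-2*am/U, -2*ppm/U, -2*pmp/U, -d1 - d2 - (2*am + 4*ap)/U],
    ])

tc, U = 1.0, 50.0          # example values; only J0 = 4 tc^2 / U matters
J0 = 4 * tc ** 2 / U
for phi in (0.0, 0.7, np.pi / 2):
    E = np.linalg.eigvalsh(singlet_hamiltonian(tc, U, phi, 0.0, 0.0))
    # degenerate valleys: spectrum {-J0, -J0, -J0, 0}, matching Eq. (es)
    print(np.round(E / J0, 6))
```

For small nonzero $\\Delta_1=\\Delta_2$, the numerical eigenvalues shift by $0$ and $\\pm|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|$ within the degenerate manifold, as in Eq. (\\ref{eqn:es}).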
Note that $E_2$, $E_3$, and $E_4$ are less than the corresponding triplet energies; however, $E_1$ is greater than the corresponding triplet energy. In each case, the difference is $\\pm J_0$, where $J_0=4t_c^2\/U$ is the strength of the exchange interaction in an ideal system which does not have any valley degeneracy.\n\n\\section{System Dynamics}\n\nWe now investigate the dynamics of a system prepared in a specific initial state and allowed to evolve under the Hamiltonian for some time $t$. This is the coupled-qubit dynamics under gate operation, which underlies quantum computation. We show that when the initial state is not prepared with each electron in the same valley state, leakage between valley states will introduce error. If the valley splitting $\\Delta_i$ is small, this error can occur even if all electrons are initialized in the valley ground states $\\ket{i_-}$, provided the phases $\\phi_i$ differ from one another. We first examine the large valley-splitting limit where $\\Delta_i\\gg J_0$, followed by the small valley-splitting limit where $\\Delta_i\\ll J_0$.\n\n\\subsection{Large Valley Splitting}\n\nWhen $\\Delta_i\\gg t_c^2\/U$, the system dynamics is determined to leading order by the diagonal entries of Eqs. (\\ref{eqn:ht4}) and (\\ref{eqn:hs4}). The off-diagonal elements only affect the energies to order $J_0^2\/\\Delta_i$. If the system is prepared in one of the valley eigenstates, there will be an effective exchange interaction $J_+=4|t_+|^2\/U$ for valley states $\\ket{1_+2_+}$ and $\\ket{1_-2_-}$, and an effective exchange interaction $J_-=4|t_-|^2\/U$ for valley states $\\ket{1_+2_-}$ and $\\ket{1_-2_+}$. Thus, to leading order in $J_0\/\\Delta_i$, the presence of energetically separated valley states does not affect the dynamics of the system as long as the initial state is a valley eigenstate.\n\nIf not all electrons are initialized to valley eigenstates, then the time evolution of the system can give rise to spin-valley entanglement. 
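The two effective couplings above can be tabulated explicitly. The sketch below again assumes the illustrative parametrization $t_\\pm=t_c(1\\pm e^{i(\\phi_2-\\phi_1)})\/2$ and example values of $t_c$ and $U$; note that $J_++J_-=J_0$ for any phase difference, with all of the exchange residing in the valley-aligned channel when $\\phi_2=\\phi_1$.

```python
import numpy as np

def effective_couplings(tc, U, phi):
    """Effective exchange couplings J_+ = 4|t_+|^2/U (valley-aligned states)
    and J_- = 4|t_-|^2/U (valley-anti-aligned states), assuming the
    parametrization t_pm = tc*(1 +/- exp(i*phi))/2, phi = phi_2 - phi_1."""
    tp = tc * (1 + np.exp(1j * phi)) / 2
    tm = tc * (1 - np.exp(1j * phi)) / 2
    return 4 * abs(tp) ** 2 / U, 4 * abs(tm) ** 2 / U

tc, U = 1.0, 50.0            # example values
for phi in (0.0, np.pi / 2, np.pi):
    Jp, Jm = effective_couplings(tc, U, phi)
    # J+ + J- always equals J0 = 4 tc^2 / U
    print(f"phi={phi:.3f}: J+={Jp:.4f}, J-={Jm:.4f}, sum={Jp + Jm:.4f}")
```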
Consider, for example, an initial state where qubit 1 starts in the $\\ket{1_{+\\uparrow}}$ state, but qubit 2 starts in the state $(\\ket{2_{+\\downarrow}}+\\ket{2_{-\\downarrow}})\/\\sqrt{2}$. Then after some time $t$, the system will evolve to:\n\n\\begin{align*}\n\\ket{\\Psi(t)}&=\\frac{1}{2}\\ket{1_+2_+}\\Big(\\ket{T_0}+e^{iJ_+t}\\ket{S}\\Big)\\\\&+\\frac{1}{2}e^{it(2\\Delta_2+\\frac{J_+-J_-}{2})}\\ket{1_+2_-}\\Big(\\ket{T_0}+e^{iJ_-t}\\ket{S}\\Big)\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:timeevv2}\n\\end{align*}\n\nwhere $\\ket{T_0}$ and $\\ket{S}$ are the triplet and singlet spin states, respectively. Suppose for simplicity that $\\phi_1=\\phi_2$, and thus $J_-=0$ and $J_+=J_0$. Then the system reaches a maximally entangled state when $J_0t=(2k+1)\\pi$ for integer $k$:\n\n\\begin{align*}\n\\ket{\\Psi\\big(\\frac{(2k+1)\\pi}{J_0}\\big)}&=\\frac{1}{\\sqrt{2}}\\ket{1_+2_+}\\ket{\\downarrow\\uparrow}\\\\&+\\frac{1}{\\sqrt{2}}e^{it(2\\Delta_2+\\frac{J_+}{2})}\\ket{1_+2_-}\\ket{\\uparrow\\downarrow}\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\end{align*}\n\nBecause the electron in dot 1 was initialized in a valley eigenstate in this example, it remains in that state throughout the evolution of the system. Thus, the state at any particular point in time corresponds to a point on each of two Bloch spheres which represent the combined spin state (singlet or triplet) and the valley state of the second electron. This is not a one-to-one correspondence, as different entangled states can correspond to the same set of points; nevertheless, it is useful for visualizing the information stored in the spin and valley states and the entanglement between them.\n\nIn Fig. \\ref{fig:blochdelta}, we plot the path $\\ket{\\Psi(t)}$ traces on the two Bloch spheres. The path begins on the surface of both Bloch spheres, indicating that the initial state is separable. 
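The buildup of spin-valley entanglement along this path can be quantified directly. The following sketch (an illustration, with $\\phi_1=\\phi_2$ so that $J_-=0$ and $J_+=J_0$) constructs the state of Eq. (\\ref{eqn:timeevv2}) in the valley $\\otimes$ spin product basis and computes the von Neumann entropy of the reduced spin state, which vanishes at $t=0$ and reaches one bit, the maximum for a single qubit pair, at $J_0t=\\pi$.

```python
import numpy as np

def state(t, J0, Delta2):
    """State of Eq. (timeevv2) with phi_1 = phi_2, so J_- = 0 and J_+ = J0.
    Basis: valley of dot 2 {|2_+>, |2_->} tensor spin {|T0>, |S>};
    dot 1 remains in |1_+> and is factored out."""
    Jp, Jm = J0, 0.0
    theta = t * (2 * Delta2 + (Jp - Jm) / 2)
    psi = np.zeros(4, dtype=complex)
    psi[0:2] = 0.5 * np.array([1.0, np.exp(1j * Jp * t)])                 # |2_+> branch
    psi[2:4] = 0.5 * np.exp(1j * theta) * np.array([1.0, np.exp(1j * Jm * t)])  # |2_-> branch
    return psi

def spin_entropy(psi):
    """Von Neumann entropy (in bits) of the reduced spin density matrix."""
    M = psi.reshape(2, 2)            # rows: valley index, cols: spin index
    rho_spin = M.T @ M.conj()        # rho[s, s'] = sum_v M[v, s] M*[v, s']
    lam = np.linalg.eigvalsh(rho_spin)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

J0, Delta2 = 1.0, 5.0                # example values, Delta2 = 5 J0 as in the figure
print(spin_entropy(state(0.0, J0, Delta2)))         # separable initial state
print(spin_entropy(state(np.pi / J0, J0, Delta2)))  # maximal entanglement
```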
As time evolves to $t=\\pi\/J_0$, the path spirals toward the center of both Bloch spheres, indicating maximal spin-valley entanglement. In general, spin-valley entanglement is detrimental to quantum information applications, since measuring the spin state without the ability to measure the corresponding valley state can result in a complete loss of qubit information. Thus, it is imperative to initialize the system in valley eigenstates -- otherwise, information will leak away through spin-valley entanglement during the coupled-qubit dynamical evolution.\n\n\\begin{figure}[!htb]\n\t\\includegraphics[width=.4\\columnwidth]{spinsphere.pdf}\n\t\\includegraphics[width=.43\\columnwidth]{v2sphere.pdf}\n\t\\includegraphics[width=.4\\columnwidth]{spinpathv2.pdf}\n\t\\includegraphics[width=.43\\columnwidth]{valleypathv2.pdf}\n\t\\caption{{\\bf Top:} Bloch sphere representation of the spin qubit in the singlet-triplet basis and valley qubit of the second dot (the first dot will always be in the valley state $\\ket{1_+}$). {\\bf Bottom:} Time evolution of a state given by Eq. (\\ref{eqn:timeevv2}) for $t$ ranging from $0$ to $\\pi\/J_0$, with $\\Delta_2=5J_0$ and $\\phi_1=\\phi_2$. The $xy$ cross-sections of the Bloch spheres are plotted since the state stays entirely within the $xy$ planes.}\n\t\\label{fig:blochdelta}\n\\end{figure}\n\n\\subsection{Small Valley Splitting}\n\nWe now consider a situation where the valley splitting is small compared to the exchange interaction strength. In this regime the dynamics is dictated by the states and energies in Eqs. (\\ref{eqn:ht4}), (\\ref{eqn:eigenstates}), and (\\ref{eqn:hs4}). Because the singlet and triplet spin configurations share the same valley eigenstates, there is an effective exchange interaction $\\pm J_0$ depending on the specific valley states occupied. 
We show that unless the system is initialized with all electrons occupying the same valley state, the time evolution of the coupled system will result in spin-valley entanglement. Initializing the system in this way is difficult due to the near-degeneracy of the valley states. Additionally, if the phase difference $\\phi_2-\\phi_1$ between dots is nonzero, as is often the case, spin-valley entanglement will occur even if all electrons are initialized to their individual ground valley states. In fact, we are not aware of any experimental control capable of tuning the inter-valley phase difference $\\phi_2-\\phi_1$, which we see as a potential problem requiring a resolution for Si spin qubits to work in a large circuit with many operational qubits.\n\nTo demonstrate the spin-valley entanglement that can arise, consider an initial state $\\ket{1_{+\\uparrow}2_{+\\downarrow}}$. For simplicity, suppose $\\Delta_1=\\Delta_2=\\Delta$, and suppose there is a nonzero phase difference $\\phi_2-\\phi_1$. Then after some time $t$, the state will evolve to:\n\n\\begin{align*}\n\\ket{\\Psi(t)}&=\\frac{t_-^*}{2t_c}\\ket{\\psi_1}\\Big(e^{iJ_0t}\\ket{T_0}+\\ket{S}\\Big)\\\\&+\\frac{t_+^*}{2\\sqrt{2}t_c}\\bigg[\\Big(\\frac{t_c}{|t_+|}-1\\Big)e^{i\\frac{2|t_+|}{t_c}\\Delta t}\\ket{\\psi_2}\\\\&+\\Big(\\frac{t_c}{|t_+|}+1\\Big)e^{-i\\frac{2|t_+|}{t_c}\\Delta t}\\ket{\\psi_4}\\bigg]\\Big(\\ket{T_0}+e^{iJ_0t}\\ket{S}\\Big)\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:timeevd}\n\\end{align*}\n\nBy definition, $2|t_+|\/t_c=\\sqrt{2+2\\cos(\\phi_2-\\phi_1)}$, which equals $|\\tilde{\\Delta}_1+\\tilde{\\Delta}_2|\/\\Delta$ when $\\Delta_1=\\Delta_2$. In Fig. \\ref{fig:pathd} we plot the path on the Bloch sphere traced out by the total spin of $\\ket{\\Psi(t)}$. This path is independent of the value of $\\Delta$, and is confined to the $xy$-plane. 
The path given by the valley states is not confined to the $xy$-plane, but its projection into the $xy$-plane is a chord of the circle, the angle of which is determined by $\\phi$. In Fig. \\ref{fig:v1curvd}, we plot the full shape of the path, which lies in the plane defined by the specific $\\phi$-dependent chord extended in the $z$-direction. \n\n\\begin{figure}[!htb]\n\t\\includegraphics[width=.48\\columnwidth]{spinpathd.pdf}\n\t\\includegraphics[width=.48\\columnwidth]{v1projd.pdf}\n\t\\caption{{\\bf Left:} Time evolution of the spin state given by Eq. (\\ref{eqn:timeevd}) for $t$ ranging from $0$ to $2\\pi\/J_0$ plotted for different values of $\\phi$. The $xy$ cross-section of the Bloch sphere is shown since the path stays entirely within the $xy$ plane. These paths are independent of $\\Delta$. {\\bf Right:} Projection into the $xy$-plane of the valley state of dot 1 given by Eq. (\\ref{eqn:timeevd}), plotted for different values of $\\phi$. The projections of these paths are independent of $\\Delta$; however, the paths themselves do vary with $\\Delta$ as shown in Fig. \\ref{fig:v1curvd}. The plot for the valley state of dot 2 is identical, but reflected vertically.}\n\t\\label{fig:pathd}\n\\end{figure}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=.49\\columnwidth]{v1curvdp1.pdf}\n\t\\includegraphics[width=.49\\columnwidth]{v1curvdp2.pdf}\n\t\\includegraphics[width=.49\\columnwidth]{v1curvdp3.pdf}\n\t\\caption{Time evolution of the valley state of dot 1 given by Eq. (\\ref{eqn:timeevd}). The path which represents this state's time evolution lies entirely within the plane given by $x+y\\tan\\phi=1$. In these plots, the vertical direction corresponds to the $\\pm z$-direction, and the horizontal axis is the line given by $\\{x+y\\tan\\phi=1,z=0\\}$, which is the set of lines plotted in Fig. \\ref{fig:pathd}. 
The dashed circle represents the intersection of the Bloch sphere with the plane $x+y\\tan\\phi=1$ and is not necessarily of unit radius. The shape of the path depends only on the quantity $\\Delta\\sqrt{2+2\\cos\\phi}\/J_0$, and we give plots where this quantity equals 0.1, 0.2, and 0.3.}\n\t\\label{fig:v1curvd}\n\\end{figure}\n\nIn general, $\\ket{\\Psi(t)}$ is an entangled state with two exceptions. $\\ket{\\Psi(t)}$ is separable if $\\phi_1=\\phi_2$, as this makes $t_-=0$, causing the first term in Eq. (\\ref{eqn:timeevd}) to vanish. $\\ket{\\Psi(t)}$ also becomes separable when $t=k\\pi\/J_0$ (with integer $k$), as this causes the spin states in both terms of Eq. (\\ref{eqn:timeevd}) to become identical. Conversely, $\\ket{\\Psi(t)}$ becomes a maximally entangled state when $\\phi_2-\\phi_1=\\pi$ and $J_0t=(2k+1)\\pi\/2$, which causes Eq. (\\ref{eqn:timeevd}) to simplify to the following:\n\n\\begin{align*}\n&\\frac{1}{2\\sqrt{2}}\\Big(\\ket{1_+2_+}-\\ket{1_-2_-}\\Big)\\Big(i^{2k+1}\\ket{T_0}+\\ket{S}\\Big)\\\\+\\;&\\frac{1}{2\\sqrt{2}}\\Big(\\ket{1_+2_+}+\\ket{1_-2_-}\\Big)\\Big(\\ket{T_0}+i^{2k+1}\\ket{S}\\Big)\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:timeevsimp}\n\\end{align*}\n\nIt is instructive to consider the probability of recovering the initial spin state $\\ket{\\uparrow\\downarrow}$ when measuring the spin of $\\ket{\\Psi(t)}$ after some time (leaving the valley state unmeasured). From Eq. (\\ref{eqn:timeevd}), this probability is found to be given by:\n\n\\begin{equation}\nP\\big(\\ket{1_{+\\uparrow}2_{+\\downarrow}}\\rightarrow\\ket{\\uparrow\\downarrow}\\big)=\\frac{1}{2}\\big(1+\\cos J_0t\\big)\n\\end{equation}\n\nThis is precisely the same measurement probability as would result from time evolution in an ``ideal'' one-valley system. 
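This return probability can be checked by reconstructing $\\ket{\\Psi(t)}$ of Eq. (\\ref{eqn:timeevd}) explicitly from the eigenstates of Eq. (\\ref{eqn:eigenstates}). The sketch below is an illustration for $\\Delta_1=\\Delta_2=\\Delta$, again assuming the parametrization $t_\\pm=t_c(1\\pm e^{i(\\phi_2-\\phi_1)})\/2$ with $t_c$ set to $1$ and example values for the remaining parameters; the spin-up--down return probability comes out as $(1+\\cos J_0t)\/2$ independently of $\\phi$ and $\\Delta$.

```python
import numpy as np

def psi_t(t, phi, Delta, J0):
    """State of Eq. (timeevd) for Delta_1 = Delta_2 = Delta, returned as an
    amplitude array A[v, s] with valley basis v in {++, +-, -+, --} and
    spin basis s in {up-down, down-up}.  Assumes t_pm = (1 +/- e^{i phi})/2
    in units with t_c = 1 (so J0 = 4/U sets the time scale)."""
    tc = 1.0
    tp = tc * (1 + np.exp(1j * phi)) / 2
    tm = tc * (1 - np.exp(1j * phi)) / 2
    # eigenstates of Eq. (eigenstates), specialized to equal valley splittings
    psi1 = np.array([tm, -tp, tp, -tm]) / (np.sqrt(2) * tc)
    core = np.array([tp, -tm, tm, -tp]) / (2 * tc)
    sym = (tp / (2 * abs(tp))) * np.array([1, 0, 0, 1])
    psi2, psi4 = sym - core, sym + core
    w = 2 * abs(tp) * Delta / tc                     # valley precession frequency
    valley1 = (np.conj(tm) / (2 * tc)) * psi1
    valley24 = (np.conj(tp) / (2 * np.sqrt(2) * tc)) * (
        (tc / abs(tp) - 1) * np.exp(1j * w * t) * psi2
        + (tc / abs(tp) + 1) * np.exp(-1j * w * t) * psi4)
    def spin(a, b):                                  # a|T0> + b|S> in {ud, du}
        return np.array([a + b, a - b]) / np.sqrt(2)
    return (np.outer(valley1, spin(np.exp(1j * J0 * t), 1.0))
            + np.outer(valley24, spin(1.0, np.exp(1j * J0 * t))))

J0, phi, Delta = 1.0, 0.9, 0.02                      # example values
for t in np.linspace(0.0, 2 * np.pi / J0, 5):
    A = psi_t(t, phi, Delta, J0)
    p_ud = float(np.sum(np.abs(A[:, 0]) ** 2))       # P(up-down), valley unmeasured
    print(f"t={t:.2f}: P={p_ud:.4f}  vs  (1+cos J0 t)/2 = {(1+np.cos(J0*t))/2:.4f}")
```

The same reconstruction also reproduces the basis-dependent probability of Eq. (\\ref{eqn:py}) for the state $(\\ket{\\uparrow\\downarrow}+i\\ket{\\downarrow\\uparrow})\/\\sqrt{2}$.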
Differences occur when measuring in a different basis; for example, the probability of obtaining the state $(\\ket{\\uparrow\\downarrow}+i\\ket{\\downarrow\\uparrow})\/\\sqrt{2}$ is given by:\n\n\\begin{equation}\nP\\Big(\\ket{1_{+\\uparrow}2_{+\\downarrow}}\\rightarrow\\frac{\\ket{\\uparrow\\downarrow}+i\\ket{\\downarrow\\uparrow}}{\\sqrt{2}}\\Big)=\\frac{1}{2}-\\frac{1}{2}\\Big(1-\\frac{|t_-|^2}{t_c^2}\\Big)\\sin J_0t\n\\label{eqn:py}\n\\end{equation} \n\nIf $\\phi_1=\\phi_2$, meaning the two electrons start in the same valley states, then Eq. (\\ref{eqn:py}) reduces to $(1-\\sin J_0t)\/2$, which is the same result as would be given by an ideal system. However, if $\\phi_2-\\phi_1=\\pi$, then the two electrons start in opposite valley states, and Eq. (\\ref{eqn:py}) reduces to $P=1\/2$, independent of time. In this case, when $J_0t=(2k+1)\\pi\/2$, measuring the spin in any basis yields each outcome with probability $1\/2$, as the spin state is maximally entangled with the valley state.\n\nDespite the difference between Eq. (\\ref{eqn:py}) and the ideal case, spin-valley entanglement is difficult to observe in a system of two quantum dots. This is because it is difficult to directly measure in the basis containing the state $(\\ket{\\uparrow\\downarrow}+i\\ket{\\downarrow\\uparrow})\/\\sqrt{2}$. Generally, if a quantum algorithm would require such a measurement, the measurement would be performed by applying a $\\sqrt{SWAP}$ gate and then measuring in the $z$-basis, which in an ideal system would produce the same result. However, $\\sqrt{SWAP}$ gates are performed via the exchange interaction, which can disentangle a spin-valley entangled state. In fact, for two qubits, as long as states are initialized and measured in the $z$-basis, valley splitting will not affect the measurement probability to first order in $\\Delta\/J_0$. This is because the eigenstates given by Eq. (\\ref{eqn:eigenstates}) simultaneously diagonalize both the triplet and singlet Hamiltonians, Eqs. 
(\\ref{eqn:ht4}) and (\\ref{eqn:hs4}). Additionally, the energy difference between the corresponding triplet and singlet energies, Eqs. (\\ref{eqn:et}) and (\\ref{eqn:es}), is $\\pm J_0$ for every eigenstate. Simply measuring in the $z$-basis cannot distinguish between rotations by $+J_0$ or $-J_0$.\n\nOne possible way to make such a distinction between rotations is the following. Start with a state $\\ket{\\psi_1}$, and perform a partial rotation (perhaps $2\\pi\/3$) via the exchange interaction $J_0$. Then allow $\\ket{\\psi_1}$ to be transformed, by some method, into any of the other states $\\ket{\\psi_2}$ through $\\ket{\\psi_4}$. One way in which this might happen is to let the system precess under the valley splitting, but with no exchange interaction present (note that in this case $\\Delta\\ll J_0$ does not hold). Finally, complete the initial rotation from the first step. In an ideal one-valley case, the system will have undergone one complete rotation. However, in a system with valley splitting, the last part of the rotation will be in the opposite direction from the first part, and thus will not form a complete rotation. This will affect the measurement probabilities in the $z$-basis. However, it is not possible to complete the second step (rotating $\\ket{\\psi_1}$ into $\\ket{\\psi_2}$) in a system of two qubits while keeping $\\Delta\\ll J_0$ without adding additional terms to the Hamiltonian.\n\n\\section{Observing Valley Effects in 4-Qubit Systems}\n\nIn the previous discussion, we demonstrated that despite the spin-valley entanglement that occurs, the measurement probabilities in the $\\{\\uparrow,\\downarrow\\}$ basis would be unaffected by the presence of valley states in a two-dot system. However, we now show that in a 4-dot system this is no longer the case. 
We do this by giving two examples of sequences of operations which will result in a different measurement probability in the presence of valley states than the same operations would in an ideal one-valley system.\n\nConsider the fully degenerate case where $\\Delta_1=\\Delta_2=0$ (exactly the same sequences apply when $\\Delta_i$ are nonzero, but we consider the degenerate case for the sake of simplicity). Time evolution under the exchange interaction can be used to perform $\\sqrt{SWAP}$ gates. Consider an array of four quantum dots prepared in the initial state $\\ket{\\uparrow_-\\downarrow_+\\uparrow_+\\downarrow_-}$. In this example we only consider states with one electron confined to each dot, and therefore omit the dot numbers from our notation for brevity. Now perform the following operations to obtain $\\ket{\\Psi_{\\text{valley}}}$:\n\n\\begin{align*}\n\\ket{\\Psi_{\\text{valley}}}&=\\sqrt{SWAP_{23}}\\sqrt{SWAP_{12}}\\sqrt{SWAP_{34}}\\\\&\\times\\sqrt{SWAP_{23}}\\ket{\\uparrow_-\\downarrow_+\\uparrow_+\\downarrow_-}\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:seq1}\n\\end{align*} \n\nThen $\\ket{\\Psi_{\\text{valley}}}$ can be explicitly obtained, and is given 
by:\n\n\\begin{align*}\n&\\ket{\\Psi_{\\text{valley}}}=\\frac{1}{4}\\Big[\\ket{\\uparrow_+\\uparrow_-\\downarrow_+\\downarrow_-}-i\\ket{\\uparrow_+\\uparrow_-\\downarrow_-\\downarrow_+}\\\\&+2i\\ket{\\uparrow_-\\uparrow_+\\downarrow_+\\downarrow_-}+\\ket{\\uparrow_-\\uparrow_+\\downarrow_-\\downarrow_+}-\\ket{\\uparrow_+\\downarrow_-\\uparrow_-\\downarrow_+}\\\\&-i\\ket{\\uparrow_+\\downarrow_+\\uparrow_-\\downarrow_-}-i\\ket{\\uparrow_-\\downarrow_-\\uparrow_+\\downarrow_+}+\\ket{\\downarrow_+\\uparrow_-\\downarrow_-\\uparrow_+}\\\\&-i\\ket{\\downarrow_+\\downarrow_-\\uparrow_-\\uparrow_+}+\\ket{\\uparrow_-\\downarrow_-\\downarrow_+\\uparrow_+}+i\\ket{\\uparrow_-\\downarrow_+\\downarrow_-\\uparrow_+}\\\\&+i\\ket{\\downarrow_+\\uparrow_-\\uparrow_+\\downarrow_-}+\\ket{\\downarrow_+\\uparrow_+\\uparrow_-\\downarrow_-}\\Big]\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\end{align*}\n\nFrom this state, we calculate the measurement probability of the second dot being measured to be spin up, obtaining $P(2_\\uparrow)=5\/8$. Now consider the ideal one-valley case, where the same set of operations are performed on a system of four spins in the same initial spin configuration. Then the resulting state $\\ket{\\Psi_{\\text{ideal}}}$ will be given by:\n\n\\begin{align*}\n\\ket{\\Psi_{\\text{ideal}}}&=\\frac{1}{4}\\Big[(2+i)\\ket{\\uparrow\\ua\\downarrow\\da}-(1+2i)\\ket{\\uparrow\\downarrow\\uparrow\\downarrow}+\\ket{\\downarrow\\uparrow\\downarrow\\uparrow}\\\\&-i\\ket{\\downarrow\\da\\uparrow\\ua}+(1+i)\\ket{\\uparrow\\downarrow\\da\\uparrow}+(1+i)\\ket{\\downarrow\\uparrow\\ua\\downarrow}\\Big]\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\end{align*}\n\nIn the ideal case, the probability of measuring the second spin to be up is $P(2_\\uparrow)=1\/2$, which is different from the case above where two valley states are present. 
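This calculation can be reproduced numerically. One convenient way to model the fully degenerate, equal-phase case is to note that the superexchange then treats spin and valley symmetrically, so each exchange gate acts as the square root of the full-state swap of the two electrons (spin and valley exchanged together); restricting to two levels per dot ($d=2$) recovers the ideal one-valley case. The sketch below is an illustration of this model, not an excerpt of the actual calculation.

```python
import numpy as np

def sqrt_swap(d, i, j, n=4):
    """Square root of the full-state SWAP of particles i and j (d levels
    each) on an n-particle register: (1+i)/2 * I + (1-i)/2 * SWAP_ij."""
    dim = d ** n
    P = np.zeros((dim, dim))
    for idx in range(dim):
        digits = list(np.unravel_index(idx, (d,) * n))
        digits[i], digits[j] = digits[j], digits[i]
        P[np.ravel_multi_index(tuple(digits), (d,) * n), idx] = 1.0
    return (1 + 1j) / 2 * np.eye(dim) + (1 - 1j) / 2 * P

def run_sequence(state, gates, d, n=4):
    for (i, j) in gates:       # gates listed in order of application
        state = sqrt_swap(d, i, j, n) @ state
    return state

def basis_state(digits, d):
    v = np.zeros(d ** len(digits), dtype=complex)
    v[np.ravel_multi_index(tuple(digits), (d,) * len(digits))] = 1.0
    return v

def prob_spin_up(state, particle, d, n=4):
    """Probability that `particle` is measured spin-up, valley unmeasured.
    Convention: level k carries spin up if k < d // 2."""
    total = 0.0
    for idx in range(d ** n):
        if np.unravel_index(idx, (d,) * n)[particle] < d // 2:
            total += abs(state[idx]) ** 2
    return total

# Eq. (seq1), rightmost factor applied first: 23, 34, 12, 23 (0-indexed)
seq1 = [(1, 2), (2, 3), (0, 1), (1, 2)]

# ideal one-valley case: d=2 with up=0, down=1; initial |up down up down>
ideal = run_sequence(basis_state([0, 1, 0, 1], 2), seq1, d=2)
print(prob_spin_up(ideal, particle=1, d=2))      # 1/2

# two-valley case: d=4 with (up,+)=0, (up,-)=1, (down,+)=2, (down,-)=3;
# initial |up_- down_+ up_+ down_->
valley = run_sequence(basis_state([1, 2, 0, 3], 4), seq1, d=4)
print(prob_spin_up(valley, particle=1, d=4))     # 5/8
```

When all four electrons are initialized in the same valley state, the same simulation returns the ideal value $1\/2$, consistent with the discussion above.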
This discrepancy arises from the fact that in the presence of valley states, there is a distinction between certain states that would be considered identical in the ideal case (e.g. $\\ket{\\uparrow_+\\uparrow_-\\downarrow_+\\downarrow_-}$ and $\\ket{\\uparrow_-\\uparrow_+\\downarrow_-\\downarrow_+}$). This distinction prevents constructive or destructive interference between the states, which influences the final measurement probabilities (in the ideal case both states are $\\ket{\\uparrow\\ua\\downarrow\\da}$ so their amplitudes should add together). This may have important implications for Si qubits. In order to perform correct calculations, all dots must be initialized to the same valley state, and the opposite valley state should be considered a leakage state. The presence of a phase difference between dots $\\phi_2-\\phi_1$ (leading to a $t_-$ term) also introduces leakage into the system. However, the leakage states behave almost like the ideal states for small systems, and in fact they are indistinguishable for systems of only two dots, as we have shown in the previous section. This makes errors resulting from leakage difficult to detect in small systems and small gate sequences. Thus, fidelities of two-qubit gates measured in two-qubit systems may appear higher than their true value in multiqubit circuits, because these measurements cannot account for leakage into other valley states without applying a larger sequence of gates such as that of Eq. (\\ref{eqn:seq1}) to a larger number of dots.\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=.5\\columnwidth]{phiplot1.pdf}\n\t\\caption{Probability that dot 2 will be measured as $\\uparrow$ after the sequence of gates shown in Eq. (\\ref{eqn:seq1}) is applied to the initial state $\\ket{\\uparrow_+\\downarrow_+\\uparrow_+\\downarrow_+}$. Here $\\Delta_i=0$, $\\phi_1=\\phi_2=\\phi_3=0$, and $\\phi_4$ varies.}\n\t\\label{fig:seq1}\n\\end{figure}\n\nThe sequence of gates given in Eq. 
(\\ref{eqn:seq1}) is a short sequence of gates which yields a different measurement outcome in an ideal system than in a system with multiple valley states. This effect is present for any initial state where the electrons with the same spin do not all have the same valley state. Note that the initial state depends on the values of $\\phi_i$, which can only be controlled by tuning the valley phases themselves. In Fig. \\ref{fig:seq1} we plot the measurement probability of $\\ket{2_\\uparrow}$ versus the value of $\\phi_4$, with $\\phi_1=\\phi_2=\\phi_3=0$. When $\\phi_4$ also equals 0, the initial valley states of all electrons are the same, and the measurement probability is the same as in the ideal case. As $\\phi_4$ varies away from 0, the measurement probability increases. This sequence of gates can be used to demonstrate the presence of valley-induced error, since if the state $\\ket{2_\\uparrow}$ is measured with probability greater than $1\/2$, this is a result of valley-induced error. However, this sequence of gates cannot be used to show the converse in noisy systems, because if $\\ket{2_\\uparrow}$ is measured with probability $1\/2$, the result is indistinguishable from noise-induced decoherence.\n\nWe now give a different sequence of gates which has a measurement probability of 1 in the ideal case, and thus can be used to demonstrate initialization of electrons in the same valley state. Consider a ring of 4 quantum dots with the initial state $\\ket{\\uparrow_-\\downarrow_+\\uparrow_+\\downarrow_-}$. 
Perform the following sequence of $\\sqrt{SWAP}$ gates:\n\n\\begin{align*}\n&\\ket{\\Psi_{\\text{valley}}}=\\sqrt{SWAP_{23}}\\sqrt{SWAP_{14}}\\sqrt{SWAP_{13}}\\\\&\\times\\sqrt{SWAP_{24}}\\sqrt{SWAP_{12}}\\sqrt{SWAP_{34}}\\ket{\\uparrow_-\\downarrow_+\\uparrow_+\\downarrow_-}\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\label{eqn:seq2}\n\\end{align*} \n\nFor this sequence of gates, $\\ket{\\Psi_{\\text{valley}}}$ is given by:\n\n\\begin{align*}\n&\\ket{\\Psi_{\\text{valley}}}=\\frac{1}{4}\\Big[\\ket{\\uparrow_+\\uparrow_-\\downarrow_-\\downarrow_+}-\\ket{\\uparrow_-\\uparrow_+\\downarrow_+\\downarrow_-}\\\\&+\\ket{\\downarrow_+\\downarrow_-\\uparrow_-\\uparrow_+}-\\ket{\\downarrow_-\\downarrow_+\\uparrow_+\\uparrow_-}+\\ket{\\uparrow_+\\downarrow_-\\downarrow_+\\uparrow_-}\\\\&-\\ket{\\uparrow_-\\downarrow_+\\downarrow_-\\uparrow_+}-\\ket{\\downarrow_+\\uparrow_-\\uparrow_+\\downarrow_-}+\\ket{\\downarrow_-\\uparrow_+\\uparrow_-\\downarrow_+}\\\\&+2\\ket{\\downarrow_+\\uparrow_+\\downarrow_-\\uparrow_-}+2\\ket{\\downarrow_-\\uparrow_-\\downarrow_+\\uparrow_+}\\Big]\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\end{align*}\n\nThe corresponding ideal state $\\ket{\\Psi_{\\text{ideal}}}$ is simply:\n\n\\begin{align*}\n\\ket{\\Psi_{\\text{ideal}}}=\\ket{\\downarrow\\uparrow\\downarrow\\uparrow}\n\\addtocounter{equation}{1}\\tag{\\theequation}\n\\end{align*}\n\nWith this sequence of gates, the ideal single-valley case will have a measurement probability of $100\\%$, but the corresponding two-valley case will only have a measurement probability of $75\\%$, simulating an apparent decoherence although it arises simply from the inevitable spin-valley entanglement that is omnipresent in multi-valley qubits. \n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=.5\\columnwidth]{phiplot2.pdf}\n\t\\caption{Probability that dot 2 will be measured as $\\uparrow$ after the sequence of gates shown in Eq. 
(\\ref{eqn:seq2}) is applied to the initial state $\\ket{\\uparrow_+\\downarrow_+\\uparrow_+\\downarrow_+}$. Here $\\Delta_i=0$, $\\phi_1=\\phi_2=\\phi_3=0$, and $\\phi_4$ varies.}\n\\end{figure}\n\n\\begin{figure}[!htb]\n\t\\centering\n\t\\includegraphics[width=.5\\columnwidth]{dplot1.pdf}\n\t\\caption{Probability that dot 2 will be measured as $\\uparrow$ after the sequence of gates shown in Eq. (\\ref{eqn:seq2}) is applied to the initial state $\\ket{\\uparrow_+\\downarrow_+\\uparrow_+\\downarrow_+}$. Here $\\phi_1=\\phi_2=\\phi_3=0$, $\\phi_4=\\pi\/2$, and $\\Delta$ varies.}\n\\end{figure}\n\n\\section{Conclusion}\n\nWe used an effective Hubbard model to investigate the effects of valley states in silicon quantum dots in the context of exchange gate operations in multiqubit systems at zero temperature. We first considered a system of two exchange-coupled quantum dots, and determined the eigenstates and energies for both the singlet and triplet spin configurations to leading order in $t_c^2\/U$. We considered the limits where $\\Delta\\gg J_0$ and $\\Delta\\ll J_0$, and found that in both cases the singlet and triplet spin configurations share the same eigenstates, though their energies differ. When the valley splitting is large, any state not initialized to valley eigenstates will lead to spin-valley entanglement as the system evolves. However, as long as the valley splitting is large and the system is initialized to the valley ground state, the evolution of the system will not be affected to leading order in $J_0\/\\Delta$. Thus, valley degrees of freedom are not problematic for exchange gate operations as long as the valley splitting $\\Delta$ is sufficiently large. For small valley splitting or completely degenerate valley states, spin-valley entanglement will arise any time the electrons are not initialized to the same valley state. 
Without the ability to perform valley state measurements, spin-valley entanglement will obfuscate any information stored in the electron spin states, making it impossible to perform quantum computations in such a state. For a two-qubit system with $\\Delta\\ll J_0$, if all spins are initialized in $z$-eigenstates and measured in the $z$-basis, then the resulting measurement probabilities will be the same as in an ideal single-valley system. However, this does not extend to systems with more qubits, as spin-valley entanglement can introduce errors despite states being initialized and measured in the $z$-basis. This means that two-qubit gate fidelities measured by performing a single gate in a two-qubit system will give deceptively high fidelity results, because the presence of valley states does not affect the fidelity measurement even though it can affect operations in systems with more qubits. Demonstrating robustness to spin-valley entanglement without the ability to measure valley states directly requires a longer series of gates in a system with more qubits. We emphasize that our use of the Hubbard model is not an approximation here since the coupled spin qubit system is indeed equivalent to the Hubbard model where the Hubbard interaction $U$ is simply related to the exchange coupling $J_0$ between the dots through $U=4t_c^2\/J_0$, where $t_c$ is the inter-dot hopping energy from wavefunction overlap. Our fundamental finding of an apparent quantum leakage (or effective decoherence) due to spin-valley entanglement is also independent of additional complications arising from varying exchange couplings and\/or inter-dot hopping through the circuit -- all they do is complicate the expressions for the leakage, but the basic physics of spin-valley entanglement remains the same.\n\nWe note that our considerations on spin-valley entanglement apply equally well to all-exchange gate operations considered in Ref. 
\\onlinecite{DiVincenzoNAT2000} -- anytime the inter-dot exchange coupling is used to carry out gate operations, the valley-spin entanglement (and potential decoherence to valley states) studied in this work becomes relevant. Right now, Si qubit platforms have rather small exchange couplings, and the problem discussed in this work is most likely not crucial to the current generation of few-qubit systems. But faster gate operations in multiqubit circuits will necessitate larger exchange coupling strengths in the future, making our dynamical considerations relevant, as one must ensure that the valley splitting is not only much larger than the electron temperature in the qubits, but also much larger than the inter-dot exchange coupling used in the gate operations.\n\n\\acknowledgements\n\nThis work is supported by the Laboratory for Physical Sciences.