diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzaqkz" "b/data_all_eng_slimpj/shuffled/split2/finalzzaqkz" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzaqkz" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nMany recent studies have been done on the construction, behavior, and performance of reservoir computers. Two popular reservoir structures are the echo state network (ESN) and the single nonlinear node with delay line~\\cite{appeltantetal}. Jaeger provided a mathematical description and analysis of echo states, in which case the reservoir is assumed to have a finite number of randomly interconnected internal nodes~\\cite{jaeger}. While much work has been done to construct, test, and analyze single-node reservoir schemes~\\cites{appeltant,appeltantetal,paquot,duport,brunner}, there remains a lack of understanding and an absence of a rigorous mathematical analysis of the underlying dynamics of such a system.\n\nWe seek to provide some mathematical insight into the dynamics of the single-node reservoir computer. This analysis is done in two parts. Section \\ref{results1} describes how the variation in outputs (in the sense of the $\\ell^2$ norm) is bounded above by a constant multiple of the variation of time-series input vectors. To this end, we draw on basic mathematical tools to prove something of a Lipschitz continuity condition on the entire system:\n\\newtheorem*{main_result}{Theorem \\ref{main_result}}\n\\begin{main_result}\nSuppose $f:\\mathbb{R}\\rightarrow \\mathbb{R}$ is $L$-Lipschitz with $\\alpha L < \\nicefrac{1}{\\sqrt{2}},$ and define $y_u:\\mathbb{Z}_{\\geq 0} \\rightarrow \\mathbb{R}$ by $y_u(t)=\\sum_{k=0}^{N} w_k x_k^{(u)}(t)$ for an input spike train $u\\in\\mathbb{R}^M.$ Then\n\\begin{center}\n$\\| y_u-y_v \\|_{\\ell^2(\\mathbb{R}^M)}^2 \\leq |w|^2 M(L\\beta)^2 (N+1) \\left(1+\\frac{2}{1-2\\alpha^2 L^2} \\right) \\| u-v \\|_{\\ell^2(\\mathbb{R}^M)}^2$\n\\end{center}\nfor all inputs $u,v.$\n\\end{main_result}\n\nSection \\ref{classification_section} provides a translation of some well known classification metrics~\\cites{goodman,gibbons} to the context of the single-node with delay line reservoir structure. That section also contains a brief review and analysis of the importance of selecting an injective nonlinear sigmoid function for the single nonlinear input node. In particular, we show that classification mishaps are possible if the injectivity condition is disregarded, or if the input data are not properly pre-conditioned.\n\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\\label{preliminaries}\n\n\\subsection{Notation and definitions}\\label{notation_subsection}\n\nLet $t\\in \\mathbb{Z}_{\\geq 0}$ denote discrete time and let $\\alpha, \\beta \\in (0,1)$. An \\textit{input} is a vector $u \\in \\mathbb{R}^m$ where the $0$th entry is defined as $u(0)=0.$ Since we only consider reservoirs with a single nonlinear node and delay line, the term \\textit{reservoir} will be reserved for that structure.\n\\begin{definition}\\label{resdef}\nLet $f:\\mathbb{R}\\rightarrow\\mathbb{R}$ be non-linear. 
A \\textit{reservoir of length} $N$, \\textit{nonlinearity} $f$, \\textit{input gain} $\\beta,$ \\textit{and feedback gain} $\\alpha,$ denoted $X(N,f,\\beta,\\alpha),$ is a set of $N$ functions $X=\\{x_1,x_2,\\dots,x_N\\},$ where each $x_k:\\mathbb{Z}_{\\geq 0} \\rightarrow \\mathbb{R}$ is defined by the relations \\begin{equation}\\label{resdefeqn}\n\\begin{split}\nx_k(0) &= 0~\\text{for}~1\\leq k\\leq N \\\\\nx_1(t) &= f(\\alpha x_N(t-1) + \\beta u(t))~\\text{for}~t\\geq 1\\\\\nx_{k+1}(t) &= x_k(t-1)~\\text{for}~1\\leq k\\leq N-1~\\text{and}~t\\geq 1\n\\end{split}\n\\end{equation}\nWe will have occasion to use a modified version of Definition \\ref{resdef} in which $x_1(t)= f(\\alpha x_N(t) + \\beta u(t))$ instead of $x_1(t)= f(\\alpha x_N(t-1) + \\beta u(t))$ in \\eqref{resdefeqn}.\n\\end{definition}\nCall $x(t)=\\{x_1(t),x_2(t),\\dots,x_N(t)\\}$ the \\textit{state of the reservoir at time} $t.$ A vector\\/set of \\textit{weights} is $(w_1,w_2,\\dots,w_N)\\in \\mathbb{R}^N,$ where $w_k$ is the weight of the ``node'' $x_k.$ Given a set of weights, the \\textit{output of the reservoir at time $t$} is defined as the sum of weighted node states $y(t) = \\sum_{k=1}^N w_k x_k(t).$ Suppose we are working with a given data set of (masked) inputs, call it $\\mathcal{\\tilde{U}}$. Put $M=\\max\\{m:u\\in\\mathbb{R}^m\\}$. The respective reservoir outputs will be considered over the interval of discrete time $[1,M].$ In order to compare two inputs $u$ and $v$ it is convenient to simply extend all input vectors of dimension less than $M$. To this end, if $u\\in\\mathbb{R}^m$ and if $m<M,$ we extend $u$ by setting $u(t)=0$ for $m<t\\leq M.$\n\nWeights may be trained by means of the \\textit{Dantzig selector}, which is defined as follows. Let $\\delta>0,~ X\\in M_{n\\times p}(\\mathbb{R}),$ and $y\\in \\mathbb{R}^n.$ Let $D$ be a diagonal matrix where $D(k,k)$ is the $\\ell^2$ norm of the $k$th column of $X.$ The Dantzig selector, denoted $\\hat{\\beta}\\in\\mathbb{R}^p$, solves the following optimization problem:\n\\begin{equation*}\n\\hat{\\beta}\\in \\text{argmin}\\{\\|\\beta\\|_1 : \\|D^{-1} X^T(X\\beta - y)\\|_{\\infty} \\leq \\delta\\}.\n\\end{equation*}\nThis concept can be used to train reservoir node weights by storing the states of the reservoir over time as the columns of $X$; that is, $X(:,t)=X(t)$.\n\n\\section{An upper bound on input $\\mapsto$ output error}\\label{results1}\n\nTo ensure that an input $\\mapsto$ output system is reasonably accurate, it is desirable to have a growth condition that places an upper bound on the distortion of distances between two outputs with respect to the distance between their corresponding inputs. The following is a classical definition that describes such a condition.\n\n\\begin{definition}\\label{Lipschitz_definition}\nSuppose $X$ and $Y$ are metric spaces and $L\\geq 0.$ A mapping $f:X\\rightarrow Y$ is \\textit{$L$-Lipschitz} if\n\\begin{equation*}\nd_Y(f(z),f(w)) \\leq L d_X(z,w)\n\\end{equation*}\nfor all $z,w\\in X.$ The function $f$ is \\textit{Lipschitz} if it is $L$-Lipschitz for some $L\\geq 0.$\n\\end{definition}\n\\noindent If $f:\\mathbb{R}\\rightarrow\\mathbb{R}$ is differentiable, then $f'$ is bounded if and only if $f$ is Lipschitz. In fact, if $f:\\mathbb{R}\\rightarrow \\mathbb{R}$ is any Lipschitz function, then $f$ is absolutely continuous, and is differentiable except on a set $E\\subset \\mathbb{R}$ of Lebesgue measure $0.$ Definition \\ref{Lipschitz_definition} says that $f$ can only increase distances by a bounded factor.\n\nIn this section we assume that there is a specific task at hand, and that target outputs have been defined and used to train a set of weights.
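\nAs a concrete illustration of this training step, the following Python fragment simulates a reservoir according to Definition \\ref{resdef} and fits a weight vector with the Dantzig selector written as a linear program. It is meant only as a sketch: the function names, the choice $f=\\tanh$, the solver (\\texttt{scipy.optimize.linprog}), the parameter values, and the convention that the design matrix passed to the selector is $X^T$ (states stored as the columns of $X$) are illustrative assumptions rather than part of the formal development above.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef reservoir_states(u, N, f, beta, alpha):\n    # x_1(t) = f(alpha*x_N(t-1) + beta*u(t)),  x_{k+1}(t) = x_k(t-1),  x_k(0) = 0\n    M = len(u) - 1                      # u is indexed u[0..M] with u[0] = 0\n    X = np.zeros((N, M + 1))            # column t holds the state x(t)\n    for t in range(1, M + 1):\n        X[0, t] = f(alpha * X[N - 1, t - 1] + beta * u[t])\n        X[1:, t] = X[:N - 1, t - 1]     # delay-line shift\n    return X[:, 1:]                     # columns x(1), ..., x(M)\n\ndef dantzig_selector(A, y, delta):\n    # min ||b||_1  subject to  ||D^{-1} A^T (A b - y)||_inf <= delta,\n    # solved as a linear program in the variables (b, t) with |b_i| <= t_i.\n    n, p = A.shape\n    D_inv = 1.0 / np.linalg.norm(A, axis=0)\n    G = D_inv[:, None] * (A.T @ A)      # D^{-1} A^T A\n    h = D_inv * (A.T @ y)               # D^{-1} A^T y\n    c = np.concatenate([np.zeros(p), np.ones(p)])\n    I, Z = np.eye(p), np.zeros((p, p))\n    A_ub = np.block([[I, -I], [-I, -I], [G, Z], [-G, Z]])\n    b_ub = np.concatenate([np.zeros(2 * p), delta + h, delta - h])\n    res = linprog(c, A_ub=A_ub, b_ub=b_ub,\n                  bounds=[(None, None)] * p + [(0, None)] * p, method='highs')\n    return res.x[:p]\n\n# toy usage: train weights so that y(t) approximates the input u(t)\nu = np.concatenate([[0.0], np.random.default_rng(0).uniform(-1, 1, 50)])\nX = reservoir_states(u, N=20, f=np.tanh, beta=0.5, alpha=0.4)\nw = dantzig_selector(X.T, u[1:], delta=0.1)\ny_out = X.T @ w                         # outputs y(t) = sum_k w_k x_k(t)\n\\end{verbatim}\nIn this sketch the constraint of the Dantzig selector is encoded with auxiliary variables $t_i\\geq |b_i|$, so that minimizing $\\sum_i t_i$ minimizes the $\\ell^1$ norm of the weight vector.\n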
It is also assumed that for the set of inputs $\\mathcal{U}$, each $u\\in\\mathcal{U}$ has been standardized to have dimension $M$ as described in Subsection \\ref{notation_subsection}. In other words, we only consider inputs over the discrete time interval $[1,M]\\cap \\mathbb{N}$.\n\nFor the purpose of classification in a reservoir system, it is desirable that the distance between two outputs is controlled by the distance between their respective inputs. The Lipschitz condition clearly describes this property. The main result in this section is that the function $u\\mapsto y_u$ is Lipschitz, where the Lipschitz constant depends on $N,\\alpha, \\beta,$ and $M.$ This result does not appear to be optimal, however, since the Lipschitz constant is directly proportional to the dimension of the input vectors and the number of nodes in the reservoir. Recall Definition \\ref{resdef}:\n\n\\begin{definition}\\label{1} For a system with $N+1$ nodes and input $u,$\n\\begin{equation}\n\\begin{split}\nx_0(t) &= f( \\alpha x_N(t-1) + \\beta u(t)) \\\\\nx_{n+1}(t) &= x_n(t-1),~~n=0,\\dots,N-1 \\\\\nx_n(0) &= 0\\\\\nx_0(1) &= f(0)\\\\\n\\end{split}\n\\end{equation}\n\\end{definition}\n\n The following couple of modest observations will be useful for proving Theorem \\ref{main_result}.\n\n\\begin{fact}\\label{2}\nIf $0\\leq t \\leq N$ then $x_N(t)=0$ and $x_N(N+t)=f(\\beta u(t))$.\n\\end{fact}\n\n\\begin{corollary}\\label{3}\nIf $t\\in [1,N]\\cap \\mathbb{Z}$ then $x_0(t) = f(\\beta u(t))$.\n\\end{corollary}\n\\begin{proof}\nSince $t-1\\leq N,$ Definition \\ref{1} and Fact \\ref{2} yield\n\\begin{equation*}\n\\begin{split}\nx_0(t) &= f(\\alpha x_N(t-1) + \\beta u(t))\\\\\n&= f(\\alpha x_{N-t+1}(0) + \\beta u(t))\\\\\n&= f(\\beta u(t)). \\qedhere\n\\end{split}\n\\end{equation*}\n\\end{proof}\n\nIt would be convenient to have a Lipschitz continuity condition on the function $u(t)\\mapsto y_u(t)$ where $t$ is fixed. This idea seems naive, however, since we do not expect the difference of reservoir outputs $|y_u(t)-y_v(t)|$ to depend entirely on the difference in one entry of the inputs $|u(t)-v(t)|.$ That is, we do not know how to capture the behavior of the reservoir simply by looking at the input difference at a particular time step. An approach to predicting classification accuracy involves the concept of ``separation\" of inputs and reservoir states, which will be discussed in Section \\ref{classification_section}.\n\n\\begin{theorem}\\label{main_result}\nSuppose $X(N,f,\\beta,\\alpha)$ is a reservoir and $f:\\mathbb{R}\\rightarrow \\mathbb{R}$ is $L$-Lipschitz with $\\alpha L < \\nicefrac{1}{\\sqrt{2}}.$ Then for two inputs $u$ and $v,$\n\\begin{center}\n$\\| y_u-y_v \\|_{\\ell^2(\\mathbb{R}^M)}^2 \\leq |w|^2 M(L\\beta)^2 (N+1) \\left(1+\\frac{2}{1-2\\alpha^2 L^2} \\right) \\| u-v \\|_{\\ell^2(\\mathbb{R}^M)}^2.$\n\\end{center}\n\\end{theorem}\n\\begin{remark}\nThe bars $\\|\\cdot\\|$ indicate the length of the vector as a time series, whereas the bars $|\\cdot|$ are used for all other lengths.\n\\end{remark}\n\\begin{proof}\nFor all $t,$\n\\begin{equation}\n\\begin{split}\\label{4}\n|y_u(t)-y_v(t)| &= \\left|\\sum_{i=0}^N w_i (x_i^{(u)}(t) - x_i^{(v)}(t))\\right| \\\\\n&\\leq \\sum_{i=0}^N |w_i| \\cdot |x_i^{(u)}(t) - x_i^{(v)}(t)|.\n\\end{split}\n\\end{equation}\n\nIf $t=0$ there is nothing to prove. 
We will deal with special time intervals and achieve the above estimation on each interval.\n\n\\begin{enumerate}[(I)]\n\\item ${1\\leq t \\leq N+1}$:\\\\\n\\begin{center}\n$x_0(t) = f(\\alpha x_N(t-1) + \\beta u(t)) = f(\\beta u(t))$ \\\\\n\\begin{displaymath}\n x_1(t)=\\left\\{\n \\begin{array}{lr}\nf(\\beta u(t-1)) & ,~ t\\geq 2 \\\\\n0 & ,~ t=1\n\\end{array}\n\\right.\n\\end{displaymath}\n\\\\\n\\begin{displaymath}\nx_2(t)=\\left\\{\n\\begin{array}{lr}\nf(\\beta u(t-2)) & ,~ t\\geq 3 \\\\\n0 & ,~ t=1,2\n\\end{array}\n\\right.\n\\end{displaymath}\n\\vdots \\\\\n\\begin{displaymath}\nx_i(t) = \\left\\{\n\\begin{array}{lr}\nf(\\beta u(t-i)) & ,~ t\\geq i+1\\\\\n0 & ,~ t\\leq i\n\\end{array}\n\\right.\n\\end{displaymath}\n\\end{center}\n\\end{enumerate}\n\\end{proof}\n\n\\section{Classification}\\label{classification_section}\n\nAs discussed at the end of Section \\ref{results1}, one approach to predicting classification accuracy involves the ``separation'' of inputs and of the corresponding reservoir states. Ideally, there would exist a constant $C>0$ such that\n\\begin{equation}\\label{inverse_lip}\n|x^{(u)}(t)-x^{(v)}(t)| \\geq C|u(t)-v(t)|\n\\end{equation}\nfor all pairs $u,v\\in\\mathcal{U}$ such that $|u(t)-v(t)|$ is ``large.'' For the sake of simplicity, let us say that \\eqref{inverse_lip} should hold whenever $|u(t)-v(t)|\\geq 1.$\n\n\\subsection{Injectivity and periodicity of $f$}\nIf the chosen nonlinear function $f$ is injective (which is fairly typical considering the common usage of $f=\\tanh$), then an inequality like \\eqref{inverse_lip} is not outside the realm of possibility, since the function $u(t)\\mapsto x^{(u)}(t)$ is also injective.\n\n\\begin{fact}\nLet $X(N,f,\\beta,\\alpha)$ be a reservoir where $f$ is injective. If two inputs $u$ and $v$ are such that $u(t)\\neq v(t),$ then $x^{(u)}(t)\\neq x^{(v)}(t).$\n\\end{fact}\n\\begin{proof}\nSuppose $u(t_0)\\neq v(t_0)$ and assume $x^{(u)}(t_0)=x^{(v)}(t_0).$ Put $t_1=\\min\\{t:u(t)\\neq v(t)\\}$. By Definition \\ref{resdef},\n\\begin{equation}\nx_1^{(u)}(t_1) = f(\\alpha x_N^{(u)}(t_1 -1)+\\beta u(t_1)),\n\\end{equation}\nand since $u(t)=v(t)$ for all $t<t_1,$ we have $x_N^{(u)}(t_1-1)=x_N^{(v)}(t_1-1),$ so that\n\\begin{equation}\n\\begin{split}\\label{22}\nx_1^{(v)}(t_1) &= f(\\alpha x_N^{(v)}(t_1 -1)+\\beta v(t_1)) \\\\\n&= f(\\alpha x_N^{(u)}(t_1 -1)+\\beta v(t_1)).\n\\end{split}\n\\end{equation}\nThen $x_1^{(u)}(t_1)=x_1^{(v)}(t_1)$ since $x^{(u)}(t_0)=x^{(v)}(t_0),$ so it follows from \\eqref{22} and the injectivity of $f$ that\n\\begin{equation}\\label{23}\n\\alpha x_N^{(u)}(t_1 -1)+\\beta u(t_1) = \\alpha x_N^{(u)}(t_1 -1)+\\beta v(t_1).\n\\end{equation}\nBy \\eqref{23} it is clear that $u(t_1)=v(t_1)$ since $\\beta >0,$ which is a contradiction.\n\\end{proof}\n\nSuppose that we graph $u$ as a function of time $t,$ and that we desire a single-node reservoir $X(N,f,\\beta,\\alpha)$ whose state is not extremely sensitive to vertical translations. Intuitively, it seems reasonable to use a nonlinear function $f$ that is periodic. The following fact shows that a periodic $f$ yields reservoir states that are invariant under a certain vertical translation of the input.\n\n\\begin{fact}\\label{periodic_fact}\nLet $X(N,f,\\beta,\\alpha)$ be a reservoir with inputs $u$ and $v,$ where $f$ is $P$-periodic. For all $t$, if $v(t)=u(t)-\\nicefrac{P}{\\beta}$, then $x^{(u)}(t)=x^{(v)}(t).$\n\\end{fact}\n\\begin{proof}\nThe following basic argument proceeds by induction on $t,$ but this statement could be verified directly from the equations for $x_i(t)$ given in Part IV of the proof of Theorem \\ref{main_result}.\n\nThe case $t=0$ is trivial. 
For the base case, let $t=1$ and note that\n\\begin{equation*}\nx_1^{(u)}(1)=f(\\alpha x_N^{(u)}(0) + \\beta u(1)) = f(\\alpha x_N^{(v)}(0) + \\beta v(1)+P)=x_1^{(v)}(1).\n\\end{equation*}\nClearly $x_i^{(u)}(1)=0=x_i^{(v)}(1)$ for all $1< i \\leq N,$ so that $x^{(u)}(1)=x^{(v)}(1).$ Now assume $x^{(u)}(k)=x^{(v)}(k).$ Then\n\\begin{equation}\\label{24}\nx_1^{(u)}(k+1)=f(\\alpha x_N^{(u)}(k) + \\beta u(k+1)) = f(\\alpha x_N^{(v)}(k) + \\beta v(k+1) + P) = x_1^{(v)}(k+1),\n\\end{equation}\nso it remains to show that $x_i^{(u)}(k+1) = x_i^{(v)}(k+1)$ for $1< i\\leq N.$ For such $i$, Definition \\ref{resdef} and the induction hypothesis give\n\\begin{equation*}\nx_i^{(u)}(k+1) = x_{i-1}^{(u)}(k) = x_{i-1}^{(v)}(k) = x_i^{(v)}(k+1),\n\\end{equation*}\nso $x^{(u)}(k+1)=x^{(v)}(k+1)$ and the induction is complete.\n\\end{proof}\n
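\nFact \\ref{periodic_fact} is easy to check numerically. The short sketch below (the parameter values and the choice $f=\\sin$, which is $2\\pi$-periodic, are arbitrary illustrative choices, and the state-update routine simply restates the relations of Definition \\ref{resdef}) confirms that shifting an input down by $\\nicefrac{P}{\\beta}$ leaves every reservoir state unchanged. Since the two inputs are genuinely different, this also illustrates the caveat from the introduction: with a non-injective (here, periodic) nonlinearity, distinct inputs can become indistinguishable to any readout trained on the reservoir states.\n\\begin{verbatim}\nimport numpy as np\n\ndef reservoir_states(u, N, f, beta, alpha):\n    # x_1(t) = f(alpha*x_N(t-1) + beta*u(t)),  x_{k+1}(t) = x_k(t-1),  x_k(0) = 0\n    M = len(u) - 1\n    X = np.zeros((N, M + 1))\n    for t in range(1, M + 1):\n        X[0, t] = f(alpha * X[N - 1, t - 1] + beta * u[t])\n        X[1:, t] = X[:N - 1, t - 1]\n    return X[:, 1:]\n\nN, beta, alpha = 10, 0.25, 0.5\nP = 2 * np.pi                                   # period of f = sin\nu = np.concatenate([[0.0], np.random.default_rng(1).uniform(0, 1, 40)])\nv = u - P / beta                                # v(t) = u(t) - P/beta\nv[0] = 0.0                                      # keep the convention v(0) = 0\n\nXu = reservoir_states(u, N, np.sin, beta, alpha)\nXv = reservoir_states(v, N, np.sin, beta, alpha)\nprint(np.max(np.abs(Xu - Xv)))                  # zero up to floating-point error\n\\end{verbatim}\nAny output $y(t)=\\sum_k w_k x_k(t)$ computed from these states is therefore identical for $u$ and for $v$, regardless of how the weights were trained.\n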