diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbsyh" "b/data_all_eng_slimpj/shuffled/split2/finalzzbsyh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbsyh" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sxn:intro}\n\nTwo ubiquitous aspects of large-scale data analysis are that the data often\nhave heavy-tailed properties and that diffusion-based or spectral-based\nmethods are often used to identify and extract structure of interest.\nIn the absence of strong assumptions on the data, \npopular distribution-independent methods such as those based on the VC\ndimension fail to provide nontrivial results for even simple learning\nproblems such as binary classification in these two settings.\nAt root, the reason is that in both of these situations the data are\nformally very high dimensional and that (without additional regularity\nassumptions on the data) there may be a small number of ``very outlying''\ndata points.\nIn this paper, we develop distribution-dependent learning methods that\ncan be used to provide dimension-independent sample complexity bounds for\nthe maximum margin version of the binary classification problem in these \ntwo popular settings.\nIn both cases, \nwe are able to obtain nearly optimal linear classification hyperplanes since\nthe distribution-dependent tools we employ are able to control the aggregate\neffect of the ``outlying'' data points.\nIn particular, our results will hold even though the data may be \ninfinite-dimensional and unbounded.\n\n\n\\subsection{Overview of the problems}\n\\label{sxn:intro_overview}\n\nSpectral-based kernels have received a great deal of attention recently in\nmachine learning for data classification, regression, and exploratory data\nanalysis via dimensionality reduction~\\cite{SWHSL06}.\nConsider, for example, Laplacian Eigenmaps~\\cite{BN03} and the related\nDiffusion Maps~\\cite{Lafon06}.\nGiven a graph $G=(V,E)$ (where this graph could be constructed from the data\nrepresented as feature vectors, as is common in machine learning, or it\ncould simply be a natural representation of a large social or information\nnetwork, as is more common in other areas of data analysis), let\n$f_0, f_1, \\dots, f_n$ be the eigenfunctions of the normalized Laplacian of\n$G$ and let $\\l_0, \\l_1, \\dots, \\l_n$ be the corresponding eigenvalues.\nThen, the Diffusion Map is the following feature map\n$$\n\\Phi: v \\mapsto(\\l_0^k f_0(v), \\dots, \\l_n^k f_n(v)) ,\n$$\nand Laplacian Eigenmaps is the special case when $k=0$.\nIn this case, the support of the data distribution is unbounded as the size\nof the graph increases; the VC dimension of hyperplane classifiers is\n$\\Theta(n)$; and thus existing results do not give dimension-independent\nsample complexity bounds for classification by Empirical Risk Minimization\n(ERM).\nMoreover, it is possible (and indeed quite common in certain applications) \nthat on some vertices $v$ the eigenfunctions\nfluctuate wildly---even on special classes of graphs, such as random graphs\n$G(n, p)$, a non-trivial uniform upper bound stronger than $O(n)$ on\n$\\|\\Phi(v)\\|$ over all vertices $v$ does not appear to be known\n\\footnote{It should be noted that, while potentially problematic for what we\nare discussing in this paper, such eigenvector localization often has a\nnatural interpretation in terms of the processes generating the data and can\nbe useful in many data analysis applications.\nFor example, it might correspond to a high degree node or an 
articulation\npoint between two clusters in a large informatics\ngraph~\\cite{LLDM08_communities_CONF,LLDM08_communities_TR,LLM10_communities_CONF};\nor it might correspond to DNA single-nucleotide polymorphisms that are\nparticularly discriminative in simple models that are chosen for\ncomputational rather than statistical reasons~\\cite{CUR_PNAS,Paschou07b}.}\nEven for maximum margin or so-called ``gap-tolerant'' classifiers, defined\nprecisely in Section~\\ref{sxn:background} and which are easier to learn\nthan ordinary linear hyperplane classifiers, the existing bounds of Vapnik\nare not independent of the number $n$ of nodes\n\\footnote{VC theory provides an upper bound of\n$O\\left(\\left(n\/\\Delta\\right)^2\\right)$ on the VC dimension of gap-tolerant\nclassifiers applied to the Diffusion Map feature space corresponding to a\ngraph with $n$ nodes.\n(Recall that by Lemma~\\ref{lem:vap1} below, the VC dimension of the space of\ngap-tolerant classifiers corresponding to a margin $\\Delta$, applied to a\nball of radius $R$ is $\\sim \\left(R\/\\Delta\\right)^2$.)\nOf course, although this bound is quadratic in the number of nodes, VC\ntheory for ordinary linear classifiers gives an $O(n)$ bound.}\n\nA similar problem arises in the seemingly very-different situation that the\ndata exhibit heavy-tailed or power-law behavior.\nHeavy-tailed distributions are probability distributions with tails that are\nnot exponentially bounded~\\cite{Resnick07,CSN09}.\nSuch distributions can arise via several mechanisms, and they are ubiquitous\nin applications~\\cite{CSN09}.\nFor example, graphs in which the degree sequence decays according to a power\nlaw have received a great deal of attention recently.\nRelatedly, such diverse phenomena as the distribution of packet\ntransmission rates over the internet, the frequency of word use in common\ntext, the populations of cities, the intensities of earthquakes, and the\nsizes of power outages all have heavy-tailed behavior.\nAlthough it is common to normalize or preprocess the data to remove the\nextreme variability in order to apply common data analysis and machine\nlearning algorithms, such extreme variability is a fundamental property of\nthe data in many of these application domains.\n\nThere are a number of ways to formalize the notion of heavy-tailed behavior\nfor the classification problems we will consider, and in this paper we will\nconsider the case where the \\emph{magnitude} of the entries decays according\nto a power law.\n(Note, though, that in Appendix~\\ref{sxn:learnHT}, we will, for completeness,\nconsider the case in which the probability that an entry is nonzero decays in\na heavy-tailed manner.) 
\nThat is, if\n$$\n\\Phi: v \\mapsto(\\phi_0(v), \\dots, \\phi_n(v))\n$$\nrepresents the feature map, then $|\\phi_i(v)| \\leq Ci^{-\\alpha}$ for some\nabsolute constant $C>0$, with $\\alpha > 1$.\nAs in the case with spectral kernels, in this heavy-tailed situation, the\nsupport of the data distribution is unbounded as the size of the graph\nincreases, and the VC dimension of hyperplane classifiers is $\\Theta(n)$.\nMoreover, although there are a small number of ``most important'' features,\nthey do not ``capture'' most of the ``information'' of the data.\nThus, when calculating the sample complexity for a classification task for\ndata in which the feature vector has heavy-tailed properties, bounds that do\nnot take into account the distribution are likely to be very weak.\n\nIn this paper, we develop distribution-dependent bounds for problems in\nthese two settings.\nClearly, these results are of interest since VC-based arguments fail to\nprovide nontrivial bounds in these two settings, in spite of the ubiquity of\ndata with heavy-tailed properties and the widespread use of spectral-based\nkernels in many applications.\nMore generally, however, these results are of interest since the\ndistribution-dependent bounds underlying them provide insight into how\nbetter to deal with heterogeneous data with more realistic noise\nproperties.\n\n\n\\subsection{Summary of our main results}\n\\label{sxn:intro_summary1}\n\nOur first main result provides bounds on classifying data whose\nmagnitude decays in a heavy-tailed manner.\nIn particular, in the following theorem we show that if the magnitude of the\n$i^{th}$ coordinate of a random data point is less than $C i^{-\\a}$ for some\n$C>0, \\a > 1$, then the number of samples needed before a maximum-margin classifier is approximately optimal with high probability is independent of the number of features.\n\\begin{theorem} [Heavy-Tailed Data]\n\\label{thm:learn_heavytail}\nLet the data be heavy-tailed in that the feature vector\n$$\n\\Phi: v \\mapsto(\\phi_1(v), \\dots, \\phi_n(v)) ,\n$$\nsatisfies $|\\phi_i(v)| \\leq Ci^{-\\alpha}$ for some absolute constant $C>0$, with\n$\\alpha > 1$. 
Let $\\zeta(\\cdot)$ denote the Riemann zeta function.\nThen, for any $\\ell$, if a maximum margin classifier has a margin $> \\De $, with probability more than $1 - \\de$, its risk is less than\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{\\sqrt{\\zeta(2 \\a) \\ell}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors.\n\\end{theorem}\nThis result follows from a bound on the annealed entropy of gap-tolerant\nclassifiers in a Hilbert space that is of independent interest.\nIn addition, it makes important use of the fact that although individual\nelements of the heavy-tailed feature vector may be large, the vector has\nbounded moments.\n\nOur second main result provides bounds on classifying data with spectral\nkernels.\nIn particular, in the following theorem\nwe give dimension-independent upper bounds on the sample complexity of\nlearning a nearly-optimal maximum margin classifier\nin the feature space of the Diffusion Maps.\n\\begin{theorem} [Spectral Kernels]\n\\label{thm:learn_spectral}\nLet the following Diffusion Map be given:\n$$\n\\Phi: v \\mapsto(\\l_1^k f_1(v), \\dots, \\l_n^k f_n(v)) ,\n$$ where $f_i$ are normalized eigenfunctions (whose $\\ell_2(\\mu)$ norm is $1$, $\\mu$ being the uniform distribution), $\\l_i$ are the eigenvalues of the corresponding Markov chain, and $k \\geq 0$.\nThen, for any $\\ell$, if a maximum margin classifier has a margin $> \\De $, with probability more than $1 - \\de$, its risk is less than\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{\\sqrt{\\ell}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors.\n\\end{theorem}\nAs with the proof of our main heavy-tailed learning result, the proof of our\nmain spectral learning result makes essential use of an upper bound on the\nannealed entropy of gap-tolerant classifiers.\nIn applying it, we make important use of the fact that although individual\nelements of the feature vector may fluctuate wildly, the norm of the\nDiffusion Map feature vector is bounded.\n\nAs a side remark, note that we are not viewing the feature map in\nTheorem~\\ref{thm:learn_spectral} as necessarily being either a random\nvariable or requiring knowledge of some marginal distribution---as might\nbe the case if one is generating points in some space according to some\ndistribution; then constructing a graph based on nearest neighbors; and\nthen doing diffusions to construct a feature map.\nInstead, we are thinking of a data graph in which the data are adversarially\npresented, \\emph{e.g.}, a given social network is presented, and diffusions\nand\/or a feature map is then constructed.\n\nThese two theorems provide a dimension-independent (\\emph{i.e.},\nindependent of the size $n$ of the graph and the dimension of the feature\nspace) upper bound on the number of samples needed to learn a maximum\nmargin classifier, under\nthe assumption that a heavy-tailed feature map or the Diffusion Map kernel\nof some scale is used as the feature map.\nAs mentioned, both proofs (described below in Sections~\\ref{sxn:gapHS_HTapp}\nand~\\ref{sxn:gapHS_SKapp}) proceed by providing a\ndimension-independent upper bound on the annealed entropy of gap-tolerant\nclassifiers in the relevant feature space, and then appealing to\nTheorem~\\ref{thm:Vapnik} (in Section~\\ref{sxn:background}) relating the\nannealed entropy to the generalization error.\nFor this bound on the annealed entropy of these gap-tolerant classifiers,\nwe 
crucially use the fact that\n$\\E_v \\|\\Phi(v)\\|^2$ is bounded, even if $\\sup_v \\|\\Phi(v)\\|$ is\nunbounded as~$n \\ra \\infty$.\nThat is, although bounds on the individual entries of the feature map do not\nappear to be known, we crucially use that there exist nontrivial bounds on\nthe magnitude of the feature vectors.\nSince this bound is of more general interest, we describe it separately.\n\n\n\\subsection{Summary of our main technical contribution}\n\\label{sxn:intro_summary2}\n\nThe distribution-dependent ideas that underlie our two main results\n(in Theorems~\\ref{thm:learn_heavytail} and~\\ref{thm:learn_spectral}) can\nalso be used to bound the sample complexity of a classification task more\ngenerally under the assumption that the expected value of a norm of the data\nis bounded, \\emph{i.e.}, when the magnitude of the feature vector of the\ndata in some norm has a finite moment.\nIn more detail:\n\\begin{itemize}\n\\item\nLet $\\PP$ be a probability measure on a Hilbert space $\\HH$,\nand let $\\Delta>0$.\nIn Theorem~\\ref{thm:ddb_HS} (in Section~\\ref{sxn:gapHS_annent}), we prove\nthat if $\\EE_{\\mathcal{P}} \\|x\\|^{2} = r^{2} < \\infty$,\nthen the annealed entropy of gap-tolerant classifiers (defined in Section~\\ref{sxn:background}) in $\\HH$ can be\nupper bounded in terms of a function of $r$, $\\Delta$, and (the number of\nsamples) $\\ell$, independent of the (possibly infinite) dimension of $\\HH$.\n\\end{itemize}\nIt should be emphasized that the assumption that some moment of the norm of\nthe feature vector is finite is a \\emph{much} weaker\ncondition than the more common assumption that the largest element is\nbounded, and thus this result is likely of more general interest in dealing\nwith heterogeneous and noisy data.\nFor example, similar ideas have been applied recently to the problem of\nbounding the sample complexity of learning smooth cuts on a low-dimensional\nmanifold~\\cite{NN09}.\n\nTo establish this result, we use a result\n(see Lemma~\\ref{lem:vap1} in Section~\\ref{sxn:gapHS_vcdim})\nstating that the VC\ndimension of gap-tolerant classifiers in a Hilbert space when the margin is\n$\\Delta$ over a bounded domain such as a ball of radius $R$ is bounded above\nby $\\lfloor R^2\/\\Delta^2 \\rfloor + 1$.\nSuch bounds on the VC dimension of gap-tolerant classifiers have been stated\npreviously by Vapnik~\\cite{Vapnik98}.\nHowever, in the course of his proof bounding the VC dimension of a\ngap-tolerant classifier whose margin is $\\Delta$ over a ball of radius $R$\n(see~\\cite{Vapnik98}, page~353), Vapnik states, without further\njustification, that due to symmetry the set of points in a ball that is\nextremal in the sense of being the hardest to shatter with gap-tolerant\nclassifiers is the regular simplex.\nAttention has been drawn to this fact by Burges (see~\\cite{Burges98},\nfootnote~20), who mentions that a rigorous proof of this fact seems to be\nabsent.\nHere, we provide a new proof of the upper bound on the VC dimension of such\nclassifiers without making this assumption.\n(See Lemma~\\ref{lem:vap1} in Section~\\ref{sxn:gapHS_vcdim} and\nits proof.)\nHush and Scovel~\\cite{HS01} provide an alternate proof of Vapnik's claim;\nit is somewhat different from ours, and they do not extend their proof to\nBanach spaces.\n\nThe idea underlying our new proof of this result generalizes to the case\nwhen the data need not have compact support and where the margin may be\nmeasured with respect to more general norms.\nIn particular, we show 
that the VC dimension of gap-tolerant classifiers with\nmargin $\\Delta$ in a ball of radius $R$ in a Banach space of Rademacher type\n$p \\in (1, 2]$ and type constant $T$ is bounded above\nby $\\sim \\left(3TR\/\\Delta\\right)^{\\frac{p}{p-1}}$, and that there exists a Banach space of type $p$ (in fact $\\ell_p$) for which the VC dimension is bounded below by $\\left(R\/\\Delta\\right)^{\\frac{p}{p-1}}$.\n(See Lemmas~\\ref{lem:banach_ub} and~\\ref{lem:banach_lb} in\nSection~\\ref{sxn:gapBS_vcdim}.)\nUsing this result, we can also prove bounds for the annealed entropy of\ngap-tolerant classifiers in a Banach space.\n(See Theorem~\\ref{thm:ddb_BS} in Section~\\ref{sxn:gapBS_annent}.)\nIn addition to being of interest from a theoretical perspective, this result\nis of potential practical interest in cases where modeling the relationship\nbetween data elements as a dot product in a Hilbert space is too\nrestrictive, \\emph{e.g.}, when the data are extremely\nsparse and heavy-tailed.\n\n\n\\subsection{Maximum margin classification and ERM with gap-tolerant classifiers}\n\\label{sxn:intro_maxmargin}\n\n\nGap-tolerant classifiers---see Section~\\ref{sxn:background} for more\ndetails---are useful, at least theoretically, as a means of implementing\nstructural risk minimization (see, \\emph{e.g.}, Appendix A.2\nof~\\cite{Burges98}).\nWith gap-tolerant classifiers, the margin $\\De$ is fixed beforehand, and\ndoes not depend on the data.\nSee, \\emph{e.g.},~\\cite{FS99b,HBS05,HS01,SBWA98}.\nWith maximum margin classifiers, on the other hand, the margin is a function\nof the data.\nIn spite of this difference, the issues that arise in the analysis of these\ntwo classifiers are similar.\nFor example, through the fat-shattering dimension, bounds can be obtained\nfor the maximum margin classifier, as shown by Shawe-Taylor\n\\emph{et al.}~\\cite{SBWA98}.\nHere, we briefly sketch how this is achieved.\n\\begin{definition} Let $\\FF$ be a set of real-valued functions. We say that a set of points $x_1, \\dots, x_s$ is $\\gamma$-shattered by $\\FF$ if there are real numbers $t_1, \\dots, t_s$ such that for all binary vectors $\\mathbf{b} = (b_1, \\dots, b_s)$ and each $ i \\in [s]=\\{1,\\ldots,s\\}$, there is a function $f_{\\mathbf{b}}$ satisfying\n\\beq f_{\\mathbf{b}}(x_i) = \\left\\{ \\begin{array}{ll}\n > t_i + \\gamma, & \\hbox{if $ b_i = 1$;} \\\\\n < t_i - \\gamma, & \\hbox{otherwise.}\n \\end{array} \\right. 
\\eeq\n For each $\\gamma > 0$, the {\\it fat shattering dimension} $\\fat_\\FF(\\gamma)$ of the set $\\FF$ is defined to be the size of the largest $\\gamma$-shattered set if this is finite; otherwise it is declared to be infinity.\n \\end{definition}\nNote that, in this definition, $t_i$ can be different for different $i$, which is not the case\nin gap-tolerant classifiers.\nHowever, one can incorporate this shift into the feature space by a simple\nconstruction.\nWe start with the following definition of a Banach space of type $p$ with\ntype constant $T$.\n\n\\begin{definition} [Banach space, type, and type constant]\n\\label{def:rad}\nA {\\it Banach space} is a complete normed vector space.\nA Banach space $\\mathcal{B}$ is said to have (Rademacher) type $p$ if there\nexists $T < \\infty$ such that for all $n$ and $x_1, \\dots, x_n \\in\n\\mathcal{B}$\n\\beqs\n\\EE_\\epsilon[\\|\\sum_{i=1}^n \\epsilon_i x_i\\|_{\\mathcal{B}}^p]\n \\leq T^p \\sum_{i=1}^n \\|x_i\\|_{\\mathcal{B}}^p .\n\\eeqs\nThe smallest $T$ for which the above holds, with $p$ equal to the type, is\ncalled the type constant of~$\\mathcal{B}$.\n\\end{definition}\nGiven a Banach space $\\BB$ of type $p$ and type constant $T$, let $\\BB'$\nconsist of all tuples $(v, c)$ for $v \\in \\BB$ and $c \\in \\RR$, with the\nnorm $$\\|(v, c)\\|_{\\BB'} := \\left(\\|v\\|^p + |c|^p\\right)^{1\/p}.$$\nOne can easily check (see Sections~\\ref{sxn:gapBS_prelim}\nand~\\ref{sxn:gapBS_vcdim}) that $\\BB'$ is a Banach space of type $p$ and type constant\n$\\max(T, 1)$.\n\nIn our distribution-specific setting, we cannot control the fat-shattering\ndimension, but we can control the logarithm of the expected value of\n$2^{\\kappa(\\fat_\\FF(\\gamma))}$ for any constant $\\kappa$ by applying\nTheorem~\\ref{thm:ddb_BS} to $\\BB'$.\nAs seen from Lemma 3.7 and Corollary 3.8 of the journal version\nof~\\cite{SBWA98}, this is all that is required for obtaining\ngeneralization error bounds for maximum margin classification.\nIn the present context, the logarithm of the expected value of the\nexponential of the fat shattering dimension of linear $1$-Lipschitz\nfunctions on a random data set of size $\\ell$ taken i.i.d. from $\\PP$\non $\\BB$ is bounded by the annealed entropy of gap-tolerant classifiers\non $\\BB'$ with respect to the push-forward $\\PP'$ of the measure $\\PP$\nunder the inclusion $\\BB \\hookrightarrow \\BB'$.\n\nThis allows us to state the following theorem, which is an analogue of\nTheorem~4.17 of the journal version of~\\cite{SBWA98}, adapted using\nTheorem~\\ref{thm:ddb_BS} of this paper.\n\n\\begin{theorem}\n\\label{thm:margin_BS}\nLet $\\Delta > 0$.\nSuppose inputs are drawn independently\naccording to a probability measure $\\PP$ on a Banach space $\\BB$ of type $p$ and type constant $T$, and that $\\EE_{\\mathcal{P}} \\|x\\|^p = r^p < \\infty$. 
If we succeed in\ncorrectly classifying $\\ell$ such inputs by a maximum margin hyperplane of margin $\\De$, then with confidence $1 - \\de$\nthe generalization error will be bounded from above by\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{ Tr \\ell^{\\frac{1}{p}}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors involving $\\ell, T, r$ and $\\De$.\n\\end{theorem}\nSpecializing this theorem to a Hilbert space, we have the following theorem\nas a corollary.\n\\begin{theorem}\n\\label{thm:margin_HS}\nLet $\\Delta > 0$.\nSuppose inputs are drawn independently\naccording to a probability measure $\\PP$ on a Hilbert space $\\HH$, and that $\\EE_{\\mathcal{P}} \\|x\\|^2 = r^2 < \\infty$. If we succeed in\ncorrectly classifying $\\ell$ such inputs by a maximum margin hyperplane with margin $\\De$, then with confidence $1 - \\de$\nthe generalization error will be bounded from above by\n$$ \\eps := \\frac{\\tilde{O}\\left( \\frac{ r \\ell^{\\frac{1}{2}}}{\\Delta}\\right)+ \\log \\frac{1}{\\de}}{\\ell},$$\n where $\\tilde{O}$ hides multiplicative polylogarithmic factors involving $\\ell, r$ and $\\De$.\n\\end{theorem}\nNote that Theorem~\\ref{thm:margin_HS} is an analogue of Theorem~4.17 of the\njournal version of~\\cite{SBWA98}, but adapted using Theorem~\\ref{thm:ddb_HS}\nof this paper.\nIn particular, note that this theorem does not assume that the distribution\nis contained in a ball of some radius $R$, but instead it assumes only that\nsome moment of the distribution is bounded.\n\n\n\\subsection{Outline of the paper}\n\\label{sxn:intro_outline}\n\nIn the next section, Section~\\ref{sxn:background}, we review some technical\npreliminaries that we will use in our subsequent analysis.\nThen, in Section~\\ref{sxn:gapHS}, we state and prove our main result for\ngap-tolerant learning in a Hilbert space, and we show how this result can be\nused to prove our two main theorems in maximum margin learning.\nThen, in Section~\\ref{sxn:gapBS}, we state and prove an extension of our\ngap-tolerant learning result to the case when the gap is measured with\nrespect to more general Banach space norms; and\nthen, in Sections~\\ref{sxn:discussion} and~\\ref{sxn:conclusion} we\nprovide a brief discussion and conclusion.\nFinally, for completeness, in Appendix~\\ref{sxn:learnHT}, we will provide\na bound for exact (as opposed to maximum margin) learning in the case in\nwhich the probability that an entry is nonzero (as opposed to the value of\nthat entry) decays in a heavy-tailed manner.\n\n\n\n\\section{Background and preliminaries}\n\\label{sxn:background}\n\nIn this paper, we consider the supervised learning problem of binary\nclassification, \\emph{i.e.}, we consider an input space $\\mathcal{X}$\n(\\emph{e.g.}, a Euclidean space or a Hilbert space) and an output space\n$\\mathcal{Y}$, where $\\mathcal{Y}=\\{-1,+1\\}$, and where the data consist of\npairs $(X,Y) \\in \\mathcal{X} \\times \\mathcal{Y}$ that are random variables\ndistributed according to an unknown distribution.\nWe shall assume that for any $X$, there is at most one pair $(X, Y)$ that is\nobserved.\nWe observe $\\ell$ i.i.d. 
pairs $(X_i,Y_i), i=1,\\dots,\\ell$ sampled according\nto this unknown distribution, and the goal is to construct a classification\nfunction $\\a:\\mathcal{X} \\rightarrow \\mathcal{Y}$ which predicts\n$\\mathcal{Y}$ from $\\mathcal{X}$ with low probability of error.\n\nWhereas an ordinary linear hyperplane classifier consists of an oriented\nhyperplane, and points are labeled $\\pm1$, depending on which side of the\nhyperplane they lie on, a \\emph{gap-tolerant classifier} consists of an\noriented hyperplane and a margin of thickness $\\Delta$ in some norm.\nAny point outside the margin is labeled $\\pm 1$, depending on which side of\nthe hyperplane it falls on, and all points within the margin are declared\n``correct,'' without receiving a $\\pm 1$ label.\nThis latter setting has been considered in~\\cite{Vapnik98,Burges98} (as a\nway of implementing structural risk minimization---apply empirical risk\nminimization to a succession of problems, and choose the gap $\\Delta$\nthat gives the minimum risk bound).\n\nThe\n\\emph{risk} $R(\\a)$ of a linear hyperplane classifier $\\alpha$ is the\nprobability that $\\a$ misclassifies a random data point $(x, y)$ drawn from\n$\\PP$; more formally, $R(\\a) := \\p_{\\PP}[\\a(x) \\neq y]$.\nGiven a set of $\\ell$ labeled data points\n$(x_1, y_1), \\dots, (x_\\ell, y_\\ell)$, the \\emph{empirical risk}\n$R_{emp}(\\a, \\ell)$ of a linear hyperplane classifier $\\alpha$ is the\nfrequency of misclassification on the empirical data; more formally,\n$R_{emp}(\\a, \\ell) := \\frac{1}{\\ell}\\sum_{i=1}^\\ell \\II[\\a(x_i) \\neq y_i]$, where\n$\\II[\\cdot]$ denotes the indicator of the respective event.\nThe risk and empirical risk for gap-tolerant classifiers are defined in the\nsame manner.\nNote, in particular, that data points labeled as ``correct'' do not\ncontribute to the risk for a gap-tolerant classifier, \\emph{i.e.}, data\npoints that are on the ``wrong'' side of the hyperplane but that are within\nthe $\\Delta$ margin are not considered as incorrect and do not contribute\nto the risk.\n\nIn classification, the ultimate goal is to find a classifier that minimizes\nthe true risk, \\emph{i.e.}, $\\arg\\min_{\\a \\in \\Lambda} R(\\a)$, but since\nthe true risk $R(\\a)$ of a classifier $\\a$ is unknown, an empirical\nsurrogate is often used.\nIn particular,\n\\emph{Empirical Risk Minimization (ERM)} is the procedure of choosing a\nclassifier $\\a$ from a set of classifiers $\\Lambda$ by minimizing the\nempirical risk $\\arg\\min_{\\a \\in \\Lambda} R_{emp}(\\a, \\ell)$.\nThe consistency and rate of convergence of ERM---see~\\cite{Vapnik98} for\nprecise definitions---can be related to uniform bounds on the difference\nbetween the empirical risk and the true risk over all $\\a \\in \\Lambda$.\nThere is a large body of literature on sufficient conditions for this kind\nof uniform convergence.\nFor instance, the VC dimension is commonly used toward this end.\nNote that, when considering gap-tolerant classifiers, there is an additional\ncaveat, as one obtains uniform bounds only over those gap-tolerant\nclassifiers that do not contain any data points in the margin---Appendix A.2 of~\\cite{Burges98} addresses this issue.\n\nIn this paper, our main emphasis is on the annealed entropy:\n\n\\begin{definition} [Annealed Entropy]\n\\label{def:an_ent}\nLet $\\PP$ be a probability measure supported on a vector space $\\HH$.\nGiven a set $\\Lambda$ of decision rules and a set of points\n$Z = \\{z_1, \\dots, z_\\ell\\} \\subset \\HH$, let $\\Nl(z_1, \\dots, 
z_\\ell)$ be\nthe number of ways of labeling $\\{z_1, \\dots, z_\\ell\\}$ into positive and\nnegative samples such that there exists a gap-tolerant classifier that\npredicts {\\it incorrectly} the label of each $z_i$.\nGiven the above notation,\n\\beqs\n\\Hnl(k) := \\ln \\E_{\\PP^{\\times k}} \\Nl(z_1, \\dots, z_k)\n\\eeqs\nis the annealed entropy of the classifier $\\Lambda$ with respect to $\\PP$.\n\\end{definition}\nNote that although we have defined the annealed entropy for general\ndecision rules, below we will consider the case that $\\Lambda$ consists of\nlinear decision rules.\n\nAs the following theorem states, the annealed entropy of a classifier can be\nused to get an upper bound on the generalization error.\nThis follows from Theorem $8$ in \\cite{Burges98} and a remark on page 198\nof~\\cite{DevGyoLug96}. Note that the class $\\Lambda^*$ is itself random, and consequently, $\\sup_{\\a \\in \\Lambda^*}\n {R(\\a) - R_{emp}(\\a, \\ell)}$ is the supremum over a random class.\n\\begin{theorem}\n\\label{thm:Vapnik}\nLet $\\Lambda^*$ be the family of all gap-tolerant classifiers such that no data point lies inside the margin. Then,\n$$\n\\p\\left[\\sup_{\\a \\in \\Lambda^*}\n {R(\\a) - R_{emp}(\\a, \\ell)}\n > \\epsilon\\right]\n < 8 \\exp{\\left(\\left({H_{ann}^\\Lambda(\\ell)}\\right) - \\frac{\\epsilon^2\\ell}{32}\\right)}\n$$\nholds true, for any number of samples $\\ell$ and for any error parameter\n$\\epsilon$.\n\\end{theorem}\nThe key property of the function class that leads to uniform bounds is the\nsublinearity of the logarithm of the expected value of the ``growth\nfunction,'' which measures the number of distinct ways in which a data set\nof a particular size can be split by the function class.\nA finite VC bound guarantees this in a distribution-free setting.\nThe annealed entropy is a distribution-specific measure,\n\\emph{i.e.}, the same family of classifiers can have different annealed\nentropies when measured with respect to different distributions.\nFor a more detailed exposition of uniform bounds in the context of gap-tolerant classifiers, we refer the reader\nto~(\\cite{Burges98}, Appendix A.2).\n\nNote also that normed vector spaces (such as Hilbert spaces and Banach\nspaces) are relevant to learning theory for the following reason.\nData are often accompanied by an underlying metric which carries\ninformation about how likely it is that two data points have the same label.\nThis makes concrete the intuition that points with the same class label are\nclustered together.\nMany algorithms cannot be implemented over an arbitrary metric space, but\nrequire a linear structure.\nIf the original metric space does not have such a structure, as is the case\nwhen classifying, for example, biological data or decision trees, it is\ncustomary to construct a feature space representation, which embeds data\ninto a vector space.\nWe will be interested in the commonly-used Hilbert spaces, in which distances\nin the feature space are measured with respect to the $\\ell_2$ distance (as\nwell as more general Banach spaces, in Section~\\ref{sxn:gapBS}).\n\nFinally, note that our results where the margin is measured in $\\ell_2$ can\nbe transferred to a setting with kernels.\nGiven a kernel $k(\\cdot, \\cdot)$, it is well known that linear classification\nusing a kernel $k(\\cdot, \\cdot)$ is equivalent to mapping $x$ onto the\nfunctional $k(x, \\cdot)$ and then finding a separating halfspace in the\nReproducing Kernel Hilbert Space (RKHS) which is the Hilbert Space generated\nby the functionals of the form $k(x, \\cdot)$.\nSince the span of any finite set of points in a Hilbert Space can be\nisometrically embedded in $\\ell_2$, our results hold in the setting of\nkernel-based learning as well, when one first uses the feature map\n$x \\mapsto k(x, \\cdot)$ and works in the RKHS.
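\nWe also note that Theorem~\\ref{thm:Vapnik} is typically applied in an inverted form: setting the right-hand side equal to $\\de$ and solving for $\\epsilon$ shows that, with probability at least $1 - \\de$, every $\\a \\in \\Lambda^*$ satisfies\n$$\nR(\\a) \\,\\, \\leq \\,\\, R_{emp}(\\a, \\ell) + \\sqrt{\\frac{32\\left(H_{ann}^\\Lambda(\\ell) + \\ln \\frac{8}{\\de}\\right)}{\\ell}} ,\n$$\nand it is in this form that the annealed entropy bounds below translate into generalization bounds.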
\n\n\n\n\\section{Gap-tolerant classifiers in Hilbert spaces}\n\\label{sxn:gapHS}\n\nIn this section, we state and prove Theorem~\\ref{thm:ddb_HS}, our main result\nregarding an upper bound for the annealed entropy of gap-tolerant classifiers\nin $\\ell_2$.\nThis result is of independent interest, and it was used in a crucial way in\nthe proof of Theorems~\\ref{thm:learn_heavytail} and~\\ref{thm:learn_spectral}.\nWe start in Section~\\ref{sxn:gapHS_annent} with the statement and proof of\nTheorem~\\ref{thm:ddb_HS}, and then in Section~\\ref{sxn:gapHS_vcdim} we bound\nthe VC dimension of gap-tolerant classifiers over a ball of radius $R$.\nThen, in Section~\\ref{sxn:gapHS_HTapp}, we apply these results to prove our\nmain theorem on learning with heavy-tailed data, and finally in\nSection~\\ref{sxn:gapHS_SKapp}, we apply these results to prove our main\ntheorem on learning with spectral kernels.\n\n\n\\subsection{Bound on the annealed entropy of gap-tolerant classifiers in Hilbert spaces}\n\\label{sxn:gapHS_annent}\n\nThe following theorem is our main result regarding an upper bound for\nthe annealed entropy of gap-tolerant classifiers.\nThe result holds for gap-tolerant classification in a Hilbert space,\n\\emph{i.e.}, when the distances in the feature space are measured with\nrespect to the $\\ell_2$ norm.\nAnalogous results hold when distances are measured more generally, as we\nwill describe in Section~\\ref{sxn:gapBS}.\n\n\\begin{theorem} [Annealed entropy; Upper bound; Hilbert Space]\n\\label{thm:ddb_HS}\nLet $\\PP$ be a probability measure on a Hilbert space $\\HH$,\nand let $\\Delta>0$.\nIf $\\EE_{\\mathcal{P}} \\|x\\|^{2} = r^{2} < \\infty$, then\nthe annealed entropy of gap-tolerant classifiers in $\\HH$,\nwhere the gap is $\\Delta$, is\n$$\n\\Hnl(\\ell)\n \\leq \\left( \\ell^{\\frac{1}{2}}\\left(\\frac{r}{\\Delta}\\right) + 1\\right)\n (1+\\ln(\\ell+1)) .\n$$\n\\end{theorem}
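\nNote that, for fixed $r\/\\Delta$, this bound grows like $\\ell^{\\frac{1}{2}}\\log\\ell$ and is therefore sublinear in $\\ell$; for example, if $r\/\\Delta = 10$ and $\\ell = 10^6$, the bound is roughly $1.5 \\times 10^5 \\ll \\ell$. In view of Theorem~\\ref{thm:Vapnik}, this sublinearity is precisely what is needed for the generalization gap to tend to zero as $\\ell \\ra \\infty$.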
\\begin{proof}\nLet $\\ell$ independent, identically distributed (i.i.d.) samples\n$z_1, \\dots, z_\\ell$ be chosen from $\\mathcal{P}$.\nWe partition them into two classes:\n$$X = \\{x_1, \\dots, x_{\\ell-k}\\} := \\{z_i\\,\\, |\\,\\, \\|z_i\\| > R\\},$$ and\n$$Y = \\{y_1, \\dots, y_k\\} := \\{z_i \\,\\,|\\,\\, \\|z_i\\| \\leq R\\}.$$\nOur objective is to bound from above the annealed entropy\n$\\Hnl(\\ell)=\\ln\\EE[N^\\Lambda(z_1, \\dots, z_\\ell)]$.\nBy Lemma~\\ref{lem:submul}, $N^{\\Lambda}$ is sub-multiplicative.\nTherefore,\n$$N^{\\Lambda}(z_1, \\dots, z_\\ell) \\leq N^{\\Lambda}(x_1, \\dots,\nx_{\\ell-k}) N^{\\Lambda}(y_1, \\dots, y_k).$$ Taking an expectation\nover $\\ell$ i.i.d. samples from $\\mathcal{P}$,\n$$\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)] \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})N^{\\Lambda}(y_1, \\dots,\ny_k)].$$\nNow applying Lemma~\\ref{lem:vap1}, we see that\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})(k+1)^{R^2\/\\Delta^2+1}] .\n$$\nMoving $(k+1)^{R^2\/\\Delta^2+1}$ outside this expression,\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})](k+1)^{R^2\/\\Delta^2+1} .\n$$\nNote that $N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})$ is always bounded above by\n$2^{\\ell-k}$ and that the random variables $\\I[\\|z_i\\| > R]$ are i.i.d.\nLet $\\rho = \\p[\\|z_i\\| > R]$, and note that\n$\\ell-k$ is the sum of $\\ell$ independent Bernoulli variables.\nMoreover, by Markov's inequality,\n$$\n\\p[\\|z_i\\|>R] \\,\\, \\leq \\,\\, \\frac{\\EE[\\|z_i\\|^2]}{R^2} ,\n$$\nand therefore $\\rho \\leq (\\frac{r}{R})^2$.\nIn addition,\n $$\\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})] \\leq \\EE[2^{\\ell-k}].$$\n Let $I[\\cdot]$ denote an indicator variable. $\\EE[2^{\\ell-k}]$ can be written as\n $$\\prod_{i=1}^\\ell \\EE[2^{I[\\|z_i\\|> R]}] = (1+\\rho)^\\ell \\leq\n e^{\\rho \\ell}.$$\nPutting everything together, we see that\n\\beq\n\\label{eqn:jensen1}\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq\n \\exp\\left(\\ell \\left(\\frac{r}{R}\\right)^2\n + \\ln(\\ell+1)\\left(\\frac{R^2}{\\Delta^2} + 1\\right)\\right) .\n\\eeq\nIf we substitute $R = (\\ell r^2 \\Delta^2)^{\\frac{1}{4}}$, which balances the two terms so that $\\ell(r\/R)^2 = R^2\/\\Delta^2 = \\ell^{\\frac{1}{2}} r\/\\Delta$, it follows that\n\\begin{eqnarray*}\n\\Hnl(\\ell)\n &=& \\ln \\EE\\left[N^{\\Lambda}(z_1, \\dots, z_\\ell)\\right] \\\\\n &\\leq& \\left(\\ell^{\\frac{1}{2}}\\left(\\frac{r}{\\Delta}\\right) + 1\\right) (1+\\ln(\\ell+1)) .\n\\end{eqnarray*}\n\\end{proof}\n\nFor ease of reference, we note the following easily established fact about\n$\\Nl$.\nThis lemma is used in the proof of Theorem~\\ref{thm:ddb_HS} above and\nTheorem~\\ref{thm:ddb_BS} below.\n\n\\begin{lemma}\n\\label{lem:submul}\nLet $\\{x_1, \\dots, x_\\ell\\} \\cup \\{ y_1, \\dots, y_k\\}$ be a partition of the\ndata $Z$ into two parts.\nThen, $\\Nl$ is submultiplicative in the following sense:\n$$\n\\Nl(x_1, \\dots, x_\\ell, y_1, \\dots, y_k)\n \\leq \\Nl(x_1, \\dots, x_\\ell) \\Nl(y_1, \\dots, y_k) .\n$$\n\\end{lemma}\n\\begin{proof}\nThis holds because any partition of\n$Z := \\{x_1, \\dots, x_\\ell, y_1, \\dots, y_k\\}$\ninto two parts by an element $\\II \\in \\Lambda$ induces such a partition for\nthe sets $\\{x_1, \\dots, x_\\ell\\}$ and $\\{y_1, \\dots, y_k\\}$, and for any\npair of partitions of $\\{x_1, \\dots, x_\\ell\\}$ and $\\{y_1, \\dots, y_k\\}$,\nthere is at most one partition of $Z$ that induces them.\n\\end{proof}\n\n\n\\subsection{Bound on the VC dimension of gap-tolerant classifiers in Hilbert spaces}\n\\label{sxn:gapHS_vcdim}\n\nAs an intermediate step in the proof of Theorem~\\ref{thm:ddb_HS}, we\nneeded a bound on the VC dimension of a gap-tolerant classifier within a\nball of fixed radius.\nLemma~\\ref{lem:vap1} below provides such a bound and is due to\nVapnik~\\cite{Vapnik98}.\nNote, though, that in the course of his proof (see~\\cite{Vapnik98},\npage 353), Vapnik states, without further justification, that due to\nsymmetry the set of points that is extremal in the sense of being the\nhardest to shatter with gap-tolerant classifiers is the regular simplex.\nAttention has also been drawn to this fact by Burges (\\cite{Burges98},\nfootnote~20), who mentions that a rigorous proof of this fact seems to be\nabsent.\nVapnik's claim has since been proved by Hush and\nScovel~\\cite{HS01}.\nHere, we provide a new proof of Lemma~\\ref{lem:vap1}.\nIt is simpler than previous proofs, and in Section~\\ref{sxn:gapBS} we will\nsee that it generalizes to cases when the margin is measured with norms other\nthan~$\\ell_2$.\n\n\\begin{lemma} [VC Dimension; Upper bound; Hilbert Space]\n\\label{lem:vap1}\nIn a Hilbert space,\nthe VC dimension of a gap-tolerant classifier\nwhose margin is $\\Delta$\nover a ball of radius $R$\ncan be bounded above by\n$\\lfloor \\frac{R^2}{\\Delta^2} \\rfloor + 
1$.\n\\end{lemma}\n\\begin{proof}\nSuppose the VC dimension is $n$. Then there exists a set of $n$\npoints $X = \\{x_1, \\dots, x_n\\}$ in $B(R)$ that can be completely\nshattered using gap-tolerant classifiers. We will consider two\ncases, first that $n$ is even, and then that $n$ is odd.\n\nFirst, assume that $n$ is even, i.e., that $n=2k$ for some positive\ninteger $k$. We apply the probabilistic method to obtain an upper\nbound on $n$. Note that for every set $S \\subseteq [n]$, the set\n$X_S := \\{x_i | i \\in S\\}$ can be separated from $X - X_S$ using a\ngap-tolerant classifier. Therefore the distance between the\ncentroids (respective centers of mass) of these two sets is greater\nthan or equal to $2\\Delta$. In particular, for each $S$ having $k=n\/2$\nelements,\n$$\n\\|\\frac{\\sum_{i \\in S}x_i}{k} - \\frac{\\sum_{i \\not \\in S}x_i}{k}\\|\n \\geq 2\\Delta .\n$$\nSuppose now that $S$ is chosen uniformly at random from the ${n\n\\choose k}$ sets of size $k$. Then,\n\\begin{eqnarray*}\n4 \\Delta^2\n &\\leq& \\EE\\left[\\|\\frac{\\sum_{i \\in S}x_i}{k} - \\frac{\\sum_{i \\not \\in S}x_i}{k}\\|^2\\right] \\\\\n &=& k^{-2}\\left\\{\\frac{2k+1}{2k}\\sum_{i=1}^n \\|x_i\\|^2 - \\frac{\\|\\sum_1^n x_i\\|^2}{2k} \\right\\} \\\\\n &\\leq& \\frac{4(n+1)}{n^2} R^2 .\n\\end{eqnarray*}\nTherefore,\n\\begin{eqnarray*}\n\\Delta^2\n &\\leq& \\frac{n+1}{n^2} R^2 \\\\\n &<& \\frac{R^2}{n-1}\n\\end{eqnarray*}\nand so\n$$\nn < \\frac{R^2}{\\Delta^2} + 1 .\n$$\n\nNext, assume that $n$ is odd. We perform a similar calculation for\n$n = 2k+1$. As before, we average over all sets $S$ of cardinality\n$k$ the squared distance between the centroid of $X_S$ and the\ncentroid (center of mass) of $X-X_S$. Proceeding as before,\n\\begin{eqnarray*}\n4\\Delta^2\n & \\leq & \\EE\\left[\\|\\frac{\\sum_{i \\in S}x_i}{k} - \\frac{\\sum_{i \\not \\in S}x_i}{k+1}\\|^2\\right] \\\\\n & = & \\frac{\\sum_{i=1}^n \\|x_i\\|^2 (1 + \\frac{1}{2n}) - \\frac{1}{2n}\\|\\sum_{1 \\leq i \\leq n} x_i\\|^2}{k(k+1)} \\\\\n & \\leq & \\frac{\\sum_{i=1}^n \\|x_i\\|^2 (1 + \\frac{1}{2n})}{k(k+1)} \\\\\n & \\leq & \\frac{4k+3}{2k(2k+1)(k+1)}\\{(2k+1)R^2\\} \\\\\n & < & \\frac{4R^2}{n-1} .\n\\end{eqnarray*}\nTherefore, $n < \\frac{R^2}{\\Delta^2} + 1$.\n\\end{proof}\n\n\n\\subsection{Learning with heavy-tailed data: proof of Theorem~\\ref{thm:learn_heavytail}}\n\\label{sxn:gapHS_HTapp}\n\n\n\\begin{proof}\nFor a random data sample with feature vector $\\x$,\n \\beq \\E \\|\\x\\|^2 & \\leq & \\sum_{i=1}^n\n(C i^{-\\a})^2 \\\\ & \\leq & C^2 \\zeta(2 \\a),\\eeq\nwhere $\\zeta$ is the Riemann zeta function.\nThe theorem then follows from Theorem~\\ref{thm:margin_HS}.\n\\end{proof}
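\nFor concreteness, note that the moment bound above is dimension-free: for example, for $\\a = 2$ one has $\\zeta(4) = \\pi^4\/90 \\approx 1.082$, so the effective radius in Theorem~\\ref{thm:margin_HS} is $r \\leq C\\sqrt{\\zeta(4)} \\approx 1.04\\, C$, independent of the number of features $n$.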
\n\n\\subsection{Learning with spectral kernels: proof of Theorem~\\ref{thm:learn_spectral}}\n\\label{sxn:gapHS_SKapp}\n\n\n\\begin{proof}\nA Diffusion Map for the graph $G=(V, E)$ is the feature map that\nassociates with a vertex $x$, the feature vector $\\x = (\\l_1^\\a\nf_1(x), \\dots, \\l_m^\\a f_m(x))$, when the eigenfunctions\ncorresponding to the top $m$ eigenvalues are chosen. Let $\\mu$ be\nthe uniform distribution on $V$ and $|V| = n$. We note that if the\n$f_j$ are normalized eigenfunctions, \\emph{i.e.}, $\\forall j, \\sum_{x \\in V}\nf_j(x)^2 = 1,$\n\\beq\n\\E \\|\\x\\|^2 & = & \\frac{\\sum_{i=1}^m {\\l_i^{2\\a}}}{n} \\leq \\frac{\\sum_{i=1}^n {\\l_i^{2\\a}}}{n} \\leq 1.\n\\eeq\nThe above inequality holds because\nthe eigenvalues have magnitudes that are less than or equal to $1$:\n$$1 = \\l_1 \\geq \\dots \\geq \\l_n \\geq -1.$$\nThe theorem then follows from Theorem~\\ref{thm:margin_HS}.\n\\end{proof}\n\n\n\n\\section{Gap-tolerant classifiers in Banach spaces}\n\\label{sxn:gapBS}\n\nIn this section, we state and prove Theorem~\\ref{thm:ddb_BS}, our main result\nregarding an upper bound for the annealed entropy of gap-tolerant classifiers\nin a Banach space.\nWe start in Section~\\ref{sxn:gapBS_prelim} with some technical preliminaries;\nthen in Section~\\ref{sxn:gapBS_vcdim} we bound the VC dimension of\ngap-tolerant classifiers in Banach spaces over a ball of radius $R$; and\nfinally in Section~\\ref{sxn:gapBS_annent} we prove Theorem~\\ref{thm:ddb_BS}.\nWe include this result for completeness since it is of theoretical interest;\nsince it follows using similar methods to the analogous results for Hilbert\nspaces that we presented in Section~\\ref{sxn:gapHS}; and since this result\nis of potential practical interest in cases where modeling the relationship\nbetween data elements as a dot product in a Hilbert space is too\nrestrictive, \\emph{e.g.}, when the data are extremely sparse and\nheavy-tailed.\nFor recent work in machine learning on Banach spaces,\nsee~\\cite{DerLee,MP04,Men02,ZXZ09}.\n\n\n\\subsection{Technical preliminaries}\n\\label{sxn:gapBS_prelim}\n\n\nRecall the definition of a Banach space from Definition~\\ref{def:rad} above.\nWe next state the following form of the Chernoff bound, which we will use in\nthe proof of Lemma~\\ref{lem:banach_ub} below.\n\n\\begin{lemma} [Chernoff Bound]\n\\label{lem:chernoff}\nLet $X_1, \\dots, X_n$ be discrete independent random variables such that\n$\\EE[X_i]=0$ for all $i$ and $|X_i| \\leq 1$ for all $i$.\nLet $X = \\sum_{i=1}^n X_i$, and let $\\sigma^2$ be the variance of $X$.\nThen\n$$\n\\p[|X| \\geq \\lambda \\sigma] \\leq 2e^{-\\lambda^2\/4}\n$$\nfor any $0 \\leq \\lambda \\leq 2\\sigma$.\n\\end{lemma}\n\n\n\\subsection{Bounds on the VC dimension of gap-tolerant classifiers in Banach spaces}\n\\label{sxn:gapBS_vcdim}\n\n\nThe idea underlying our new proof of Lemma~\\ref{lem:vap1} (of\nSection~\\ref{sxn:gapHS_vcdim}, which provides an upper bound on the\nVC dimension of a gap-tolerant classifier in Hilbert spaces) generalizes to\nthe case when the gap is measured in more general Banach spaces.\nWe state the following lemma for a Banach space of type $p$ with type\nconstant $T$.\nRecall, \\emph{e.g.}, that $\\ell_p$ for $p \\geq 1$ is a Banach space of type\n$\\min(2, p)$ and type constant $1$.\n\n\\begin{lemma} [VC Dimension; Upper bound; Banach Space]\n\\label{lem:banach_ub}\nIn a Banach space of type $p$ and type constant $T$,\nthe VC dimension of a gap-tolerant classifier\nwhose margin is $\\Delta$\nover a ball of radius $R$\ncan be bounded above by\n$ \\left(\\frac{3T R}{\\Delta}\\right)^{\\frac{p}{p-1}} + 64$.\n\\end{lemma}
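\nSince a Hilbert space has type $2$ with type constant $T = 1$, Lemma~\\ref{lem:banach_ub} specializes there to $9R^2\/\\Delta^2 + 64$; as expected, this is weaker than the bound of Lemma~\\ref{lem:vap1}, which is the price paid for the greater generality of the argument below.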
\\begin{proof}\nSince a general Banach space does not possess an inner product, the proof\nof Lemma~\\ref{lem:vap1} needs to be modified here.\nTo circumvent this difficulty, we use Inequality~(\\ref{ineq:rad})\ndetermining the Rademacher type of $\\mathcal{B}$.\nThis, while permitting greater generality, provides weaker bounds than\npreviously obtained in the Euclidean case.\nNote that if $\\mu :=\\frac{1}{n}\\sum_{i=1}^n x_i$, then by repeated application of the\ntriangle inequality,\n\\begin{eqnarray*}\n \\|x_i -\n\\mu\\| & \\leq & (1-\\frac{1}{n})\\|x_i\\| + \\sum_{j\\neq i} \\frac{\\|x_j\\|}{n}\\\\\n& < & 2 \\sup_i \\|x_i\\|.\n\\end{eqnarray*}\nThis shows that if we start with $x_1, \\dots, x_n$ having norm $\\leq\nR$, $\\|x_i - \\mu\\| \\leq 2R$ for all $i$. The property of being\nshattered by gap-tolerant classifiers is translation invariant.\nThen, for $ \\emptyset \\subsetneq S \\subsetneq [n]$, it can be\nverified that\n\\begin{eqnarray}\n\\nonumber\n2\\Delta\n &\\leq& \\left\\| \\frac{\\sum_{i \\in S} (x_i-\\mu)}{|S|} - \\frac{\\sum_{i \\not\\in S} (x_i-\\mu)}{n - |S|} \\right\\| \\\\\n &=& \\frac{n}{2|S|(n-|S|)}\\left\\|\\sum_{i \\in S} (x_i -\\mu) - \\sum_{i \\not\\in S} (x_i - \\mu)\\right\\| .\n\\label{eleven}\n\\end{eqnarray}\nThe Rademacher Inequality states that\n\\beq\\label{ineq:rad}\\EE_\\epsilon[\\|\\sum_{i=1}^n \\epsilon_i x_i\\|^p]\n\\leq T^p \\sum_{i=1}^n \\|x_i\\|^p.\\eeq Using the version of Chernoff's\nbound in Lemma~\\ref{lem:chernoff},\n\\beq \\label{twelve}\n\\p[|\\sum_{i=1}^n \\epsilon_i| \\leq \\lambda \\sqrt{n}] \\geq 1- 2\ne^{-\\lambda^2\/4}. \\eeq We shall denote the above event by\n$E_{\\lambda}$. Now, let $x_1, \\dots, x_n$ be $n$ points in\n$\\mathcal{B}$ with a norm less than or equal to $R$. Let $\\mu =\n\\frac{\\sum_{i=1}^n x_i}{n}$ as before.\n\\begin{eqnarray*}\n 2^pT^p n R^p & \\geq & 2^pT^p \\sum_{i=1}^n \\|x_i\\|^p\\\\\n & \\geq & T^p \\sum_{i=1}^n \\|x_i - \\mu\\|^p\\\\\n & \\geq & \\EE_\\epsilon [\\|\\sum_{i=1}^n \\epsilon_i (x_i-\\mu)\\|^p]\\\\\n & \\geq & \\EE_\\epsilon [\\|\\sum_{i=1}^n \\epsilon_i (x_i-\\mu)\\|^p \\,|\\, E_{\\lambda}]\\,\\,\\p[E_{\\lambda}]\\\\\n & \\geq & (n - \\lambda^2)^p\\Delta^p (1 - 2\n e^{-\\lambda^2\/4}) .\n\\end{eqnarray*}\nThe last inequality follows from (\\ref{eleven}) and (\\ref{twelve}).\nWe infer from the preceding sequence of inequalities that\n$$n^{p-1} \\leq 2^p T^p \\left(\\frac{R}{\\Delta}\\right)^p\n\\left\\{(1-\\frac{\\lambda^2}{n})^p\n(1-2e^{-\\lambda^2\/4})\\right\\}^{-1}.$$ The above is true for any\n$\\lambda \\in (0, 2\\sqrt{n})$, by the conditions in the Chernoff\nbound stated in Lemma~\\ref{lem:chernoff}. 
If $n \\geq 64$, choosing\n$\\lambda$ equal to $8$ gives us $n^{p-1} \\leq 3^p T^p\n\\left(\\frac{R}{\\Delta}\\right)^p.$ Therefore, it is always true that\n$n \\leq \\left(\\frac{3TR}{\\Delta}\\right)^\\frac{p}{p-1} + 64.$\n\\end{proof}\n\n\nFinally, for completeness, we next state a lower bound for the VC dimension of\ngap-tolerant classifiers when the margin is measured in a norm that is\nassociated with a Banach space of type $p \\in (1, 2]$.\nSince we are interested only in a lower bound, we consider the special case\nof $\\ell_p^n$.\nNote that this argument does not immediately generalize to Banach spaces of\nhigher type because for $p >2$, $\\ell_p$ has type $2$.\n\n\\begin{lemma} [VC Dimension; Lower Bound; Banach Space]\n\\label{lem:banach_lb}\nFor each $p \\in (1, 2]$, there\nexists a Banach space of type $p$ such that the VC dimension of gap-tolerant\nclassifiers with gap $\\Delta$ over a ball of radius $R$ is\ngreater than or equal to\n$$\\left(\\frac{R}{\\Delta}\\right)^{\\frac{p}{p-1}}.$$ Further, this\nbound is achieved when the space is $\\ell_p$.\n\\end{lemma}\n\\begin{proof}\nWe shall show that the first $n$ vectors of the canonical basis, each of\nunit norm, can be shattered using gap-tolerant classifiers,\nwhere $\\Delta = n^{\\frac{1-p}{p}}$. Therefore, in this case, the VC\ndimension is $\\geq (\\frac{R}{\\Delta})^\\frac{p}{p-1}$. Let $e_j$ be\nthe $j^{th}$ basis vector. In order to prove that the set $\\{e_1,\n\\dots, e_n\\}$ is shattered, due to symmetry under permutations, it\nsuffices to prove that for each $k$, $\\{e_1, \\dots, e_k\\}$ can be\nseparated from $\\{e_{k+1}, \\dots, e_{n}\\}$ using a gap-tolerant\nclassifier. Points in $\\ell_p$ are infinite sequences $(x_1, \\dots\n)$ of finite $\\ell_p$ norm. Consider the hyperplane $H$ defined by\n$\\sum_{i=1}^k x_i - \\sum_{i=k+1}^n x_i = 0$. Clearly, it separates\nthe sets in question. We may assume $e_j$ to be $e_1$, replacing,\nif necessary, $k$ by $n-k$. Let $x \\in H$ be a point achieving\n$\\inf_{y \\in H} \\|e_1 - y\\|_p$.\nClearly, all coordinates $x_{n+1}, \\dots $ of $x$ are $0$. In order\nto get a lower bound on the $\\ell_p$\ndistance, we use the power-mean inequality:\nif $p \\geq 1$, and $x_1, \\dots, x_n \\in \\mathbb{R}$,\n$$\\left(\\frac{\\sum_{i=1}^n |x_i|^p}{n}\\right)^{\\frac{1}{p}} \\geq\n\\frac{\\sum _{i=1}^n |x_i|}{n}.$$ This implies that\n\\begin{eqnarray*}\n\\|e_1 -x\\|_p\n & \\geq & n^{\\frac{1-p}{p}} \\|e_1 - x\\|_1 \\\\\n & = & n^{\\frac{1-p}{p}}\\left(|1-x_1| + \\sum_{i=2}^n |x_i|\\right) \\\\\n & \\geq & n^{\\frac{1-p}{p}}\\left(1 - \\sum_{i=1}^k x_i + \\sum_{i=k+1}^n x_i\\right) \\\\\n & = & n^{\\frac{1-p}{p}} .\n\\end{eqnarray*}\nFor $p>2$, the type of $\\ell_p$ is $2$~\\cite{LedouxTalagrand}. Since\n$\\frac{p}{p-1}$ is a decreasing function of $p$ in this regime, we\ndo not recover any useful bounds.\n\\end{proof}
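\nObserve that for $p = 2$, Lemma~\\ref{lem:banach_lb} gives a lower bound of $\\left(R\/\\Delta\\right)^2$, matching the upper bound of Lemma~\\ref{lem:vap1} up to an additive constant; thus the exponent $\\frac{p}{p-1}$ appearing in Lemma~\\ref{lem:banach_ub} cannot be improved.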
\n\n\n\\subsection{Bound on the annealed entropy of gap-tolerant classifiers in Banach spaces}\n\\label{sxn:gapBS_annent}\n\nThe following theorem is our main result regarding an upper bound for\nthe annealed entropy of gap-tolerant classifiers in Banach spaces.\nNote that the $\\ell_2$ bound provided by this theorem is slightly weaker\nthan that provided by Theorem~\\ref{thm:ddb_HS}.\nNote also that it may seem counter-intuitive that in the case of $\\ell_2$\n(\\emph{i.e.}, when we set $\\gamma = 2$), the dependence on $\\Delta$ is\n$\\Delta^{-1}$, which is weaker than in the VC bound, where it is\n$\\Delta^{-2}$.\nThe explanation is that the bound on annealed entropy here depends on the\nnumber of samples $\\ell$, while the VC dimension does not.\nTherefore, the weaker dependence on $\\Delta$ is compensated for by a term\nthat in fact tends to $\\infty$ as the number of samples~$\\ell \\ra \\infty$.\n\n\\begin{theorem} [Annealed entropy; Upper bound; Banach Space]\n\\label{thm:ddb_BS}\nLet $\\PP$ be a probability measure on a Banach space $\\BB$ of type $p$ and\ntype constant $T$.\nLet $\\gamma, \\Delta > 0$, and let $\\eta = \\frac{p}{p + \\gamma(p-1)}$.\nIf $\\EE_{\\mathcal{P}} \\|x\\|^\\gamma = r^\\gamma< \\infty$, then\nthe annealed entropy of gap-tolerant classifiers in $\\BB$,\nwhere the gap is $\\Delta$, is\n$$\n\\Hnl(\\ell)\n \\leq\n \\left(\\eta^{-\\eta}(1-\\eta)^{-1+\\eta} \\left(\\frac{\\ell}{\\ln(\\ell+1)}\n \\left(\\frac{3Tr}{\\Delta}\\right)^\\gamma\\right)^\\eta + 64\\right)\n \\ln(\\ell+1) .\n$$\n\\end{theorem}\n\\begin{proof}\nThe proof of this theorem parallels that of Theorem~\\ref{thm:ddb_HS}, except\nthat here we use Lemma~\\ref{lem:banach_ub} instead of Lemma~\\ref{lem:vap1}.\nWe include the full proof for completeness.\nLet $\\ell$ independent, identically distributed (i.i.d.) samples\n$z_1, \\dots, z_\\ell$ be chosen from $\\mathcal{P}$.\nWe partition them into two classes:\n$$X = \\{x_1, \\dots, x_{\\ell-k}\\} := \\{z_i\\,\\, |\\,\\, \\|z_i\\| > R\\},$$ and\n$$Y = \\{y_1, \\dots, y_k\\} := \\{z_i \\,\\,|\\,\\, \\|z_i\\| \\leq R\\}.$$\nOur objective is to bound from above the annealed entropy\n$\\Hnl(\\ell)=\\ln\\EE[N^\\Lambda(z_1, \\dots, z_\\ell)]$.\nBy Lemma~\\ref{lem:submul}, $N^{\\Lambda}$ is sub-multiplicative.\nTherefore,\n$$N^{\\Lambda}(z_1, \\dots, z_\\ell) \\leq N^{\\Lambda}(x_1, \\dots,\nx_{\\ell-k}) N^{\\Lambda}(y_1, \\dots, y_k).$$ Taking an expectation\nover $\\ell$ i.i.d. samples from $\\mathcal{P}$,\n$$\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)] \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})N^{\\Lambda}(y_1, \\dots,\ny_k)].$$\nNow applying Lemma~\\ref{lem:banach_ub}, we see that\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})(k+1)^{(3TR\/\\Delta)^{\\frac{p}{p-1}}+ 64}] .\n$$\nMoving $(k+1)^{(3TR\/\\Delta)^{\\frac{p}{p-1}}+ 64}$ outside this expression,\n$$\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell -k})](k+1)^{(3TR\/\\Delta)^{\\frac{p}{p-1}}+ 64} .\n$$\nNote that $N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})$ is always bounded above by\n$2^{\\ell-k}$ and that the random variables $\\I[\\|z_i\\| > R]$ are i.i.d.\nLet $\\rho = \\p[\\|z_i\\| > R]$, and note that\n$\\ell-k$ is the sum of $\\ell$ independent Bernoulli variables.\nMoreover, by Markov's inequality,\n$$\n\\p[\\|z_i\\|>R] \\,\\, \\leq \\,\\, 
\\frac{\\EE[\\|z_i\\|^\\gamma]}{R^\\gamma} ,\n$$\nand therefore $\\rho \\leq (\\frac{r}{R})^\\gamma$.\nIn addition,\n $$\\EE[N^{\\Lambda}(x_1, \\dots, x_{\\ell-k})] \\leq \\EE[2^{\\ell-k}].$$\n Let $I[\\cdot]$ denote an indicator variable. $\\EE[2^{\\ell-k}]$ can be written as\n $$\\prod_{i=1}^\\ell \\EE[2^{I[\\|z_i\\|> R]}] = (1+\\rho)^\\ell \\leq\n e^{\\rho \\ell}.$$\nPutting everything together, we see that\n\\beq\n\\label{eqn:jensen2}\n\\EE[N^{\\Lambda}(z_1, \\dots, z_\\ell)]\n \\leq \\exp\\left(\\ell \\left(\\frac{r}{R}\\right)^{\\gamma} + \\ln(k+1)\\left(\\left(\\frac{3T R}{\\Delta}\\right)^{\\frac{p}{p-1}} + 64\\right)\\right) .\n\\eeq\nSetting $\\eta :=\n\\frac{p}{\\gamma(p-1) + p}$ and adjusting $R$ so that $$\\ell\n\\left(\\frac{r}{R}\\right)^\\gamma \\eta^{-1} = (1-\n\\eta)^{-1}\\ln(\\ell+1)\\left(\\frac{3TR}{\\Delta}\\right)^{\\frac{p}{p-1}},$$\nwe see that\n\\begin{eqnarray*}\n\\ell \\left(\\frac{r}{R}\\right)^\\gamma +\n\\ln(\\ell+1)\\left(\\frac{3TR}{\\Delta}\\right)^{\\frac{p}{p-1}}\n &=& \\left(\\ell \\left(\\frac{r}{R}\\right)^\\gamma \\eta^{-1}\\right)^\\eta\n \\left((1- \\eta)^{-1}\\ln(\\ell+1)\n \\left(\\frac{3TR}{\\Delta}\\right)^{\\frac{p}{p-1}}\\right)^{1-\\eta} \\\\\n &=& \\eta^{-\\eta}(1-\\eta)^{-1+\\eta}\n \\left(\\ell\\left(\\frac{3Tr}{\\Delta}\\right)^\\gamma\\right)^\\eta \\left(\\ln(\\ell+1)\\right)^{1-\\eta} .\n\\end{eqnarray*}\nThus, it follows that\n\\begin{eqnarray*}\n\\Hnl(\\ell)\n &=& \\ln \\EE\\left[N^{\\Lambda}(z_1, \\dots, z_\\ell)\\right] \\\\\n &\\leq& \\left(\\eta^{-\\eta}(1-\\eta)^{-1+\\eta}\n \\left(\\frac{\\ell}{\\ln(\\ell+1)}\n \\left(\\frac{3Tr}{\\Delta}\\right)^\\gamma\\right)^\\eta + 64\\right)\n \\ln(\\ell+1) .\n\\end{eqnarray*}\n\\end{proof}
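\nIt is instructive to specialize Theorem~\\ref{thm:ddb_BS} to $\\gamma = p$, in which case $\\eta = \\frac{1}{p}$ and the bound reads\n$$\n\\Hnl(\\ell)\n \\leq \\eta^{-\\eta}(1-\\eta)^{-1+\\eta}\\left(\\frac{3Tr}{\\Delta}\\right)\\ell^{\\frac{1}{p}}\\left(\\ln(\\ell+1)\\right)^{1-\\frac{1}{p}} + 64\\ln(\\ell+1) ;\n$$\ncombined with Theorem~\\ref{thm:Vapnik}, this is the source of the $\\tilde{O}\\left(Tr\\ell^{\\frac{1}{p}}\/\\Delta\\right)$ term in Theorem~\\ref{thm:margin_BS}.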
learning process itself, which\nis what we consider here.\n\\item\nWork by Bousquet and Elisseeff~\\cite{BE02} has focused on establishing\ngeneralization bounds based on stability.\nIt is important to note that their results assume a given algorithm and\nshow how the generalization error changes when the data are changed, so\nthey get generalization results for a given algorithm.\nOur results make no such assumptions about working with a given algorithm.\n\\item\nGurvits~\\cite{Gurvits} has used Rademacher complexities to prove upper bounds\nfor the sample complexity of learning bounded linear functionals on\n$\\ell_p$ balls.\nThe results in that paper can be used to derive an upper bound on the VC\ndimension of gap-tolerant classifiers with margin $\\Delta$ in a ball of\nradius $R$ in a Banach space of Rademacher type $p \\in (1, 2]$.\nConstants were not computed in that paper; therefore, our results do not\nfollow from it.\nMoreover, our paper contains results on distribution-specific bounds, which\nwere not considered there.\nFinally, our paper considers the application of these tools to the\npractically-important settings of spectral kernels and heavy-tailed data\nthat were not considered there.\n\\end{itemize}\n\n\n\\section{Conclusion}\n\\label{sxn:conclusion}\n\nWe have considered two simple machine learning problems motivated by recent\nwork in large-scale data analysis, and we have shown that although\ntraditional distribution-independent methods based on the VC-dimension fail\nto yield nontrivial sample complexity bounds, we can use\ndistribution-dependent methods to obtain dimension-independent learning\nbounds.\nIn both cases, we take advantage of the fact that, although there may be\nindividual data points that are ``outlying,'' in aggregate their effect is\nnot too large.\nDue to the increased popularity of vector space-based methods (as opposed\nto more purely combinatorial methods) in machine learning in recent years,\ncoupled with the continued generation of noisy and poorly-structured data,\nthe tools we have introduced are promising more generally for\nunderstanding the effect of noise and noisy data on popular machine\nlearning tasks.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\\blfootnote{\n \n %\n \n \n \n \n \n \n \n %\n \\hspace{-0.65cm} \n This work is licensed under a Creative Commons \n Attribution 4.0 International License.\n License details:\n \\url{http:\/\/creativecommons.org\/licenses\/by\/4.0\/}.\n}\n\nRecently, commonsense reasoning tasks~\\cite{zellers2018swag,talmor2018commonsenseqa,lin2019comgen} have been proposed to investigate the ability of machines to make acceptable and logical inferences about ordinary scenes in our daily life. Both SWAG~\\cite{zellers2018swag} and CommonsenseQA~\\cite{talmor2018commonsenseqa} present a piece of text (an event description or a question) together with several choices (subsequent events or answers), and the system is asked to choose the correct option based on the context. Different from these two discriminative tasks, CommonGen~\\cite{lin2019comgen} moves to a generation setting. It requires the system to construct a logical sentence based on several concepts related to a specific scenario. \n\n\n\n\nThe task of text generation from given concepts is challenging in two ways. First, the sentence needs to be grammatically sound under the constraint of including the given concepts. Second, the sentence needs to be correct in terms of our common knowledge. 
Existing approaches apply pretrained encoder-decoder models~\\cite{lewis2019bart,bao2020unilmv2} for description construction, and concepts are modeled as constraints to guide the generation process. Sentences generated by these models are fluent; however, the output might violate commonsense. An example is shown in Table~\\ref{comparison}. The model \\emph{BART} generates a sentence with ``guitar sits\", which is incorrect. This demonstrates that the language model itself is not able to determine the rational relationships between concepts. \n\n\n\\begin{table}[!th]\n\\begin{center}\n\\begin{tabular}{l|l|l}\n\\midrule[1.0pt]\n\\emph{Concepts} &front, guitar, microphone, sit &ear, feel, pain, pierce \\\\\n\\midrule[1.0pt]\n\\multirow{2}{*}{\\emph{BART}} &\\underline{guitar sits} in front of a microphone &I can feel the pain in my ears and \\underline{feel} \\\\\n&in the front. &\\underline{the pierce} in my neck from the piercing. \\\\\n\\midrule[0.5pt]\n\\multirow{2}{*}{\\emph{Prototype}} &A singer performed the song standing in &He expresses severe pain as he tries \\\\\n&front of the audiences while playing guitar. &to pierce his hand.\\\\\n\\midrule[0.5pt]\n\\emph{BART+}&A singer sitting in front of the audiences &He expresses severe pain as he \\\\\n\\emph{Prototype}&while playing guitar. &pierces his ear.\\\\\n\\midrule[1.0pt]\n\\end{tabular}\n\\end{center}\n\\caption{Example of \\emph{BART}, \\emph{Prototype} and \\emph{BART+Prototype}.}\n\\label{comparison}\n\\end{table}\n\nIn order to enrich the source information and bridge the semantic gap between source and target, we argue that external knowledge related to the scene of the given concepts is needed to determine the relationships between concepts. Motivated by the retrieve-and-generation framework~\\cite{song2016two,hashimoto2018retrieve} for text generation, we retrieve prototypes for concepts from external corpora as scene knowledge and construct sentences by editing prototypes. The prototype introduces scenario knowledge to make up for the shortcomings of the language model in finding reasonable concept combinations. Furthermore, prototypes provide missing key concepts beyond the provided concept set, such as ``singer'' in the first example in Table~\\ref{comparison}, which help complete a natural and coherent scenario.\n\n\n\n\n\n\n\n\n\n\n\nIn order to better utilize the prototypes, we propose two additional modules on top of the pretrained encoder-decoder model with the guidance of given concepts. First, considering that tokens in the prototype make various contributions to the sentence generation, a \\emph{scaling module} is introduced to assign weights to tokens in the prototype. Second, tokens closer to the concept words in prototypes might be more important for scene description generation; therefore, a \\emph{position indicator} is proposed to mark the relative positions of different tokens in the prototypes. The main contributions of this work are threefold. 1) We propose a retrieve-and-edit framework, \\textbf{E}nhanced \\textbf{K}nowledge \\textbf{I}njection \\textbf{BART}, for the task of commonsense generation. 2) We combine the two modules, the scaling module and the prototype position indicator, to better utilize the scenario knowledge of the prototype. 
3) We conduct experiments on the CommonGen benchmark, and results show that our method achieves significant improvements when using both in-domain and out-of-domain plain-text datasets as external knowledge sources.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Model}\n\nIn this section, we introduce our retrieve-and-generation framework \\emph{EKI-BART}, denoted as $G_{\\theta}$ with parameters $\\theta$, which retrieves a prototype $\\mathcal{O}=(o_{1},o_{2},\\cdots,o_{n_{o}})$ from an external text knowledge corpus and extracts the prototype knowledge under the guidance of the concepts $\\mathcal{C}=(c_{1},\\cdots,c_{n_{c}})$ to improve the commonsense generation of the target $\\mathcal{T}=(t_{1},\\cdots,t_{n_{t}})$.\nThe overall framework of our proposed model is shown in Figure~\\ref{framework}.\n\n\\subsection{Pretrained Encoder-Decoder}\nThe pretrained encoder-decoder model \\emph{BART}~\\cite{lewis2019bart} follows the transformer architecture. Several encoder layers are stacked to form the encoder, and each of them is composed of a self-attention network and a feed-forward network. The input sequence is encoded into a hidden state sequence $\\mathcal{H}^{e}=(h^{e}_{1},\\cdots,h^{e}_{n_{h}})$. The decoder is likewise a stack of decoder layers, and the key difference between an encoder layer and a decoder layer is that there exists an encoder-decoder-attention module between the self-attention and the feed-forward network. In each encoder-decoder-attention module, the decoder representation $h^{d}_{u}$ attends to $\\mathcal{H}^{e}$ following Equation~\\ref{raw-decoder-encoder-attention}.\n\\begin{align}\n &s_{x}(h^{d}_{u}, h^{e}_{v})=(W_{x,q}h^{d}_{u})^{T}(W_{x,k}h^{e}_{v})\\big\/\\sqrt{d_k} \\nonumber\\\\\n &a_{x}=softmax\\big(s_{x}(h^{d}_{u},h^{e}_{1}),\\cdots,s_{x}(h^{d}_{u},h^{e}_{n_{h}})\\big) \\nonumber \\\\\n &\\hat{h}^{d}_{u}=W_{o}\\big[W_{1,v}\\mathcal{H}^{e}a_{1},\\cdots,W_{X,v}\\mathcal{H}^{e}a_{X}\\big] \\nonumber \\\\\n &h^{d}_{u}=LN\\big(h^{d}_{u}+\\hat{h}^{d}_{u}\\big)\n ~\\label{raw-decoder-encoder-attention}\n\\end{align}\nwhere $x$ denotes the $x$th attention head, $\\{W_{x,q},W_{x,k},W_{x,v}\\}\\in \\mathbb{R}^{d_{k}\\times d}$ are the trainable parameters for query, key and value, $d$ is the hidden size, $d_k$ is the attention head dimension, and $LN$ is the layernorm function. Generally, there is a normalization operation before we get the encoder output; in other words, the correlation between $h^{e}_{v}$ and $h^{d}_{u}$ mainly depends on the directions of $h^{e}_{v}$ and $h^{d}_{u}$.\n\n\\subsection{Model Input}\nFollowing the input setting of \\emph{BART}, we concatenate the provided concepts $\\mathcal{C}$ and the retrieved prototype $\\mathcal{O}$ as a whole input $\\mathcal{S}$ to feed into the pretrained model.\n\\begin{gather}\n \\mathcal{S}=\\big[\\mathcal{C},\\mathcal{O}\\big]=\\big[c_{1},\\cdots,c_{n_{c}},o_{1},\\cdots,o_{n_{o}}\\big]\n\\end{gather}\nwhere $\\big[\\cdot,\\cdot\\big]$ is the concatenation operation.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width = 4.5in]{images\/framework.png}\n\t\\caption{The framework of our proposed \\emph{EKI-BART}. $E_{B}$, $\\mathit{E}_{\\mathcal{C}}$, $\\mathit{E}_{\\mathcal{O}}$ and $\\mathit{E}_{D}$ are the embedding functions of the \\emph{BART} model, the concepts $\\mathcal{C}$, the prototype $\\mathcal{O}$ and the distances of the prototype position indicator, respectively. $s_{v}$ and $h^{e}_{v}$ are the $v$th input token and the corresponding \\emph{BART} encoder output. 
$\\mathcal{L}_{E}$ and $\\mathcal{L}_{D}$ are the classification loss and the log-likelihood loss, respectively. Refer to Table~\\ref{comparison} for the example in the framework.}\n\t\\label{framework}\n\\end{figure}\n\nIn our retrieve-and-generation framework, we need to modify the prototype $\\mathcal{O}$ to meet the requirements of $\\mathcal{C}$. To distinguish each token from $\\mathcal{O}$ or $\\mathcal{C}$, we add a group embedding on top of the original \\emph{BART} embedding function, as Equation~\\ref{EMB} shows.\n\\begin{align}\n \\mathit{E}(c_{j})=\\mathit{E}_{B}(c_{j})+\\mathit{E}_{\\mathcal{C}},\\\n \\mathit{E}(o_{k})=\\mathit{E}_{B}(o_{k})+\\mathit{E}_{\\mathcal{O}} \\label{EMB}\n\\end{align}\nwhere $\\mathit{E}_{B}$ stands for the original embedding function in \\emph{BART} including token embedding and position embedding, $\\mathit{E}_{\\mathcal{C}}$ and $\\mathit{E}_{\\mathcal{O}}$ are two group embeddings for the concepts $\\mathcal{C}$ and the prototype ${\\mathcal{O}}$, and $\\mathit{E}$ is the final embedding function. \n\n\\subsection{Generation}\nThe prototype $\\mathcal{O}$ not only introduces scenario bias and effective additional concepts, but also brings noise into the generation. In order to inject the retrieved knowledge into the generation more effectively, we argue for extracting the scenario knowledge of the prototype in a more fine-grained manner. From Equation~\\ref{raw-decoder-encoder-attention}, we can see that each token in $\\mathcal{S}$ gets involved in the encoder-decoder-attention through the encoder output $h^{e}_{v}$; thus, we propose two mechanisms, namely the scaling module and the prototype position indicator, to improve the generation.\n\n\n\\subsubsection{Encoder with Scaling Module}\nWe observe that noise tokens and concept tokens both appear in the retrieved prototype, and this noise would dominate the generation.\nThe simplest solution is to utilize a hard mask, in other words, to only keep the concept tokens in the prototype and mask the others, but then the decoder would no longer be aware of the complete prototype scenario, and the effective additional concepts would also be unavailable. Instead of hard masking, we propose a scaling module that assigns a scaling factor to each input token, which is applied in the encoder-decoder-attention; the model is then capable of receiving less noise and learning more from effective tokens.\n\nWe investigate the dot-product-based attention mechanism shown in Equation~\\ref{raw-decoder-encoder-attention}. Function $\\mathit{F}$ with a scaling factor $\\lambda$ on top of the normalized encoder output states $\\mathcal{H}$ is defined in Equation~\\ref{scale1},\n\\begin{align}\n F(\\lambda)&=S(h^{d}_{u}, \\lambda h^{e}_{v})=\\lambda \\Big(\\big(W_{q}h^{d}_{u}\\big)^{T}\\big(W_{k}h^{e}_{v}\\big)\\big\/\\sqrt{d_k}\\Big)=\\lambda S(h^{d}_{u}, h^{e}_{v})=\\lambda F(1) ~\\label{scale1}\n\\end{align}\nFrom Equation~\\ref{scale1}, we can see that when $\\big(W_{q}h^{d}_{u}\\big)^{T}\\big(W_{k}h^{e}_{v}\\big)$ is a large positive value, or $h^{e}_{v}$ takes important attention weights in $h^{d}_{u}$, then $F(\\lambda)$ is a monotonically increasing function of $\\lambda$. This inspires us to refine the representation of $h^{e}_{v}$ through $\\lambda$. Viewing $\\lambda$ as an importance factor, we are able to weaken\/strengthen $h^{e}_{v}$ in the encoder-decoder-attention through decreasing\/increasing $\\lambda$. \n\n\n\n\n\n\n\nWith the awareness of the phenomenon in Equation~\\ref{scale1}, we devise a scaling module on the basis of Equation~\\ref{raw-decoder-encoder-attention}. 
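\nThis linearity can be checked numerically. In the following minimal NumPy sketch, the randomly drawn matrices stand in for the trained projections of a single attention head, so all names are illustrative only:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nd, d_k = 8, 4\nW_q = rng.normal(size=(d_k, d))  # stand-in query projection\nW_k = rng.normal(size=(d_k, d))  # stand-in key projection\nh_d = rng.normal(size=d)         # decoder state h^d_u\nh_e = rng.normal(size=d)         # encoder state h^e_v\n\ndef score(h_dec, h_enc):\n    # dot-product attention score of one head, cf. Equation (scale1)\n    return (W_q @ h_dec) @ (W_k @ h_enc) \/ np.sqrt(d_k)\n\nlam = 0.3\n# F(lambda) = S(h_d, lambda h_e) = lambda F(1)\nassert np.isclose(score(h_d, lam * h_e), lam * score(h_d, h_e))\n\\end{verbatim}\n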
\nIn practice, we attach a scaling module to the encoder, which can increase the norm if $h^{e}_{v}$ is likely to contribute to the generation and decrease it when $h^{e}_{v}$ conflicts with the concepts. Each channel of $h^{e}_{v}$ is taken into account separately. This is accomplished with the following scaling module:\n\\begin{gather}\n \\Lambda=\\mathop{Sigmoid}\\Big(W_{2}ReLU\\big(W_{1}h^{e}_{v}+b_{1}\\big)+b_{2}\\Big) \\nonumber \\\\\n h^{e}_{v}=h^{e}_{v}\\odot\\big(2\\times\\Lambda\\big) \\label{scale-output}\n\\end{gather}\nwhere $W_{1}\\in\\mathbb{R}^{d_{s}\\times d},W_{2}\\in\\mathbb{R}^{d\\times d_{s}},b_{1}\\in\\mathbb{R}^{d_{s}},b_{2}\\in\\mathbb{R}^{d}$ are trainable parameters in the scaling module.\n\nSince the parameters of the pretrained encoder-decoder model have already been optimized during pretraining, simply adding the parameter $\\Lambda$ may destroy the distribution of the encoder output $\\mathcal{H}$ and lead to training failure. We therefore initialize the parameters of the scaling module with $N(0,var)$, where $var$ is a small value; the outputs of the sigmoid activation then gather around 0.5, and the factor $2\\times$ makes them fall near 1.0. Thus, at the beginning of training, the participation of the scaling module does not disrupt the pretrained model.\n\nTo our knowledge, prototype tokens that co-occur in $\\mathcal{T}$ should be more important than others for the generation of $\\mathcal{T}$. We hope this prior knowledge can help the model better discriminate the importance of these prototype tokens; thus, we introduce an encoder classification task that requires the scaling module to determine which tokens will appear in the generated sentence. \n\\begin{gather}\n \\mathcal{L}_{E}=-\\sum_{s_{v}\\in\\mathcal{S}}\\Big(\\mathcal{I}_{\\{s_{v}\\in\\mathcal{T}\\}}\\log\\mathop{Mean}(\\Lambda_{v})+\\mathcal{I}_{\\{s_{v}\\notin\\mathcal{T}\\}}\\log\\big(1-\\mathop{Mean}(\\Lambda_{v})\\big)\\Big)\n\\end{gather}\nwhere $Mean$ returns the mean value and $\\mathcal{I}$ is the indicator function, i.e., $\\mathcal{I}_{\\{s_{v}\\in\\mathcal{T}\\}}=1$ if $s_{v}\\in\\mathcal{T}$ and 0 otherwise, and similarly for $\\mathcal{I}_{\\{s_{v}\\notin\\mathcal{T}\\}}$.\n\n\\subsubsection{Decoder with Prototype Position Indicator}\nThe tokens surrounding the concept tokens in the prototype $\\mathcal{O}$ tend to describe how these concepts interact with the complete scenario. We argue that informing the decoder of these relative positions helps the decoder better learn the effective scenario bias of the prototype $\\mathcal{O}$.\n\n\n\nBefore the computation of the encoder-decoder-attention, we devise a position indicator function to assign positions to the tokens in the prototype. First, we assign virtual positions to the tokens in the prototype $\\mathcal{O}$ in sequence, from 1 to $n_{o}$. Second, we pick the positions of the concept tokens in the prototype as multiple position centers. Third, for each token $o_{v}\\in\\mathcal{O}$, we compute the smallest distance from $o_{v}$ to those concept tokens. The process is shown in Equation~\\ref{dist_compute}.\n\\begin{gather}\n \\mathit{D}(s_{v})=min\\big\\{|v-p|,s_{p}=c,s_{p}\\in\\mathcal{O},c\\in \\mathcal{C}\\big\\}\\label{dist_compute}\n\\end{gather}\n\nOur input tokens are composed of prototype tokens and concept tokens. 
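\nTo make the indicator concrete, the following minimal Python sketch implements Equation~\\ref{dist_compute} together with the concept-token adjustment of Equation~\\ref{all_dist_compute} introduced next. The toy tokens are illustrative, and we assume, as the retrieval step suggests in practice, that at least one concept token occurs in the prototype:\n\\begin{verbatim}\ndef position_indicator(concepts, prototype):\n    # positions (1..n_o) of concept-token occurrences in the prototype\n    centers = [v for v, tok in enumerate(prototype, start=1)\n               if tok in set(concepts)]\n    if not centers:  # edge case not covered by the equations (assumption)\n        return [0] * len(concepts) + [1] * len(prototype)\n    d_concepts = [0] * len(concepts)   # concept tokens get the default 0\n    d_prototype = [min(abs(v - p) for p in centers) + 1\n                   for v in range(1, len(prototype) + 1)]\n    return d_concepts + d_prototype\n\nprint(position_indicator(\n    ['front', 'guitar'],\n    ['playing', 'guitar', 'in', 'front', 'of', 'the', 'audience']))\n# -> [0, 0, 2, 1, 2, 1, 2, 3, 4]\n\\end{verbatim}\n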
Considering the particularity of the concept words $\\mathcal{C}$, we assign them a default position value of 0 and adjust the position indicator function of the prototype tokens by adding one; the process is shown in Equation~\\ref{all_dist_compute}.\n\\begin{gather}\n\\mathit{D}(s_{v})=\\left\\{\\begin{array}{ll}\n \\mathit{D}(s_{v})+1 & s_{v}\\in \\mathcal{O} \\\\\n 0 & s_{v} \\in \\mathcal{C} \n\\end{array} \\right.\\label{all_dist_compute}\n\\end{gather}\n\nOn the basis of the prototype position indicator function in Equation~\\ref{all_dist_compute}, we add the information of the relative position from each token to the closest concept token in the prototype into the encoder-decoder-attention through Equation~\\ref{new-encoder-decoder-attention}.\n\\begin{gather}\n \\mathit{ED}(h^{e}_{v})=\\mathit{E}_{\\mathit{D}}\\big(\\mathit{D}(s_{v})\\big)\\nonumber \\\\\n S(h^{d}_{u}, h^{e}_{v})=\\big(W_{q}h^{d}_{u}\\big)^{T}\\big(W_{k}h^{e}_{v}+\\mathit{ED}(h^{e}_{v})\\big)\\big\/\\sqrt{d_{k}}\n ~\\label{new-encoder-decoder-attention}\n\\end{gather}\nwhere $\\mathit{E}_{\\mathit{D}}$ is the embedding for the distance values in $\\mathit{D}$. \nPrototype tokens that are closer to the concept tokens are expected to receive more attention than other tokens.\n\n\\subsection{Training}\nThe objective of our model is to maximize the log-likelihood for $\\mathcal{T}$ given $\\mathcal{O}$ and $\\mathcal{C}$.\n\\begin{gather}\n \\mathcal{L}_{D}=-\\mathop{log}\\sum_{k}P(t_{k}|\\mathcal{O},\\mathcal{C},t_{ B\n\\end{cases}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$ A B. \n\t\tD\\_INCLUSIVE\\_BEFORE A B =}}\\\\[-1\\jot]\n\t\t& {\\texttt{ ($\\lambda$s. if A s $\\leq$ B s then A s}}\\\\[-1\\jot] \n\t\t&{\\texttt{ else PosInf)\n}}\\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\vspace{20pt}\n\\end{table}\n\n\\begin{figure}[!b]\n\\centering\n\\subfigure[OR]{\n \\makebox[0.2\\textwidth]{\n\\includegraphics[scale=0.55]{OR3.jpg}\n}}\n\\subfigure[AND]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{AND3.jpg}}}\n\\subfigure[FDEP]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{FDEP3.jpg}}}\n\\subfigure[PAND]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{PAND3.jpg}}}\n\\subfigure[Spare]{\n \\makebox[0.18\\textwidth]{\n\\includegraphics[scale=0.55]{SPARE3.jpg}}}\n\\caption{Fault Tree Gates}\n\\label{fig:DFT_Gates}\n\\end{figure}\n\n\n\\indent In \\cite{Merle-thesis}, the DFT gates, shown in Figure~\\ref{fig:DFT_Gates}, are modeled based on the time of failure of their output. For instance, the Functional DEPendency (FDEP) gate is used to model failure triggers of system components. The spare gate models spare parts in a system, where the spare ($X$) replaces a main part ($Y$) after its failure. In the general case, the failure distribution of the spare is attenuated by a dormancy factor from the active state. Therefore, in the DFT algebra, two variables are used to distinguish the spare in both its states: active ($X_{a}$) and dormant ($X_{d}$). Table~\\ref{table:gates} lists the definitions of these gates. In~\\cite{elderhalli2019probabilistic}, we provided the HOL formalization of these gates. However, to verify the probability of failure expression given in Table~\\ref{table:gates}, it is required first to define a \\texttt{DFT\\_event} to be used in the probabilistic analysis. This is formally defined as~\\cite{elderhalli2019probabilistic}:\n\n\\begin{defn}\n\\small{\\texttt{$\\vdash\\forall$p X t. 
DFT\\_event p X t = \\{s | X s $\\scriptstyle\\leq$ Normal t\\} $\\cap$ p\\_space p} }\n\\end{defn}\n\n\n\\noindent where \\texttt{p} is a probability space. \\texttt{p\\_space} is a function that returns the space of \\texttt{p}. \\texttt{X} is the time to failure function that can represent inputs and outputs of DFT gates and \\texttt{t} is the time until which we are interested in finding the probability of failure. The type of \\texttt{t} is real, while the time to failure functions are of type \\texttt{extreal} and thus it is required to typecast \\texttt{t} to \\texttt{extreal} using the \\texttt{Normal} function.\nWe verified the probability of failure of all DFT gates based on this event and using their formal definitions, as given in Table~\\ref{table:gates}~\\cite{elderhalli2019probabilistic}.\\\\\n\n\n\\begin{table}[!t]\n\\centering\n\\small\n\\caption{DFT Gates Expressions and Probability of Failure}\n\\label{table:gates}\n\\begin{tabular}{|l|l|l|}\n\\hline\nGate & Mathematical Expression & Probability of Failure \\\\ \\hline \\hline\n {AND} & $\\!\\begin{aligned}[b]\n\t{{ X \\cdot Y = max (X,Y)\n} \n}\n\t\\end{aligned}$ &$\\!\\begin{aligned}[b]\n\t{{ F_{X}(t) \\times F_{Y}(t)\n} \n}\n\t\\end{aligned}$ \\\\ \\hline\n {OR } & $\\!\\begin{aligned}[b]\n\t{{ X + Y = min (X,Y)\n} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[b]\n\t{{ F_{X}(t) + F_{Y}(t) - F_{X}(t) \\times F_{Y}(t)\n} \n}\n\t\\end{aligned}$ \\\\ \\hline\n {PAND} & $\\!\\begin{aligned}[b]\n\t{{ Q_{PAND}= }{ { \n\t\\begin{cases} Y, & X \\leq Y\\\\ + \\infty, & X > Y\n\\end{cases}}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[b]\n\t{{ \\int_{0}^{t}f_{Y}(y)\\ F_{X}(y)\\ dy}}\n\t\\end{aligned}$ \\\\ \\hline\n {FDEP} & $\\!\\begin{aligned}[b]\n\t{{ X + Y = min (X,Y)\n} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[b]\n\t{{ F_{X}(t) + F_{Y}(t) - F_{X}(t) \\times F_{Y}(t) }}\n\t\\end{aligned}$ \\\\ \\hline\nSpare & $\\!\\begin{aligned}[c]\n\t Q_{SP} = & Y\\cdot(X_{d} \\lhd Y)+ X_{a}\\cdot(Y \\lhd X_{a}) \\\\ & {+Y\\Delta X_{a}+Y\\Delta X_{d}\n} \n\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t&{ \\int_{0}^{t} \\Big{(}\\int_{v}^{t} f_{(X_{a}|Y=v)} (u) du \\Big{)} f_{Y}(v) dv +}\\\\[-2\\jot]&{ { \\int_{0}^{t} f_{Y}(u) F_{X_{d}}(u) du}} \n\t\\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[scale=0.55]{drive_by_wire2.jpg}\n\\caption{DFT of Drive-by-wire System}\n\\label{fig:dbw_dft}\n\\end{figure}\n\n\\indent As an example, we provide the details of analyzing the DFT of a drive-by-wire system (DBW) \\cite{altby2014design}, shown in Figure~\\ref{fig:dbw_dft}, to explain the required steps to use our formalized algebra. This system is used in modern vehicles to control its functionality using a computerized controller. We provide the reliability model of the brake and throttle subsystems. The throttle system fails due to the failure of the throttle ($TF$) or the engine ($EF$). The brake control unit ($BCU$) failure leads to the failure of this system. A spare gate is used to model the failure of a primary control unit ($PC$) with a warm spare ($SC$). 
Finally, the system can fail due to the failure of the throttle sensor ($TS$) or the brake sensor ($BS$).\\\\\n\n\n\\indent To formally conduct the analysis using our formalization, it is required first to express the function of the top event algebraically as:\n\n\\begin{minipage}{\\textwidth}\n\\center{{\\texttt{Q\\textsubscript{DBW} = (TF + EF) + BCU + WSP\\ PC\\ SC\\textsubscript{a}\\ SC\\textsubscript{d} + (TS + BS)}}}\\\\\n\\end{minipage}\n\\emph{}\\\\\n\n\\noindent Then, we create a \\texttt{DFT\\_event} for \\texttt{Q\\textsubscript{DBW}} as: \\texttt{DFT\\_event p Q\\textsubscript{DBW} t}, and verify that it equals the union of the individual DFT events, i.e.:\\\\\n\n\\noindent {\\texttt{DFT\\_event p TF t $\\cup$ DFT\\_event p EF t $\\cup$ DFT\\_event p BCU t $\\cup$ DFT\\_event p (WSP PC SC\\textsubscript{a} SC\\textsubscript{d}) t $\\cup$ DFT\\_event p TS t $\\cup$ DFT\\_event p BS t}}\\\\\n\n\n\\noindent Thus, we can use the probabilistic principle of inclusion and exclusion (PIE) \\cite{Merle-thesis} to verify the probability of failure of \\texttt{Q\\textsubscript{DBW}}. The probabilistic PIE expresses the probability of the union of events as the continuous summation and subtraction of the probabilities of combinations of intersection of events. \nThe DBW example is represented as the union of six events, therefore, applying the probabilistic PIE results in having 63 different terms in the final expression. We verify the probability of failure of the DBW as: \\\\\n\n\\begin{thm}\n\\label{thm:PROB_DBW}\n\\emph{}\\\\\n\\mbox{\\small{\\texttt{$\\vdash$ $\\forall$BS TS BCU PC SC$_{\\texttt{a}}$ SC$_{\\texttt{d}}$ EF TF p t f$_{\\texttt{PC}}$ f\\textsubscript{(SC\\textsubscript{a}|PC)} f\\textsubscript{SC\\textsubscript{a}PC}. 0 $\\leq$ t $\\wedge$ }}}\\\\\n\\mbox{\\small{\\texttt{dbw\\_event\\_req [BS; TS; BCU; PC; SC\\textsubscript{a}; SC\\textsubscript{d}; EF; TF] p t f$_{\\texttt{PC}}$ f\\textsubscript{(SC\\textsubscript{a}|PC)} f\\textsubscript{SC\\textsubscript{a}PC} $\\Rightarrow$}}}\\\\\n\\mbox{\\small{\\texttt{$\\bigg($prob p~(DFT\\_event p Q\\textsubscript{DBW} t) = }}}\\\\\n $\\!\\begin{aligned}[c]\n&\\small{\\texttt{F\\textsubscript{TF}(t)+F\\textsubscript{EF}(t)+F\\textsubscript{BCU}(t)+$\\bigg[\\int_{0}^{t}$f\\textsubscript{PC}(pc)$\\times\\big(\\int_{pc}^{t}$f\\textsubscript{(SC\\textsubscript{a}|PC=pc)}(sc\\textsubscript{a}) $d$sc\\textsubscript{a}$\\big)d$pc$\\bigg]$+F\\textsubscript{BS}(t)+F\\textsubscript{TS}}}\\\\[-2pt]\n&\\small{\\texttt{-...+...- F\\textsubscript{TF}(t)$\\times$F\\textsubscript{EF}(t)$\\times$F\\textsubscript{BCU}(t)$\\times$F\\textsubscript{BS}(t)$\\times$F\\textsubscript{TS}(t)$\\times$}} \\\\[-2pt]\n&\\small{\\texttt{$\\bigg[\\bigg(\\int_{0}^{t}$ f\\textsubscript{PC}(pc)$\\times\\bigg($ $\\int_{pc}^{t}$f\\textsubscript{(SC\\textsubscript{a}|PC=pc)}(sc\\textsubscript{a})$d$sc\\textsubscript{a}$\\bigg)d$pc$\\bigg)$+$\\int_{0}^{t}$f\\textsubscript{PC}(pc)$\\times$F\\textsubscript{SC\\textsubscript{d}}(pc)\\ $d$pc$\\bigg]\\bigg)$}}\n\\end{aligned}$ \n\\end{thm}\n\n\\noindent where \\texttt{dbw\\_event\\_req} ensures the required conditions for independence of the events and defines the conditional density functions with their proper conditions \\cite{ifm-short-code}. The first six terms in the conclusion of Theorem \\ref{thm:PROB_DBW} represent the probabilities of the six individual events of the union of the DBW. 
Since there are 63 different terms, we are only showing a part of the theorem; the full version is available at \\cite{ifm-short-code}. The script of the DBW DFT analysis required around 4850 lines of code and 24 man-hours to be developed. \n\n\\section{DRBD Algebra and its HOL Formalization}\n\\label{sec:DRBD-algebra}\n\nDRBDs capture the dynamic dependencies among system components using DRBD constructs, such as the spare and load sharing constructs. The blocks in a DRBD can be connected in series, parallel, series-parallel and parallel-series. \nRecently, we proposed an algebra that allows expressing the structure of a given DRBD based on system blocks \\cite{Yassmeen-DRBDTR}. The reliability of a given system can be expressed using this DRBD algebra. We defined several operators that enable expressing DRBDs of series and parallel configurations and even more complex structures. Furthermore, the defined operators allow modeling a DRBD spare construct to capture the behavior of spares in a system. We provided the HOL formalization of this algebra to ensure its soundness and enable the formal analysis using HOL4. We first formally define a DRBD event that creates the set of time points until which we are interested in finding the reliability~\\cite{Yassmeen-DRBDTR}:\n\n\\begin{defn}\n\\small{\\texttt{$\\vdash\\forall$p X t. DRBD\\_event p X t = \\{s | Normal t < X s\\} $\\cap$ p\\_space p}}\n\\end{defn}\n\n\\noindent where $X$ is the time to failure function of a system component and $t$ is the moment of time until which we are interested in finding the reliability of the system. The probability of this event represents the reliability of the system until time $t$~\\cite{Yassmeen-DRBDTR}:\n\n\n\\begin{defn}\n\\small{\\texttt{$\\vdash\\forall$p X t. Rel p X t = prob p (DRBD\\_event p X t)}}\n\\end{defn}\n\n\\noindent Then, we verify that its probability is related to the CDF~\\cite{Yassmeen-DRBDTR}.\n\n\\begin{table}[t]\n\\caption{Definitions of DRBD Operators}\n\\small\n\\centering\n\\label{table:DRBD-element-operator}\n\\begin{tabular}{|l|l|l|}\n\\hline\nOperator & Mathematical Expression & Formalization \\\\ \\hline \\hline\n{ {\\texttt{AND}}}&\n$\\!\\begin{aligned}[b]\n\t{{ X \\cdot Y= min (X ,Y)}}\t\\end{aligned}$& $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_AND X Y =}}\\\\[-1\\jot]\n &\t{\\texttt{($\\lambda$s. min (X s) (Y s))}}\\end{aligned}$ \n \\\\ \\hline\n{ {\\texttt{OR}}}&\n$\\!\\begin{aligned}[b]\n\t{{ X + Y= max (X, Y)}\n}\n\t\\end{aligned}$& $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_OR X Y =}}\\\\[-1\\jot]\n &\t{\\texttt{($\\lambda$s. max (X s) (Y s))\n}}\\end{aligned}$ \n \\\\ \\hline\n\n{ {\\texttt{After}}}&\n$\\!\\begin{aligned}[b]\n\t{{ X \\rhd Y= }{ \n\t\\begin{cases} X, &X > Y\\\\ +\\infty, &X\\leq Y\n\\end{cases}} \n}\n\t\\end{aligned}$& $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_AFTER X Y =}}\\\\[-1\\jot]\n\t\t& {\\texttt{($\\lambda$s. if Y s < X s then X s}}\\\\[-1\\jot]\n &\t{\\texttt{else PosInf)\n}}\\end{aligned}$ \n \\\\ \\hline\n{ {\\texttt{Simultaneous}}}& $\\!\\begin{aligned}[b]\n\t{{ X \\Delta Y= }{ \n\t\\begin{cases} X, &X = Y\\\\ +\\infty, &X\\neq Y\n\\end{cases}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$X Y. \n\t\tR\\_SIMULT X Y =}}\\\\[-1\\jot]\n\t\t& {\\texttt{($\\lambda$s. 
if X s = Y s then X s}}\\\\[-1\\jot]\n &\t{\\texttt{else PosInf)\n}}\\end{aligned}$ \\\\ \\hline\n{ {\\texttt{Inclusive After}}}& $\\!\\begin{aligned}[b]\n\t{{ X \\unrhd Y=}{ \n\t\\begin{cases} X, &X \\geq Y\\\\ +\\infty, &X < Y\n\\end{cases}} \n}\n\t\\end{aligned}$ & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash$ $\\forall$ X Y. \n\t\tR\\_INCLUSIVE\\_AFTER X Y =}}\\\\ \t\t& {\\texttt{($\\lambda$s. if Y s $\\leq$ X s then X s}}\\\\[-1\\jot]\n &\t{\\texttt{else PosInf)\n}}\\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\vspace{20pt}\n\\end{table}\n\n\nWe introduced DRBD identity elements and operators to model both the combinatorial and dynamic behaviors, as listed in Table~\\ref{table:DRBD-element-operator}. The idea is similar to the DFT algebra, where the blocks are modeled based on their time of failure. We need to recall that DRBDs are concerned in modeling the successful behavior, i.e., the ``not failing\" behavior, and thus we can use the time to failure functions to model the behavior of a given DRBD. We defined two identity elements for DRBD that are similar to the DFT elements, i.e., ALWAYS = $0$ and NEVER = $+\\infty$. The DRBD operators are listed in Table~\\ref{table:DRBD-element-operator}. The AND operator ($\\cdot$) models series DRBD blocks, where it is required that all the blocks are working. The output of the AND operator fails with the first failure of any component of its inputs. On the other hand, the OR operator ($+$) models parallel structures, where at least one of the blocks should continue to work to maintain the system functionality. To capture the dynamic behavior, we introduced three temporal operators, i.e., \\textit{After}, \\textit{Simultaneous} and \\textit{Inclusive-after}~\\cite{Yassmeen-DRBDTR}. The after operator ($\\rhd$) models the sequence of events, where the system continues to work as long as one component continues to work after the failure of the other. The simultaneous operator ($\\Delta$) is similar to the one of the DFT algebra, where its output fails when both inputs fail at the same time. Finally, the inclusive-after operator ($\\unrhd$) combines the behavior of both after and simultaneous operators. We provided the HOL formalization of these elements and operators based on lambda abstracted functions and \\texttt{extreal} numbers. The mathematical expressions and the HOL formalization are listed in Table~\\ref{table:DRBD-element-operator}. The reliability expressions of these operators are available at~\\cite{Yassmeen-DRBDTR}.\n\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[scale = 0.8]{spare_DRBD1.jpg}\n\\caption{DRBD Spare Construct}\n\\label{fig:drbd_spare}\n\\end{figure}\n\n\\indent A spare construct, shown in Figure~\\ref{fig:drbd_spare}, is introduced in DRBDs to model spare parts in systems by having spare controllers that activate the spare after the failure of the main part. In Table~\\ref{table:spare_reliability}, $Y$ is the main part and after its failure $X$ is activated. We use two variables ($X_{a}$, $X_{d}$), like the DFT algebra.\n\n\\begin{table}[!t]\n\\centering\n\\small\n\\caption{Mathematical and Reliability Expressions of Spare Constructs}\n\\label{table:spare_reliability}\n\\begin{tabular}{|c|c|}\n\\hline\n \\small{Math. 
Model} & \\small{Reliability}\\\\ \\hline\\hline$ {\n Q_{SP}= (X_{a} \\rhd Y)\\cdot (Y \\rhd X_{d}) }$ & $\\!\\begin{aligned}[t] { R_{SP}(t) =} & {1 - \\int_{0}^{t} \\int_{y}^{t} f_{(X_{a}|Y=y)}(x)\\ f_{Y}(y) dx dy}\\\\& { - \\int_{0}^{t} f_{Y}(y)F_{X_{d}}(y)dy } \\end{aligned}$ \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{table}[!b]\n\\centering\n\\renewcommand{\\arraystretch}{1}\n\\caption{Mathematical Models and Reliability of Series and Parallel Structures}\n\\label{table:series-parallel}\n\\begin{tabular}{ll|l|l|}\n\\cline{3-4}\n & & \\small{Math. Model} & \\small{Reliability} \\\\ \\hline\n\\multicolumn{1}{|l}{$\\!\\begin{aligned}[c] Series \\end{aligned}$} & {\\raisebox{-0.25cm}{$\\!\\begin{aligned}[c]\\includegraphics[scale = 0.45]{series1.jpg}\\end{aligned}$}} & $ \\bigcap_{i=1}^{n}(event\\ (X_{i},\\ t))$ &$ \\prod_{i=1}^{n}R_{X_{i}}(t)$ \\\\ \\hline\n\\multicolumn{1}{|l}{$\\!\\begin{aligned}[b] Parallel \\end{aligned}$} &{\\raisebox{-0.9cm}{ $\\!\\begin{aligned}[c] \\includegraphics[scale = 0.45]{parallel1.jpg}\\end{aligned}$}} & $ \\bigcup_{i=1}^{n}(event\\ (X_{i},\\ t))$ & $ 1- \\prod_{i=1}^{n}(1-R_{X_{i}}(t))$ \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nDRBD blocks can be connected in series, parallel and more nested structures. We provide here the details of only the series and parallel structures, as listed in Table~\\ref{table:series-parallel}. Details about the nested structures can be found in \\cite{Yassmeen-DRBDTR}. The series structure, shown in Table~\\ref{table:series-parallel}, continues to work as long as all the blocks are working. Once one of these blocks stops working, then the entire system stops as well. It can be expressed using the AND operator. Its mathematical model is expressed as the intersection of the individual DRBD events \\cite{hasan2015reliability}. The parallel structure, shown in Table~\\ref{table:series-parallel}, is composed of several blocks that are connected in parallel. Its structure function can be expressed using the OR operator. Its mathematical model is represented using the union of the individual DRBD events. We developed the HOL formalization of these structures and verified their reliability expressions assuming the independence of the individual blocks~\\cite{Yassmeen-DRBDTR}. \n\n\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[scale=0.8]{DBW_DRBD1.jpg}\n\\caption{DRBD of Drive-by-wire System}\n\\label{fig:dbw_drbd}\n\\end{figure}\n\n\nWe demonstrate the applicability of the DRBD algebra in the formal analysis of the DRBD of the DBW system given in Figure~\\ref{fig:dbw_drbd}. This DRBD is a series structure with one spare construct to model the main part $PC$ that is replaced by $SC$ after failure. 
The structure function of the DBW DRBD ($F_{DBW}$) can be expressed as:\n\\begin{equation}\nF_{DBW} = TF \\cdot EF \\cdot BCU \\cdot (SC_{a} \\rhd PC) \\cdot (PC \\rhd SC_{d}) \\cdot TS \\cdot BS\n\\end{equation}\nThen, we verify the reliability of the DBW system as:\\\\\n\n\\begin{thm}\n\\label{thm:Rel_DBW}\n\\textup{\\small{\\texttt{$\\vdash\\forall$p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t.}}}\\\\\n\\mbox{\\textup{\\small{\\texttt{~DBW\\_set\\_req p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t $\\Rightarrow$}}}}\\\\\n\\mbox{\\textup{\\small{\\texttt{~(prob p (DRBD\\_event p F\\textsubscript{DBW} t) =}}}}\\\\\n\\mbox{\\textup{\\small{\\texttt{~~Rel p TF t * Rel p EF t * Rel p BCU t * Rel p (R\\_WSP PC SC\\textsubscript{a} SC\\textsubscript{d}) t *}}}}\\\\ \\mbox{\\textup{\\small{\\texttt{~~Rel p TS t * Rel p BS t})}}}\n\\end{thm}\n\n\\noindent where \\texttt{DBW\\_set\\_req} ascertains the required conditions for the independence of the DBW system blocks~\\cite{Yassmeen-DRBDTR}. The reliability of the spare construct can be further rewritten using the reliability expression of the spare using integrals. The script of the reliability analysis of the DBW DRBD is 150 lines long and required only one hour of work. \\\\\n\n\\section{Integrated Framework for Formal DFT-DRBD Analysis}\n\n\\begin{figure}[!b]\n\\centering\n\\includegraphics[width= \\textwidth]{methodology9.png}\n\\caption{Integrated Framework for Formal DFT-DRBD Analysis using HOL4}\n\\label{fig:methodology}\n\\end{figure}\n\nThe proposed framework integrating DFT and DRBD algebras is depicted in Figure~\\ref{fig:methodology}.\nIt can be utilized to conduct both DFT and DRBD analyses using the HOL formalized algebras and allows formally converting a DFT model into its corresponding DRBD based on the equivalence of both algebras. The analysis starts by a given system description that can be modeled as a DFT or DRBD. Formal models of the given system can be created based on the HOL formalized algebras. The DRBD model can be analyzed as described in Section~\\ref{sec:DRBD-algebra}, where a DRBD event is created and its reliability is verified based on the available verified theorems of DRBD algebra. On the other hand, a DFT model can be analyzed using the formalized DFT algebra, which requires dealing with the probabilistic PIE. Furthermore, the DRBD model can be converted to a DFT to model the failure instead of the success, then this model is analyzed using the DFT algebra. Similarly, the DFT model can be analyzed by converting it to its counterpart DRBD model, which results in an easier process as the PIE is not invoked. \n\nIn order to handle the DFT analysis using DRBD algebra and the DRBD analysis using the DFT algebra, it is required to be able to represent the DRBD of the corresponding DFT gates using the DRBD algebra and vice-versa (the equivalence proof in Figure~\\ref{fig:methodology}). According to \\cite{distefano2007dynamic}, the OR, AND and FDEP gates can be represented using series, parallel and series RBDs, respectively. Therefore, they can be modeled using AND and OR operators, while the spare gate corresponds to the spare construct. Finally, the PAND gate can be expressed using the inclusive after operator ($Y \\unrhd X$). However, we need to formally verify this equivalence to ensure its correctness. 
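\nBefore presenting the formal proofs, the pointwise nature of these correspondences can be illustrated by random testing over sampled times to failure. The following Python sketch is only a lightweight sanity check, not a substitute for the HOL4 proofs; it encodes the algebraic definitions quoted earlier in this report and checks the non-spare cases:\n\\begin{verbatim}\nimport random\n\nPosInf = float('inf')\n\nd_and = lambda x, y: max(x, y)                # DFT AND\nd_or  = lambda x, y: min(x, y)                # DFT OR, also FDEP\np_and = lambda x, y: y if x <= y else PosInf  # DFT PAND\nr_and = lambda x, y: min(x, y)                # DRBD AND\nr_or  = lambda x, y: max(x, y)                # DRBD OR\nr_ia  = lambda x, y: x if y <= x else PosInf  # DRBD Inclusive After\n\nrandom.seed(1)\nfor _ in range(10000):\n    x, y = random.random(), random.random()\n    assert d_and(x, y) == r_or(x, y)\n    assert d_or(x, y) == r_and(x, y)\n    assert p_and(x, y) == r_ia(y, x)\n\\end{verbatim}\n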
In Table~\\ref{table:verify-DRBD-DFT}, we provide the theorems of equivalence of DFT gates and DRBD operators and constructs, where \\texttt{D\\_AND}, \\texttt{D\\_OR}, \\texttt{FDEP}, \\texttt{P\\_AND} and \\texttt{WSP} are the names of the AND, OR, FDEP, PAND and spare DFT gates in our HOL formalization~\\cite{elderhalli2019probabilistic}. \\texttt{R\\_WSP} is the name of the spare DRBD construct in our formalized DRBD \\cite{Yassmeen-DRBDTR} and \\texttt{ALL\\_DISTINCT [Y X\\textsubscript{a} X\\textsubscript{d}]} ensures that the inputs cannot fail at the same time.\n\n\\begin{table}[!t]\n\\centering\n\\small\n\\caption{Verified Equivalence of DFT Gates and DRBD Algebra}\n\\label{table:verify-DRBD-DFT}\n\\begin{tabular}{|c|c|l|}\n\\hline\n\\small{DFT Gate}& \\small{DRBD Operator\/Construct} & \\small{Verified Theorem} \\\\ \\hline \\hline\n\\scriptsize{AND} & {OR} & {\\texttt{$\\vdash\\forall$X Y. D\\_AND X Y = R\\_OR X Y}} \\\\ \\hline\n {OR} & {AND} & {\\texttt{$\\vdash\\forall$X Y. D\\_OR X Y = R\\_AND X Y}} \\\\ \\hline\n {FDEP} & {AND} & {\\texttt{$\\vdash\\forall$X Y. FDEP X Y = R\\_AND X Y}} \\\\ \\hline\n {PAND} & {Inclusive After} & $\\!\\begin{aligned}[c] &{\\texttt{$\\vdash\\forall$X Y. P\\_AND X Y =}}\\\\[-1\\jot]\n &\\texttt{{ R\\_INCLUSIVE\\_AFTER Y X}} \\end{aligned}$ \\\\ \\hline\n {Spare} & {Spare} & $\\!\\begin{aligned}[c]\n\t& {\\texttt{$\\vdash\\forall$X\\textsubscript{a} X\\textsubscript{d} Y.}}\\\\[-1\\jot]\n\t&\\texttt{{($\\forall$s. ALL\\_DISTINCT [Y s;X\\textsubscript{a} s;X\\textsubscript{d} s]) $\\Rightarrow$ }}\\\\[-2\\jot]\n& {\\texttt{(WSP Y X\\textsubscript{a} X\\textsubscript{d} = R\\_WSP Y X\\textsubscript{a} X\\textsubscript{d})}} \\end{aligned}$ \n \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nIn order to use these verified expressions in Table~\\ref{table:verify-DRBD-DFT}, we need to verify that the \\texttt{DRBD\\_event} and the \\texttt{DFT\\_event} possess complementary sets in the probability space. We formally verify this as:\n\n\\begin{thm}\n\\small{\\texttt{$\\vdash\\forall$p X t. prob\\_space p $\\wedge$ (DFT\\_event p X t) $\\in$ events p $\\Rightarrow$}}\\\\\n\\mbox{\\small{\\texttt{(prob p (DRBD\\_event p X t) = 1 - prob p (DFT\\_event p X t))}}}\n\\end{thm} \n\n\\noindent where the conditions ensure that \\texttt{p} is a probability space and that the DFT event belongs to the events of the probability space. This theorem can be verified also if we ensure that the DRBD event belongs to the probability space. This theorem means that for the same time to failure function, the DRBD and DFT events are the complements of each other. This way, we can analyze DFTs using the DRBD algebra and vice-versa. \n\nBased on the verification results obtained in Table~\\ref{table:verify-DRBD-DFT}, DFT gates can be formally represented using DRBDs. We show that the amount of effort required by the verification engineer to formally analyze DFTs by analyzing its counterpart DRBD is less than that of analyzing the original DFT model. In Section~\\ref{sec:DFT-algebra}, a DFT is formally analyzed using the DFT algebra by expressing the DFT event of the structure function as the union of the individual DFT events. Then the probabilistic PIE is utilized to formally verify the probability of failure of the top event. The number of terms in the final result equals $2^n-1$, where $n$ is the number of individual events in the union of the structure function. Therefore, in the verification process, it is required to verify at least $2^n-1$ expressions. 
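\nThis blow-up can be quantified with a few lines of Python. For illustration, assuming six independent basic events with known failure probabilities (the DBW top event is a union of six events), the PIE expansion already contains 63 terms, while the equivalent product form $1-\\prod_{i}(1-p_{i})$ stays closed:\n\\begin{verbatim}\nfrom itertools import combinations\nfrom math import prod\n\ndef union_prob_pie(probs):\n    # union probability of independent events via inclusion-exclusion\n    total, terms = 0.0, 0\n    for k in range(1, len(probs) + 1):\n        for subset in combinations(probs, k):\n            total += (-1) ** (k + 1) * prod(subset)\n            terms += 1\n    return total, terms\n\np, terms = union_prob_pie([0.01] * 6)\nprint(terms)                                   # 63 = 2**6 - 1\nprint(abs(p - (1 - (1 - 0.01) ** 6)) < 1e-12)  # matches the product form\n\\end{verbatim}\n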
On the other hand, verifying a DRBD would require verifying a single expression for each nested structure.\n\nAs an example, consider the reliability analysis of the DBW system. Analyzing the DFT of this system required verifying 63 subgoals, as the top event is composed of the union of six different events, while analyzing the DRBD of the DBW system required verifying only one main subgoal to be manipulated to reach the final goal. Table~\\ref{table:compare} provides a comparison of the size of the script, the required time to develop it and the number of goals to be verified. Based on these observations, analyzing the reliability of the DBW using the DRBD required $1\/24$ of the time needed by the DFT. These results show that it is more convenient to analyze the DRBD of a system rather than its DFT if the algebraic approaches are to be used. The only added step will be to formally verify that the DFT and DRBD are the complements of each other, which is straightforward utilizing the theorems in Table~\\ref{table:verify-DRBD-DFT}. Therefore, we verify this as:\\\\\n\n\\begin{thm}\n\\small{\\texttt{$\\vdash\\forall$p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t. }}\\\\\n\\mbox{\\small{\\texttt{prob\\_space p $\\wedge$ DBW\\_events\\_p p TF EF BCU PC SC\\textsubscript{a} SC\\textsubscript{d} TS BS t $\\Rightarrow$}}}\\\\\n\\mbox{\\small{\\texttt{(prob p (DRBD\\_event p F\\textsubscript{DBW} t) = 1- prob p (DFT\\_event p Q\\textsubscript{DBW} t))}}}\\\\\n\\end{thm}\n\n\\noindent where \\texttt{DBW\\_events\\_p} ensures that the DBW DFT events are in the events of the probability space. Thus, we can use the DRBD reliability expression (Theorem~\\ref{thm:Rel_DBW}) to verify the probability of failure of the DFT, which results in a reduction in the analysis efforts. \n\n\\begin{table}[!t]\n\\centering\n\\caption{Comparison of Formal Analysis Efforts of DBW}\n\\label{table:compare}\n\\begin{tabular}{c|c|c|c|}\n\\cline{2-4}\n & \\small{\\# of subgoals} & \\small{\\# of lines in the script} & \\small{required time} \\\\ \\hline \\hline\n\\multicolumn{1}{|c|}{\\small{DFT}} & \\small{ 63} & \\small{ 4850} & \\small{ 24 hours} \\\\ \\hline\n\\multicolumn{1}{|c|}{\\small{DRBD}} & \\small{ 1 } & \\small{150} & \\small{1 hour } \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Conclusions}\nIn this report, we proposed an integrated framework to enable the multiway formal algebraic analysis of DFTs and DRBDs within a theorem prover. This framework allows transforming DFT and DRBD models into their corresponding DRBD and DFT models, respectively, either to analyze them more effectively using the DRBD algebra or to clearly observe the failure dependencies in the form of a DFT. This requires formally verifying the equivalence of both DFT and DRBD algebras. To illustrate the efficiency and usefulness of the proposed framework, we provided a comparison of the efforts required to analyze a drive-by-wire system, and the results showed that using the DRBD in the analysis instead of the DFT required verifying fewer goals (1:63), a smaller script size (150:4850) and less time (1h:24h).\n\\bibliographystyle{unsrt}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\\subsection{Monitoring Complex Industrial Systems}\n\nThe industry is currently experiencing major changes in the condition monitoring of machines and production plants as data are ever more easily captured and collected. 
This opens many opportunities to rethink the traditional problems of Prognostics and Health Management and to look for data-driven solutions, based on machine learning. Machine learning is being increasingly applied to fault detection, fault diagnostics (usually both combined in a classification task), and to the subsequent steps such as fault mitigation or decision support. One crucial component in the success of learning from the data is the representativeness of the collected dataset used to train the models. Most modern machine learning methods, and deep learning in particular, require for their training not only a large collection of examples to learn from but also a representative sampling of these examples over the input space, or in other words, over the possible operating conditions. In fact, while the interpolation capabilities of machine learning have been acclaimed in many fields, extrapolation remains a challenge. Since most machine learning tools perform local transformations of the data to achieve better separability, it is very difficult to prove that these transformations are still relevant for data that are outside the value range used for training the models or, in our case, for data stemming from different operating conditions.\n\nThis representativeness requirement on the training set is a major constraint for the health monitoring of complex or critical industrial systems, such as passenger transporting vehicles, power plants, or any systems whose failure would lead to dramatic consequences, and this for the following reasons: First, due to the fact that failures of such systems are by nature unacceptable, these systems are reliable by design and preventive maintenance is regularly performed to minimise any risk of a major fault developing. In addition, possible faults, though extremely unlikely, are plentiful, and this prevents the gathering of enough data to perform data-driven fault recognition and classification. Second, the system operating conditions might evolve over very long time scales (e.g.,\\ yearly trends in a power plant operation). Collecting a representative dataset to train a reliable data-driven health monitoring model would require too much time.\n\nThese two limitations, missing faulty patterns to learn from and the need for data representative of all operating conditions, have solutions in the literature, but it is seldom that both problems are considered together. First, instead of learning the faulty patterns, novelty detection methods exist, and have already been successfully applied. Yet, when data representative of all possible operating conditions are lacking, such approaches face the difficult problem of distinguishing between anomalies and a normal evolution of the system that was not observed in the training dataset. On the contrary, in the second case, many works have focused on domain adaptation, that is, either on identifying patterns indicative of faults that are independent of the operating conditions or on their adaptation to new conditions. Such approaches, however, require examples of all possible faults in some operating conditions, in order to then generalise their characteristics to other operating conditions.\n\nAn intuitive approach to this domain adaptation task is to consider several similar systems each with different operating conditions and to learn fault signatures valid across all systems. 
A trivial context is to assume the systems identical in design and in usage, such that a single model can be trained for the whole fleet. But the task becomes more challenging when one of both constraints is relaxed. First, in a fleet from the operator perspective \\parencite{Jin2015}, the units can come from different manufacturers but the units are used similarly. In this case the monitoring of the units might vary due to different sensor equipment and the challenge lies in the transformation of the data to a space independent of the sensing characteristics and technologies. Second, in a fleet from the manufacturer perspective, the units come from the same manufacturer but they are used in different operating conditions or by different operators \\parencite{Leone2017}. In this second case the operating conditions will be the distinguishing elements between units and the challenge is in the alignment of the data, such that faults can be recognisable independently of the operation. Of course, the combination of both is also a possibility and would lead to an even more challenging task. In this paper, we set ourselves in the context of similarly monitored units with different operating conditions, that is, in the fleet from the manufacturer perspective. \n\nFor the monitoring and diagnosis of such fleets, a vast literature exists proposing solutions that can be organised by increasing complexity of the task as follows:\n\\begin{enumerate}\n \\item Identifying some relevant parameters of the units in order to adapt them to each unit or to perform clustering and use the data of each cluster to train the model. \\textcite{Zio2010} compare multi-dimensional measurements, independent of time. \\textcite{lapira2012fault} clusters a fleet of wind turbine based on power versus wind diagrams, pre-selecting diagrams with similar wind regimes. \\textcite{Gonzalez-PriDa2016} propose an entropy inspired index based on availability and productivity to cluster the units. \\textcite{Peysson2019} propose a framework for fleet-wide maintenance with knowledge base architecture, uniting semantic and systemic approach of the fleet.\n \\item The entire time series are used to cluster units together. \\textcite{Leone2016} compare one dimensional time series by computing the euclidean distance between a trajectory and reference trajectories. \\textcite{liu2018cyber} proposes, among other, the use of time machine, clustering time series from a fleet a wind turbine with the DS3 algorithm. \\textcite{al2018framework} cluster nuclear power-plant based on their shut-down transient.\n \\item Model each unit functional behavior and try to identify similar ones. \\textcite{Michau2018b} use the whole set of condition monitoring data to define the similarity. Such approaches do not depend on the length of the observation since the functional relationship is learnt.\n \\item Align the feature space of different units, such as proposed in the present paper.\n\\end{enumerate}\n\nEach level increases the complexity of the solution, but tends to mitigate some of the limitations of the previous one. The main limitations of each of the above described levels are:\n\\begin{enumerate}\n \\item Aggregated parameters do not guarantee that all the relevant conditions have been covered. 
E.g.,\\ \\textcite{lapira2012fault} have to first segment diagrams with similar wind regimes.\n \\item Comparing the distances between datasets is a problem affected by the curse of dimensionality: in high dimensions, the notion of distance loses its traditional meaning \\parencite{Domingos2012}, and the temporal dimension, particularly important when operating conditions evolve, makes this comparison even more challenging. E.g.,\\ \\textcite{al2018framework} restrict themselves to fixed-length transients extracted from the time series.\n \\item Even though such approaches are more robust to variations in the behaviour of the system, sufficient similarity in the operating range is still a strong requirement, which may require large fleets for the approach to be applicable. \n \\item When the alignment is really robust to variations in the operating conditions, it can go to the point that the subsequent condition monitoring model might interpret some degradation of the system as natural and miss the alarms. \n\\end{enumerate}\n\nAligning the feature space in the Prognostics and Health Management field is not a new idea, but it has so far only been applied to diagnosis or Remaining Useful Life estimation, to the best of the authors' knowledge. Such diagnostics problems have been extensively studied with traditional machine learning approaches \\parencite{margolis2011literature}, but also more recently and more specifically with deep learning \\parencite{kouw2019review}. In the context of diagnostics, it is almost always assumed that the labels on the possible faults or on the degradation trajectories exist for some units, which will be used as reference for the alignment and are therefore denoted as \\textit{source} units. The challenge is then to make sure that the models perform as well on the \\textit{target} units for which diagnostics labels were not available in sufficient quantity or not available at all.\n\nMost of the alignment methods follow the same framework: First, features are extracted, engineered or learned so as to maximise the performance of a subsequent classifier trained in the source domain where labels are available. Some works aim at adapting the target features to match the source by means of a transformation \\parencite{fernando2013unsupervised,Xie2016,Zhang2017a}; others combine both alignment and feature learning in a single task. To do so, a min-max problem is solved, to minimise the classification loss in the source domain while maximising the loss of a domain discriminator. For example, \\textcite{Lu2017a} train a neural network such that one intermediate layer (namely the feature or latent space) minimises the Maximum Mean Discrepancy \\parencite{Borgwardt2006} between the source and the target and maximises the detection accuracy of a subsequent fault classifier. Ensuring that the origin of the features cannot be classified encourages a distribution overlap in the feature space.\n\\textcite{Li2018} introduced the use of generative models to transfer faults from one operating condition to another with a two-level deep learning approach. 
Synthetic faulty and healthy samples in different operating conditions are generated in a first step, in order to train, in a second step, a classifier also on operating conditions under which the faults were never actually experienced.\n\nAs target labels are missing in the training set, such approaches are sometimes also denoted as \\textit{Unsupervised Domain Adaptation}, where the adaptation is performed on an unsupervised domain, that is, the target domain \\parencite{fernando2013unsupervised}.\nThe training of the feature extractor and of the classifier is, however, supervised in the source domain.\nRecent results demonstrate that this supervision of the feature extractor training in the source domain through the classifier is essential to the success of these approaches. By making sure that the features can be classified, the classifier greatly constrains the feature space \\parencite{wang2019domain}.\n\nThese approaches cannot be directly applied in the context of unsupervised health monitoring, where anomalous labels and anomalous samples are available neither for the source nor for the target. In our context, as the training uses healthy data only, there is no classification information to back-propagate to the feature extractor. The feature extraction is in fact unsupervised with respect to the health of the system, while the alignment is still required. This setup is thus an even more challenging task which, to the best of our knowledge, has not been solved so far in the literature.\n\nTo solve this task, it seems necessary to constrain the feature space in an unsupervised manner, so as to convey maximal information about the health of the system within the features, as illustrated in Figure~\\ref{fig:FeatAlignt}. We propose three approaches.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=9cm]{Figures\/Feat-Alignt.png}\n\\caption{Feature alignment: (a) Combining healthy features of source and target without alignment leads to wide anomaly detection boundaries and missed anomalies. (b) Healthy feature non-discriminability. Without constraints on the transformation, anomalies might be mixed with healthy features. (c) By imposing non-discriminability and a homothetic transformation, inter-point relationships are kept, ensuring that the initial separability of the anomalies is conserved after the alignment.}\n\\label{fig:FeatAlignt}\n\\end{figure}\n\nFirst, we propose an auto-encoder with a latent space shared by source and target. The features of both source and target need to be encoded in a single latent space while allowing for good input reconstruction. Using an auto-encoder is a natural extension of data-driven unsupervised anomaly detection approaches \\parencite{michau_deep_2017}. We will test this approach with a variational auto-encoder, the $\\beta$-VAE \\parencite{higgins2017beta}. VAEs have shown in the past that, with their probabilistic description of the input and latent spaces, the obtained features are very useful for subsequent tasks \\parencite{ellefsen2019remaining}.\n\nSecond, we introduce the homothety loss. This loss is designed to make sure that inter-point distance relationships are kept in the latent space for both the source and the target. If the features were obtained through a homothetic projection from the input to the feature space, they would minimise the homothety loss.\n\nLast, we use an origin discriminator on the feature space, trained in an adversarial manner. 
The discriminator itself is trained to best classify the origin dataset of the features, while the feature extractor is trained such that the resulting features cannot be classified by the discriminator.\n\nThe remainder of the paper is organised as follows: \nSection 2 provides an overview of the tools used for the proposed approach and motivates their usage in the particular context of unsupervised feature learning and alignment. \nSection 3 presents a real application case study that faces the difficulties discussed above, including rare faults, limited observation time, and limited representative condition monitoring data collected over a short period. The comparisons are performed on a fleet comprising 112 power plants monitored for one year, 12 of which experienced a fault. We limit ourselves to two months of data available for the target unit training, quite a small time interval compared to the yearly fluctuations of a power plant operation.\n\n \\subsection{Notations}\n \n\\makebox[2cm]{${\\cdot}_S $} Variables for the source dataset\\\\\n\\makebox[2cm]{${\\cdot}_T $} Variables for the target dataset\\\\\n\\makebox[2cm]{$X_\\cdot$} Input Variable\\\\\n\\makebox[2cm]{$F_\\cdot$} Feature Variable\\\\\n\\makebox[2cm]{$Y_\\cdot$} One-class classifier output\\\\\n\\makebox[2cm]{$D_\\cdot$} Discriminator output\\\\\n\\makebox[2cm]{$\\mathcal{L}$} Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{rec}$} Reconstruction Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{KL}$} Kullback-Leibler divergence Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{FA}$} Feature Alignment Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{D}$} Discriminator Loss\\\\\n\\makebox[2cm]{$\\mathcal{L}_{H}$} Homothety Loss\\\\\n\\makebox[2cm]{ELM} Extreme Learning Machine\\\\\n\\makebox[2cm]{VAE} Variational Auto-Encoder\\\\\n\\makebox[2cm]{GRL} Gradient Reversal (GR) Layer\\\\\n\\makebox[2cm]{FPR} False Positive Rate (in \\%)\\\\\n\n\n\\section{Methodology for Adversarial Transfer of Unsupervised Detection}\n\\label{sec:ua}\n\n \\subsection{Anomaly Detection and One-Class Classification}\n \n\nThe traditional framework for unsupervised fault detection usually consists of a two-step approach: first, feature extraction, and second, feature monitoring. Features can stem from a physical modelling of the system, from aggregated statistics on the dataset, from varied machine learning tools, or from deep learning on surrogate tasks, such as auto-encoding. The monitoring of the features can be rule-based (e.g.,\\ by defining a threshold on the features), statistical (e.g.,\\ $\\chi^2$ or Student test), or use machine learning tools such as clustering (K-means), nearest neighbours analysis, density-based modelling, subspace analysis (e.g.,\\ PCA), or one-class classification such as the one-class SVM or one-class classifier neural networks (see the work of \\textcite{Pecht2019} for more details).\nTo the best of the authors' knowledge, unsupervised detection has never been treated with end-to-end learning, due to the lack of supervision inherent to the task.\n\nIn continuity with the traditional approaches to unsupervised fault detection, we propose here to split our architecture into a feature extractor and a feature monitoring network. 
To monitor the features, we propose to train a one-class classifier neural network, as developed by \\textcite{leng2015one, yan2016one, michau_deep_2017, Michau2018a}, which has proven to provide more insight into system health than the traditional state-of-the-art one-class SVM, and better detection performance than other indices such as the \\textit{special index} \\parencite{Fan2017}.\n\nIn order to handle the lack of supervision in the training of the one-class classifier, \\textcite{Michau2018a} proposed to train a one-class Extreme Learning Machine (ELM). Such single-layered networks have randomly drawn weights between the input layer and the hidden layer; only the weights between the hidden layer and the output are learned. Mathematical proof has been given that they are universal approximators \\parencite{huang_universal_2006}, and since the problem of finding the output weights becomes a convex optimisation problem in a single variable, there is a unique optimal solution, easily found by state-of-the-art algorithms with convergence guarantees, in little time compared to the traditional training of neural networks with iterative back-propagation.\n\n\\textcite{Michau2018a} demonstrated that a good decision boundary on the output of the one-class ELM is the threshold\n\\begin{equation}\n\\label{eq:thrd}\n\\mbox{Thrd} = \\gamma\\cdot \\mbox{percentile}_{p}\\!\\left(\\vert \\mathbb{1} - Y^{\\mathrm{val}}\\vert\\right),\n\\end{equation}\nwhere $Y^{\\mathrm{val}}$ is the output of the one-class classifier on a validation dataset, healthy but not used in the training. It provides an estimation of the natural fluctuation of the one-class classifier output in healthy operating conditions. The percentile $p$ accounts for the expected proportion of outliers, which is linked to the cleanness of the dataset, and $\\gamma$ represents the sensitivity of the detection. In this paper, we take $\\gamma=1.5$ and $p=99.5$, values identified as relevant in that paper.\nOutliers are discriminated from the main healthy class if they are above this threshold (a concrete sketch of this decision rule is given at the end of this subsection). \n\nThis framework was developed and successfully applied, in combination with an ELM auto-encoder (namely, the Hierarchical ELM or HELM), to the monitoring of a single machine for which a long training period was available \\parencite{michau_deep_2017,Michau2018a}. In the context of short training times, the same architecture was applied to paired units (a source with one year of data available and a target with only two months of data available) in the work of \\textcite{Michau2018b}. While this approach provided better detection rates with lower false alarm rates than using two months of data only, it faces the limitation that, when the units have very different operating conditions, it is difficult to pair them such that the final model provides satisfactory performance.\n\nIn this paper, we propose to change the feature extractor from an ELM auto-encoder, which is not suitable for more advanced alignment since no loss can be back-propagated through it, to either a Variational Auto-Encoder (VAE) or a simple feed-forward network with a new custom-made loss function. We therefore propose new ways to learn features across source and target, but still use the one-class classifier ELM to monitor the health of the target unit.
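\n\nAs a concrete illustration, the decision rule in~\\eqref{eq:thrd} can be sketched in a few lines of Python. This is a minimal sketch; the variable names (\\texttt{y\\_val}, \\texttt{y\\_test}) are illustrative and not part of the original implementation:\n\\begin{verbatim}\nimport numpy as np\n\ndef detection_threshold(y_val, gamma=1.5, p=99.5):\n    # Natural fluctuation of the one-class output on healthy validation data\n    return gamma * np.percentile(np.abs(1.0 - y_val), p)\n\ndef is_anomalous(y_test, threshold):\n    # Flag samples whose one-class output deviates beyond the threshold\n    return np.abs(1.0 - y_test) > threshold\n\\end{verbatim}\n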
\n\n \\subsection{Adversarial Unsupervised Feature Alignment}\n \nThe proposed framework is composed of a feature extractor and of a one-class classifier trained with healthy data only. In order to perform feature alignment, we explore three strategies in the training of the feature extractor: auto-encoding, the homothety loss, and an adversarial discriminator. \nThese alignment strategies are not exclusive, so we also explore their different combinations.\n\n \\subsubsection{Auto-encoding as a Feature Constraint}\n \nAn auto-encoder is a model trained to reconstruct its inputs after some transformation of the data. These transformations could be to a space of lower dimensionality (compressive auto-encoder) or higher dimensionality, linear (e.g.,\\ PCA) or non-linear (e.g.,\\ neural networks). Auto-encoding models are popular feature extractors as they do not require any supervision. They rely on the assumption that, since the learned features can be used to reconstruct the inputs, the features should contain the most meaningful information on the data and will be well suited for subsequent machine learning tasks. It is, in addition, quite easy to enforce additional properties of the features that seem suitable for the task at hand, such as sparsity (e.g.,\\ $\\ell_1$ regularisation), small coefficients (e.g.,\\ $\\ell_2$ regularisation), or robustness to noise (denoising auto-encoder). \n\nAmong the vast family of possible auto-encoders, the variational auto-encoder, proposed by \\textcite{kingma2013auto}, learns a probabilistic representation of the input space using a superposition of Gaussian kernels. The neurons in the latent space are interpreted as the means and variances of different Gaussian kernels, from which features are sampled and decoded. Such networks can be used as traditional auto-encoders but also as generative models: by randomly sampling the Gaussian kernels, new samples can be decoded.\n\nThe training of the variational auto-encoder consists in the minimisation of two losses: first, the reconstruction loss, $\\mathcal{L}_{rec}$, and second, a loss on the distribution discrepancy between the prior and the Gaussian kernels, by means of the Kullback-Leibler divergence, $\\mathcal{L}_{KL}$. \nThe reader interested in the implementation details of the variational auto-encoder can refer to the work of \\textcite{higgins2017beta}, which introduces the concept of the $\\beta$-VAE and proposes to apply a weight $\\beta$ to the Kullback-Leibler divergence loss, to either prioritise a good distribution match over a good reconstruction (particularly important when the VAE is used as a generative model) or the opposite.\n\nVariational auto-encoders have been successfully used in hierarchical architectures in many fields. In PHM, they have been used in semi-supervised learning tasks for remaining useful life prediction \\parencite{yoon2017semi,ellefsen2019remaining} and for anomaly detection \\parencite{kim2018squeezed}.\n
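\nFor illustration, the $\\beta$-VAE training objective $\\mathcal{L}_{rec}+\\beta\\mathcal{L}_{KL}$ can be sketched as follows. This is a minimal sketch assuming a Gaussian encoder that outputs \\texttt{mu} and \\texttt{logvar}; PyTorch is used for illustration only and the names are not from the original implementation:\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef beta_vae_loss(x, x_rec, mu, logvar, beta=1.0):\n    # Reconstruction loss L_rec (mean squared error)\n    l_rec = F.mse_loss(x_rec, x, reduction='mean')\n    # KL divergence between N(mu, sigma^2) and the standard normal prior\n    l_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())\n    return l_rec + beta * l_kl\n\\end{verbatim}\n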
\n\n \\subsubsection{Homothety as a Feature Constraint}\n \nAn alternative to auto-encoding is the use of more traditional feed-forward networks, on which we impose an additional constraint on the feature space. Instead of the combined loss on both the reconstruction from the features and the feature distribution, we propose to introduce here a loss that encourages the conservation of inter-point relationships in the feature space. To do so, we define the homothety loss to keep constant the inter-point distance ratios between the input $X$ and the feature space $F$. The intuition lies in the idea that a good alignment of the two datasets should correspond to both source and target sharing the same lower-dimensional feature space while being scaled in similar ways.\n\nThe proposed homothety loss is defined as:\n\\begin{equation}\n\\mathcal{L}_H = \\sum_{S\\in \\left\\lbrace \\substack{\\mbox{Source}\\\\\\mbox{Target}} \\right\\rbrace} \\frac{1}{\\vert S \\vert} \\sum_{(i,j)\\in S}\\left\\Vert \\left\\Vert X_i - X_j \\right\\Vert_2 - \\eta \\left\\Vert F_i - F_j \\right\\Vert_2 \\right\\Vert_2\n\\end{equation}\nwhere the scale $\\eta$ is the minimiser of the loss itself:\n\\begin{equation}\n\\eta = \\mathop{\\arg\\min}_{\\tilde{\\eta}}\\, \\mathcal{L}_H (\\tilde{\\eta}).\n\\end{equation}\n\n \\subsubsection{Domain Discriminator}\n \nFor both the VAE and the Homothetic Feature Alignment, the alignment can be further strengthened with an origin discriminator trained in an adversarial manner. This training is done by solving a min-max problem, where the discriminator is trained to minimise the discrimination loss on the sample origins, while the feature extractor is trained to maximise this loss, that is, to make the features indistinguishable from the discriminator's perspective.\n\nSuch adversarial training has been greatly simplified since the introduction of the Gradient Reversal Layer trick proposed by \\textcite{Ganin2016}. This simple yet efficient idea consists in connecting the feature extractor to the discriminator through an additional layer which performs the identity operation in the forward pass but negates the gradient in the backward pass. The gradient passed to the feature extractor during the backward pass therefore goes in the opposite direction to what the minimisation problem would require.\n\nFor the discriminator, we propose to experiment with a classic densely connected softmax classifier and with a Wasserstein discriminator. This setup is inspired by the Wasserstein GAN \\parencite{arjovsky2017wasserstein}, where a generative model is trained to minimise the Wasserstein distance between generated samples and true samples, so as to make their distributions indistinguishable. The authors demonstrate that, using the Kantorovich-Rubinstein duality, this problem can also be solved by adversarial training, with a neural network playing the role of the discriminator and aiming at maximising the following loss:\n\n\\begin{equation}\n\\mathcal{L}_{D} = \\mathbb{E} \\left( \\mbox{disc}\\left( F_{\\mbox{Source}}\\right)\\right) - \\delta_w \\mathbb{E} \\left(\\mbox{disc}\\left( F_{\\mbox{Target}}\\right) \\right)\n\\end{equation}\n\nOur implementation of the Wasserstein adversarial training takes into account the latest improvements, including the gradient penalty proposed in \\textcite{gulrajani2017improved} to ensure the requirement of the Kantorovich-Rubinstein duality that the function disc() is 1-Lipschitz, and the asymmetrically relaxed Wasserstein distance proposed by \\textcite{wu2019domain}. This relaxation, active here when $\\delta_w >1.0$, encourages the target feature distribution to be fully contained in the source feature distribution, but does not require full reciprocal overlap. This is important here since we assume a small, non-representative dataset for the target training. It is in the nature of this task that the source feature distribution has a larger support, from which the health monitoring model can learn.\n\n \\subsection{Summary and Overview of the Final Architectures}\n \n \\subsubsection{Tested Architectures}\n \n\nIn summary, we compare the various propositions, alone and combined, as follows:\n\\begin{itemize}\n\\item \\textbf{$\\beta$-VAE}: the traditional $\\beta$-VAE, whose features are used as input to a one-class classifier. \n\\item \\textbf{$\\beta$-VAEDs}: the same $\\beta$-VAE and one-class classifier, with the addition of a softmax discriminator connected to the feature space through a GR-layer.\n\\item \\textbf{$\\beta$-VAEDw}: the same as before but with a Wasserstein discriminator. In this version, the adversarial training aims at minimising the Wasserstein distance between source and target features.\n\\end{itemize}\nSimilarly, for the homothetic alignment, we explore the following combinations:\n\\begin{itemize}\n\\item \\textbf{HFA}: Homothetic Feature Alignment, composed of a feature extractor trained with the homothety loss and a one-class classifier.\n\\item \\textbf{HAFAs}: Homothetic and Adversarial Feature Alignment with a softmax discriminator; the same as before with, in addition, a softmax discriminator connected through a GR-layer.\n\\item \\textbf{HAFAw}: Homothetic and Adversarial Feature Alignment with a Wasserstein discriminator.\n\\item \\textbf{AFAs}: the same as the HAFAs architecture but without the homothety loss.\n\\item \\textbf{AFAw}: the same as the HAFAw architecture but without the homothety loss.\n\\end{itemize}\n\nFigure~\\ref{fig:UAFA} summarises the architectures explored here, and Figure~\\ref{fig:framework} shows how the whole framework is organised.\n\nIn addition, we compare the results with our previous results, in which units were paired together to train an HELM without alignment of any kind \\parencite{Michau2018b}.\n\n\\begin{figure*}\n\\hfil\n\\subfloat[]{\\includegraphics[width=7.5cm]{Figures\/HAFA.png}\\label{sfig:HAFA}}\n\\hfil\n\\subfloat[]{\\includegraphics[width=7.5cm]{Figures\/VAED.png}\\label{sfig:VAED}}\n\\hfil\n\\caption{\\textbf{Adversarial \\& Unsupervised Feature Alignment Architectures}. (a) HAFA's architecture. (b) $\\beta$-VAE based architectures.\nThe feature encoder $N_1$ is trained to minimise a feature alignment loss $\\mathcal{L}_{FA}$ composed of the reversed discriminator loss $-\\alpha\\mathcal{L}_D$ and either (a) the homothety loss $\\mathcal{L}_H$ or (b) the variational auto-encoder loss ($\\mathcal{L}_{rec}+\\beta\\mathcal{L}_{KL}$). The discriminator $N_2$ is trained to minimise the classification loss $\\mathcal{L}_D$ on the origin of the data (source vs. target). For HFA and $\\beta$-VAE, the discriminator is removed; equivalently, this corresponds to setting $\\mathcal{L}_D=0$.}\n\\label{fig:UAFA}\n\\end{figure*}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.7\\columnwidth]{Figures\/Framework.png}\n\\caption{\\textbf{Flow-Chart.} (a) Using source and target training data, both $N_1$ and $1C$ are trained. The training of $N_1$ incidentally requires the training of the discriminator $N_2$ and of the decoder, when used. (b) The output of $1C$ is analysed with a validation set from both source and target, and the threshold~\\eqref{eq:thrd} is computed. 
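\nTo make these two building blocks concrete, a minimal sketch is given below (PyTorch is used for illustration; the closed-form scale assumes the squared version of the homothety loss, a simplification of the formulation above):\n\\begin{verbatim}\nimport torch\n\nclass GradReverse(torch.autograd.Function):\n    # Identity in the forward pass, negated (scaled) gradient backwards\n    @staticmethod\n    def forward(ctx, x, alpha):\n        ctx.alpha = alpha\n        return x.view_as(x)\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        return -ctx.alpha * grad_output, None\n\ndef homothety_term(X, F, eps=1e-12):\n    # Pairwise Euclidean distances in the input and feature spaces\n    dX, dF = torch.pdist(X), torch.pdist(F)\n    # Closed-form scale eta minimising sum((dX - eta * dF)^2)\n    eta = (dX * dF).sum() \/ (dF * dF).sum().clamp_min(eps)\n    return ((dX - eta * dF) ** 2).mean()\n\n# L_H sums this term over the source and the target batches:\n# l_h = homothety_term(X_src, F_src) + homothety_term(X_tgt, F_tgt)\n# Features are passed through GradReverse.apply(feats, alpha)\n# before entering the discriminator.\n\\end{verbatim}\n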
(c) This threshold is used as the decision rule for the test target data.}\n\\label{fig:framework}\n\\end{figure}\n\t\t\\subsubsection{Hyper-parameters}\n\t\nWe tested the architectures in similar settings: a two-layer feature extractor with 10 neurons, a two-layer discriminator with 10 and 5 neurons, \\textit{relu} as the activation function (except for the softmax output layer of the discriminator), a learning rate of $10^{-4}$, the ADAM optimiser \\parencite{kingma2014adam}, 200 epochs, and a batch size of 1000. The Wasserstein discriminator has a gradient penalty weight of 10, as proposed in the seminal paper. The VAE decoder has an architecture symmetric to that of the feature extractor.\n\nThe asymmetric coefficient of the Wasserstein discriminator, $\\delta_w$, is tested with $1.0$ (no asymmetric relaxation) and $4.0$ (as proposed in the seminal paper). The gradient reversal layer is tested with weights $\\alpha$ set to $1.0$ and $0.2$. With $0.2$, the discriminator is trained with a gradient five times larger than that of the feature extractor, increasing the relative training of the discriminator. The $\\beta$ of the $\\beta$-VAE is tested with $10.0$ (more weight on the Kullback-Leibler loss), $1.0$, and $0.1$ (more weight on the reconstruction loss). Results are only reported for $\\alpha=\\beta=1.0$, as all other combinations provided worse performance in our setting.\n \n \n\\section{Case Study}\n\\label{sec:cs}\n\t\\subsection{Introduction to the Case Study}\n\n\nTo demonstrate the suitability and effectiveness of the proposed approaches and to compare the different strategies, a comparison is performed on a fleet comprising 112 power plants, similar to that presented in \\textcite{Michau2018b, Michau2019}. \nIn the available fleet, 100 gas turbines have not experienced identifiable faults during the observation period (approximately one year) and are therefore considered here as healthy, while 12 units have experienced a failure of the stator vane. \n\nA vane in a compressor redirects the gas between the blade rows, leading to an increase in pressure and temperature.\nThe failure of a compressor vane in a gas turbine is usually due to Foreign Object Damage (FOD), caused by a part loosening and travelling downstream, affecting subsequent compressor parts, the combustor, or the turbine itself.\nFatigue and impact from surge can also affect the vane geometry and shape and lead to this failure mode. Parts are stressed to their limits to achieve high operational efficiency, with complex cooling schemes to avoid their melting, especially during high load.\nSuch failures are undesirable due to the associated costs, including repair costs and the operational costs of the unexpected power plant shutdown. \n\nBecause of the various factors that can contribute to this failure mode, including assembly or material errors and specific operation profiles, its occurrence is considered as random. Therefore, the focus is nowadays on early detection and fault isolation, and not on prediction.\n\nSo far, the detection of compressor vane failures has mainly relied on analytics stemming from domain expertise. \nYet, when the algorithms are specifically tuned for high detection rates, they often generate too many false alarms. 
\nFalse alarms are very costly: each raised alarm is manually verified by an expert, which makes their handling a time- and resource-consuming task.\n\n\t\\subsection{The dataset}\n\n\nThe turbines are monitored with 24 parameters, sampled every 5 minutes over 1 year. They stem from 15 real sensors and 9 ISO variables (measurements modified by a physical model to represent hand-crafted features under standard operating conditions: 15$^{\\circ}$C, 1 atmosphere).\nThe available ISO measurements are the power, the heat rate, the efficiency, and indicators on the compressor (efficiency, pressure ratio, discharge pressure, discharge temperature, flow). Other measurements are pressures and temperatures from the different parts of the turbine and of the compressor (inlet and bell mouth), ambient condition measurements, and operating state measurements such as the rotor speed, the turbine output, and the fuel stroke. The data available in this case study are limited to one year, over which the gas turbines have not experienced all relevant operating conditions. We aim at proposing condition monitoring methods that rely on only two months of data from the turbine of interest.\n\nTo test the model and report the results, we apply the proposed methodology to the 12 gas turbines with faults as targets, from which we extract the first two months of data as the training set (around 17\\,000 measurements). The 100 remaining gas turbines are candidate source datasets, considered as healthy. \nFor the 12 target gas turbines, all data points after the first two months and until a month before the expert detected the fault (around 39\\,000 measurements) are considered as healthy and are used to quantify the percentage of false positives (FPR). The last month before the detection is ignored, as the fault could be the consequence of prior deterioration, and a detection in this period could not be reliably compared to any ground truth. Last, the data points after the time of detection by the expert are considered as unhealthy. As many of the 12 datasets have very few points available after that detection time (from 8 to 1000 points), we consider the fault as detected if the threshold is exceeded at least twice in succession.\n\nThe validation dataset is made by extracting 6\\% of the training dataset. The data have been normalised such that the 1st and 99th percentiles of each variable are $-1$ and $1$, respectively, so that the resulting normalisation is robust to the presence of outliers. Rows with any missing values or zeros (which is not a possible value for any of the measurements) have been removed. 
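\nThis robust normalisation can be sketched as follows (a minimal NumPy sketch; that the percentiles are computed on the training period only is an assumption made here for illustration):\n\\begin{verbatim}\nimport numpy as np\n\ndef robust_normalise(X_train, X):\n    # Map the 1st and 99th percentiles of each variable to -1 and 1,\n    # so that extreme outliers cannot stretch the scale of the data\n    p1 = np.percentile(X_train, 1, axis=0)\n    p99 = np.percentile(X_train, 99, axis=0)\n    return 2.0 * (X - p1) \/ (p99 - p1) - 1.0\n\\end{verbatim}\n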
\n\n\t\\subsection{Alignment abilities}\n\n\\begin{table}\n\\centering\n\\setlength\\tabcolsep{3pt}\n\\begin{adjustbox}{max width=\\columnwidth}\n\\begin{tabular}{l|rrrrrrrrr}\n\\toprule\n\\small\nUnit &\\small HELM &\\small $\\beta$-VAE &\\small $\\beta$-VAEDs &\\small $\\beta$-VAEDw &\\small HFA &\\small AFAs &\\small AFAw &\\small HAFAs &\\small HAFAw \\\\\n\\midrule\n1 & 11 & 74 & 65 & 74 & 83 & 83 & 85 & 86 & 79 \\\\\n2 & 0 & 5 & 20 & 13 & 10 & 13 & 24 & 5 & 12 \\\\\n3 & 10 & 28 & 22 & 22 & 21 & 23 & 30 & 32 & 34 \\\\\n4 & 17 & 30 & 21 & 32 & 54 & 55 & 54 & 52 & 49 \\\\\n5 & 94 & 68 & 47 & 67 & 90 & 59 & 63 & 80 & 85 \\\\\n6 & 92 & 51 & 68 & 63 & 85 & 77 & 79 & 92 & 93 \\\\\n7 & 0 & 13 & 29 & 24 & 29 & 45 & 31 & 34 & 26 \\\\\n8 & 95 & 40 & 42 & 43 & 67 & 61 & 63 & 65 & 58 \\\\\n9 & 2 & 19 & 19 & 18 & 26 & 28 & 32 & 22 & 39 \\\\\n10 & 1 & 18 & 15 & 8 & 21 & 28 & 24 & 34 & 29 \\\\\n11 & 2 & 20 & 35 & 47 & 59 & 63 & 51 & 60 & 51 \\\\\n12 & 0 & 3 & 3 & 4 & 2 & 2 & 1 & 1 & 3 \\\\\n\\midrule\n\\small R\\% (5\\%) & 27.3 & 31.1 & 32.5 & 34.9 & 46.0 & 45.2 & 45.2 & 47.4 & 47.0 \\\\\n\\small R\\% (1\\%) & 13.5 & 20.6 & 22.0 & 25.8 & 30.5 & 27.0 & 25.5 & 32.8 & 30.1\\\\\n\\bottomrule\n\\end{tabular}\n\\end{adjustbox}\\caption{Number of successfully aligned pairs (detected fault and less than 5\\% FPR). The last rows (R\\%) give the mean ratio of aligned pairs (in \\%), with the selection threshold on the FPR at 5\\% and 1\\%.}\n\\label{tbl:alipair}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\setlength\\tabcolsep{3pt}\n\\begin{adjustbox}{max width=\\columnwidth}\n\\begin{tabular}{l|rrrrrrrrr}\n\\toprule\n\\small\nUnit &\\small HELM &\\small $\\beta$-VAE &\\small $\\beta$-VAEDs &\\small $\\beta$-VAEDw &\\small HFA &\\small AFAs &\\small AFAw &\\small HAFAs &\\small HAFAw \\\\\n\\midrule\n1 & 1.46 & 0.00 & 0.01 & 0.04 & 0.00 & 0.00 & 0.04 & 0.01 & 0.00 \\\\\n2 & 10.00 & 0.12 & 0.13 & 0.00 & 0.01 & 0.05 & 0.03 & 0.52 & 0.02 \\\\\n3 & 0.35 & 0.01 & 0.18 & 0.31 & 0.10 & 0.01 & 0.00 & 0.04 & 0.49 \\\\\n4 & 0.81 & 0.08 & 0.00 & 0.00 & 0.01 & 0.02 & 0.01 & 0.06 & 0.01 \\\\\n5 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n6 & 0.35 & 0.01 & 0.03 & 0.00 & 0.02 & 0.02 & 0.01 & 0.02 & 0.01 \\\\\n7 & 6.45 & 1.85 & 0.09 & 0.27 & 0.15 & 0.02 & 0.17 & 0.62 & 0.23 \\\\\n8 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\\\\n9 & 3.20 & 0.14 & 0.09 & 0.06 & 0.05 & 0.01 & 0.03 & 0.09 & 0.00 \\\\\n10 & 4.91 & 0.20 & 0.07 & 0.53 & 0.25 & 0.07 & 0.29 & 0.21 & 0.72 \\\\\n11 & 4.71 & 0.15 & 0.01 & 0.10 & 0.09 & 0.08 & 0.00 & 0.01 & 0.15 \\\\\n12 & 7.27 & 1.61 & 1.41 & 1.76 & 2.94 & 1.01 & 3.37 & 2.64 & 2.65 \\\\\n\\midrule\nMean & 3.29 & 0.35 & 0.17 & 0.26 & 0.30 & 0.11 & 0.33 & 0.35 & 0.36 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\caption{Lowest FPR (in \\%) achieved for each unit by models that detected the fault.}\n\\label{tbl:bestmodel}\n\\end{table}\n\nThe results presented in this section aim at comparing the different combinations and at highlighting the benefits of each component, alone or in combination with others. To provide insights on how well each model is able to align features from the different units of the fleet, we paired each of the 12 units, considered as targets, with every one of the 100 healthy units, considered as sources. \n\nThe fleet of tested units has a strong variability, and most pairs will not provide an interesting model. 
Therefore, a good indicator to compare the methodologies in their capability to benefit from a pair of datasets is to tally the number of pairs leading to a relatively successful model, that is, a model that detects the fault in the target with a low FPR. We thus report the results by setting an arbitrary threshold at 5\\% FPR. We also report the aggregated results with a threshold at 1\\%, to demonstrate that the conclusions of the comparative study remain valid independently of this selection process.\nOut of the resulting 34 combinations (8 architectures with different hyper-parameter settings) trained and tested on the 1200 pairs, we report, for each architecture, the results for the hyper-parameters maximising the overall number of successful models.\n\nWe report in Table~\\ref{tbl:alipair}, for each unit, how many aligned pairs were achieved with each combination. \nWe also report in Table~\\ref{tbl:bestmodel} the lowest FPR achieved for each of the 12 units, for models which could detect the fault.\nWe ran the experiments on Euler V\\footnote{\\url{https:\/\/scicomp.ethz.ch\/wiki\/Euler}}, with two cores of an Intel Xeon E5-2697v2 processor. The heaviest models (the HAFA family) took around two minutes to be trained and tested. In total, 40\\,800 models were trained, totalling more than 1.5 months of computation time.\n\n\\section{Discussion}\n\\label{sec:disc}\n\n\\begin{table}\n\\centering\n\\setlength\\tabcolsep{3pt}\n\\begin{adjustbox}{max width=\\columnwidth}\n\\begin{tabular}{l|rrrrrrrrrr}\n\\toprule\n\\small Unit & \\tiny{2mHELM} &\\small HELM &\\small $\\beta$-VAE &\\small $\\beta$-VAEDs &\\small $\\beta$-VAEDw &\\small HFA &\\small AFAs &\\small AFAw &\\small HAFAs &\\small HAFAw \\\\\n\\midrule\n1 & 9.05 & 1.46 & 0.98 & 0.55 & 0.74 & 3.61 & 0.32 & 4.77 & 0.52 & 0.82 \\\\\n2 & & 15.12 & 8.55 & 7.88 & 0.34 & & & & & 5.48 \\\\\n3 & 18.90 & 13.22 & 4.89 & 0.70 & 4.63 & & & & 4.54 & 3.05 \\\\\n4 & 42.61 & 26.34 & & & & 1.51 & 2.18 & 0.70 & 3.81 & \\\\\n5 & 4.20 & 1.50 & 0.08 & 3.48 & 0.08 & & 0.01 & 0.03 & 1.02 & 0.26 \\\\\n6 & 3.14 & 1.16 & 1.16 & 1.25 & 0.66 & 2.13 & 1.14 & 0.27 & 1.71 & 0.99 \\\\\n7 & 56.55 & 20.33 & 13.30 & 1.26 & 14.17 & 5.43 & & 6.22 & 0.62 & 4.77 \\\\\n8 & 0.19 & 0.08 & & 1.28 & & 0.03 & & 0.19 & 0.05 & 0.16 \\\\\n9 & 20.13 & & 13.11 & 6.65 & 0.06 & 4.56 & & & 0.63 & 0.20 \\\\\n10 & 81.76 & 39.79 & 2.26 & & 8.06 & & 0.07 & 30.62 & & 2.71 \\\\\n11 & 36.18 & 25.53 & 14.72 & 0.15 & 3.09 & 2.97 & 0.26 & 0.90 & 4.37 & 0.17 \\\\\n12 & 68.60 & 37.05 & & & & & 19.60 & 8.10 & & \\\\\n\\midrule\n\\#AP & 3 & 4 & 5 & 7 & 7 & 6 & 6 & 6 & \\textbf{9} & \\textbf{9} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\caption{FPR for the pairs minimising the Maximum Mean Discrepancy. Empty cells correspond to models that missed the fault. The last row (\\#AP) contains the number of models with less than 5\\% FPR.}\n\\label{tbl:mmdsel}\n\\end{table}\n\nThe results presented in Table~\\ref{tbl:alipair} demonstrate that the proposed alignment methodologies all have a positive impact on the problem of transferring operating conditions between units for the unsupervised health monitoring of gas turbines. Compared to the naive combination of the source and target datasets in a single training set with HELM, all proposed alignment methods improve the number of successfully aligned pairs (target fault detected and less than 5\\% FPR). 
Each alignment strategy improves the results and leads to very efficient models, as illustrated in Table~\\ref{tbl:bestmodel} (most aligned models have far less than 1\\% FPR). The homothety loss is clearly the factor contributing most to the alignment, as all architectures making use of it successfully align more pairs. The use of an adversarial discriminator also contributes quite significantly to the alignment process: it improves the results of the $\\beta$-VAE significantly and provides good alignment even when used alone (\\textit{cf.}\\ AFAs and AFAw). When used with the $\\beta$-VAE, the Wasserstein discriminator provides the best results with an asymmetric coefficient $\\delta_w=4.0$. This demonstrates the importance of relaxing the distribution alignment. In the homothetic alignment conditions, the selected asymmetric coefficient is $\\delta_w=1.0$, showing that the homothety loss already encourages a distribution overlap and reduces the need for asymmetric alignment. In that case, the classic softmax discriminator actually provides better results.\n\nThe results presented above demonstrate, first, that the alignment procedures lead to an increased probability of training a useful model given a pair of units and, second, that the models perform better with alignment (lower FPR and higher detection rates). An interesting question is the \\textit{a priori} selection of the pair of units which, once aligned, can be used to train a successful health monitoring model. While this question is left for future research, a possible solution is to compare units based on a few relevant characteristics. Here, we propose a possible solution consisting in the selection of the pair for which the Maximum Mean Discrepancy (MMD) on two representative variables (Output Power and Inlet Guide Vanes Angle) is minimal. The results are reported in Table~\\ref{tbl:mmdsel}; empty cells correspond to cases where the fault was not detected by the model. For comparison purposes, these models are also compared to a simple HELM trained with only the two months of data from the target unit (2mHELM). Based on this imperfect selection process, the proposed alignment procedures all improve the results, both in the number of useful models and in reducing the FPR. Previous results demonstrated that the units are very different from each other; this is highlighted here by the relatively low number of successfully aligned pairs for units 2, 3, 7, 9, 10, 11 and 12. These units also have a very high variability in their operating conditions, as shown by the very high FPR of the models trained only on their first two months of data. This high variability makes it extremely challenging to identify the other units likely to bring beneficial information to the model. \n\nOn the opposite, units 5, 6 and 8 look very stable in their operation, as the models trained only on the first two months already provide satisfactory results (less than 5\\% FPR). Already with HELM, they could be matched with almost all other units. This can explain why aligning those units with other sources is not necessary and might actually confuse the subsequent one-class classifier (for these units, the number of successfully aligned pairs decreases with some of the alignment procedures). Yet, for the best performing approaches (HAFAs and HAFAw), the results are improved even on these units.\n\nFinally, the results presented in this paper focused on providing a fair comparison of different alignment strategies. 
The different combinations trained on the 1200 pairs led to the training of over 40\\,000 models. Once a strategy is chosen, plenty of room is left for hyper-parameter tuning, which can only improve the results presented here.\n\n\\section{Conclusions}\n\nIn this paper, we tackled the problem of feature alignment for unsupervised anomaly detection. In the context where a fleet of units is available, we proposed to align a target unit, for which little condition monitoring data are available, with a source unit, for which a longer observation period has been recorded, both in healthy conditions. Contrary to the traditional case in domain alignment, the feature learning cannot rely on the back-propagation of the loss of a subsequent classifier. Instead, we presented three alignment strategies: auto-encoding in a shared latent space, the newly proposed homothety loss, and the adversarial training of a discriminator (with a traditional softmax classifier and with a Wasserstein discriminator). All strategies improve the results of the subsequent unsupervised anomaly detection model. Among these strategies, we demonstrated that the newly proposed homothety loss has the strongest impact on the results and can be further improved by the use of a discriminator.\n\nIn the future, a deeper analysis of the unit characteristics might help to identify the units for which additional data are required, and also to identify which other units to select among the fleet. Such an analysis could rely on expert knowledge or on an online adaptation of the sourcing data depending on the current operation. Such an approach would face the challenge of distinguishing new from degraded operating conditions. Another interesting avenue is the selection of the sourcing data among the whole fleet rather than attempting to pair specific units. In that case, the right data selection remains an open research question.\n\n\\section*{Acknowledgement}\nThis research was funded by the Swiss National Science Foundation (SNSF)\nGrant no. PP00P2 176878.\n\\printbibliography\n \n\\end{document}\n\n\n\\section{Introduction}\n\\noindent In this paper, we consider the following finite-sum composite convex optimization problem:\n\\vspace{-2mm}\n\\begin{equation}\\label{equ1}\n\\min_{x\\in\\mathbb{R}^{d}} \\phi(x):=f(x)+g(x)=\\frac{1}{n}\\!\\sum\\nolimits_{i=1}^{n}\\!f_{i}(x)+g(x),\n\\vspace{-2mm}\n\\end{equation}\nwhere $f(x)\\!:=\\!\\frac{1}{n}\\!\\sum^{n}_{i=1}f_{i}(x)$ is a convex function that is a finite average of $n$ convex functions $f_{i}(x)\\!:\\!\\mathbb{R}^{d}\\!\\rightarrow\\!\\mathbb{R}$, and $g(x)$ is a ``simple\" possibly non-smooth convex function (referred to as a regularizer, e.g.\\ $\\lambda_{1}\\|x\\|^{2}$, the $\\ell_{1}$-norm regularizer $\\lambda_{2}\\|x\\|_{1}$, and the elastic net regularizer $\\lambda_{1}\\|x\\|^{2}\\!+\\!\\lambda_{2}\\|x\\|_{1}$, where $\\lambda_{1},\\lambda_{2}\\!\\geq\\!0$ are the regularization parameters). Such a composite problem~\\eqref{equ1} naturally arises in many applications of machine learning and data mining, such as regularized empirical risk minimization (ERM) and eigenvector computation~\\cite{shamir:pca,garber:svd}. As summarized in~\\cite{zhu:Katyusha,zhu:box}, there are mainly four interesting categories of Problem~\\eqref{equ1} as follows:\n\n\\vspace{-2mm}\n\\begin{itemize}\n\\item Case 1: Each $f_{i}(x)$ is $L$-smooth and $\\phi(x)$ is $\\mu$-strongly convex ($\\mu$-SC). 
Examples: ridge regression and elastic net regularized logistic regression.\n\\vspace{-2mm}\n\\item Case 2: Each $f_{i}(x)$ is $L$-smooth and $\\phi(x)$ is non-strongly convex (NSC). Examples: Lasso and $\\ell_{1}$-norm regularized logistic regression.\n\\vspace{-2mm}\n\\item Case 3: Each $f_{i}(x)$ is non-smooth (but Lipschitz continuous) and $\\phi(x)$ is $\\mu$-SC. Examples: linear support vector machine (SVM).\n\\vspace{-2mm}\n\\item Case 4: Each $f_{i}(x)$ is non-smooth (but Lipschitz continuous) and $\\phi(x)$ is NSC. Examples: $\\ell_{1}$-norm regularized SVM.\n\\vspace{-2mm}\n\\end{itemize}\n\nTo solve Problem~\\eqref{equ1} with a large sum of $n$ component functions, computing the full (sub)gradient of $f(x)$ (e.g.\\ $\\nabla\\!f(x)\\!=\\!\\frac{1}{n}\\!\\sum^{n}_{i=1}\\!\\nabla\\!f_{i}(x)$ for the smooth case) in first-order methods is expensive, and hence stochastic (sub)gradient descent (SGD), also known as incremental gradient descent, has been widely used in many large-scale problems~\\cite{sutskever:sgd,zhang:sgd}. SGD approximates the gradient from just one example or a mini-batch, and thus it enjoys a low per-iteration computational complexity. Moreover, SGD is extremely simple and highly scalable, making it particularly suitable for large-scale machine learning, e.g., deep learning~\\cite{sutskever:sgd}. However, the variance of the stochastic gradient estimator may be large in practice~\\cite{johnson:svrg,zhao:sampling}, which leads to slow convergence and poor performance. Even for Case 1, standard SGD can only achieve a sub-linear convergence rate~\\cite{rakhlin:sgd,shamir:sgd}.\n\n\nRecently, the convergence speed of standard SGD has been dramatically improved with various variance reduced methods, such as SAG~\\cite{roux:sag}, SDCA~\\cite{shalev-Shwartz:sdca}, SVRG~\\cite{johnson:svrg}, SAGA~\\cite{defazio:saga}, and their proximal variants, such as~\\cite{schmidt:sag}, \\cite{shalev-Shwartz:acc-sdca}, \\cite{xiao:prox-svrg} and \\cite{koneeny:mini}. Indeed, many of those stochastic methods use past full gradients to progressively reduce the variance of stochastic gradient estimators, which leads to a revolution in the area of first-order methods. Thus, they are also called the semi-stochastic gradient descent method~\\cite{koneeny:mini} or hybrid gradient descent method~\\cite{zhang:svrg}. In particular, these recent methods converge linearly for Case 1, and their overall complexity (total number of component gradient evaluations to find an $\\epsilon$-accurate solution) is $\\mathcal{O}\\!\\left((n\\!+\\!{L}\/{\\mu})\\log({1}\/{\\epsilon})\\right)$, where $L$ is the Lipschitz constant of the gradients of $f_{i}(\\cdot)$, and $\\mu$ is the strong convexity constant of $\\phi(\\cdot)$. The complexity bound shows that those stochastic methods always converge faster than accelerated deterministic methods (e.g.\\ FISTA~\\cite{beck:fista})~\\cite{koneeny:mini}. Moreover, \\cite{zhu:vrnc} and \\cite{reddi:svrnc} proved that SVRG with minor modifications can converge asymptotically to a stationary point in the non-convex case. However, there is still a gap between the overall complexity and the theoretical bound provided in~\\cite{woodworth:bound}. For Case 2, they converge much slower than accelerated deterministic algorithms, i.e., $\\mathcal{O}(1\/T)$ vs.\\ $\\mathcal{O}(1\/T^2)$.\n\nMore recently, some accelerated stochastic methods were proposed. 
Among them, the successful techniques mainly include Nesterov's acceleration technique~\\cite{lan:rpdg,lin:vrsg,nitanda:svrg}, the choice of a growing epoch length~\\cite{mahdavi:sgd,zhu:univr}, and the momentum acceleration trick~\\cite{zhu:Katyusha,hien:asmd}. \\cite{lin:vrsg} presents the accelerating Catalyst framework and achieves a complexity of $\\mathcal{O}((n\\!+\\!\\!\\sqrt{n{L}\/{\\mu}})\\log({L}\/{\\mu})\\log({1}\/{\\epsilon}))$ for Case 1. However, adding a dummy regularizer hurts the performance of the algorithm both in theory and in practice~\\cite{zhu:univr}. The methods~\\cite{zhu:Katyusha,hien:asmd} attain the best-known complexity of $\\mathcal{O}(n\\sqrt{1\/\\epsilon}+\\!\\sqrt{nL\/\\epsilon})$ for Case 2. Unfortunately, they require at least two auxiliary variables and two momentum parameters, which lead to complicated algorithm design and high per-iteration complexity.\n\n\\textbf{Contributions:} To address the aforementioned weaknesses of existing methods, we propose a fast stochastic variance reduced gradient (FSVRG) method, in which we design a novel update rule with Nesterov's momentum~\\cite{nesterov:fast}. The key update rule has only one auxiliary variable and one momentum weight. Thus, FSVRG is much simpler and more efficient than~\\cite{zhu:Katyusha,hien:asmd}. FSVRG is a direct accelerated method without using any dummy regularizer, and also works for non-smooth and proximal settings. Unlike most variance reduced methods such as SVRG, which only have a convergence guarantee for Case 1, FSVRG has convergence guarantees for both Cases 1 and 2. In particular, FSVRG uses a flexible growing epoch size strategy as in~\\cite{mahdavi:sgd} to speed up its convergence. Impressively, FSVRG converges much faster than the state-of-the-art stochastic methods. We summarize our main contributions as follows.\n\n\\vspace{-2mm}\n\\begin{itemize}\n\\item We design a new momentum-accelerated update rule, and present two schemes for selecting the momentum weight, for Cases 1 and 2, respectively.\n\\vspace{-2mm}\n\\item We prove that FSVRG attains linear convergence for Case 1, and achieves the convergence rate of $\\mathcal{O}(1\/T^2)$ and a complexity of $\\mathcal{O}(n\\sqrt{1\/\\epsilon}\\!+\\!\\sqrt{nL\/\\epsilon})$ for Case 2, which is the same as the best known result in~\\cite{zhu:Katyusha}.\n\\vspace{-2mm}\n\\item Finally, we also extend FSVRG to mini-batch settings and non-smooth settings (i.e., Cases 3 and 4), and provide an empirical study on the performance of FSVRG for solving various machine learning problems.\n\\end{itemize}\n\n\n\\section{Preliminaries}\nThroughout this paper, the norm $\\|\\!\\cdot\\!\\|$ is the standard Euclidean norm, and $\\|\\!\\cdot\\!\\|_{1}$ is the $\\ell_{1}$-norm, i.e., $\\|x\\|_{1}\\!=\\!\\sum_{i}\\!|x_{i}|$. We denote by $\\nabla\\!f(x)$ the full gradient of $f(x)$ if it is differentiable, or by $\\partial\\!f(x)$ a sub-gradient of $f(x)$ if $f(x)$ is only Lipschitz continuous. We mostly focus on the case of Problem~\\eqref{equ1} when each $f_{i}(x)$ is $L$-smooth\\footnote{In fact, we can extend all our theoretical results below for this case (i.e., when the gradients of all component functions have the same Lipschitz constant $L$) to the more general case, when some $f_{i}(x)$ have different degrees of smoothness.}. 
For non-smooth component functions, we can use the proximal operator oracle~\\cite{zhu:box} or Nesterov's smoothing~\\cite{nesterov:smooth} and homotopy smoothing~\\cite{xu:hs} techniques to smooth them, and then obtain smoothed approximations of all functions $f_{i}(\\cdot)$.\n\nWhen the regularizer $g(\\cdot)$ is non-smooth (e.g., $\\!g(\\cdot)\\!=\\!\\lambda\\|\\cdot\\|_{1}\\!$), the update rule of general SGD is formulated as follows:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ2}\nx_{k}=\\mathop{\\arg\\min}_{y\\in\\mathbb{R}^{d}}\\, g(y)\\!+\\!y^{T}\\nabla\\! f_{i_{k}}\\!(x_{k-\\!1})\\!+\\!({1}\/{2\\eta_{k}})\\!\\cdot\\!\\|y\\!-\\!x_{k-\\!1}\\|^2,\n\\vspace{-2mm}\n\\end{equation}\nwhere $\\eta_{k}\\!\\propto\\!1\/k$ is the step size (or learning rate), and $i_{k}$ is chosen uniformly at random from $\\{1,\\ldots,n\\}$. When $g(x)\\!\\equiv\\!0$, the update rule in~\\eqref{equ2} becomes $x_{k}\\!=\\!x_{k-\\!1}\\!-\\!\\eta_{k}\\nabla\\!f_{i_{k}}\\!(x_{k-\\!1})$. If each $f_{i}(\\cdot)$ is non-smooth (e.g., the hinge loss), we need to replace $\\nabla\\! f_{i_{k}}\\!(x_{k-\\!1})$ in~\\eqref{equ2} with $\\partial\\!f_{i_{k}}\\!(x_{k-\\!1})$.\n\nAs representative methods of stochastic variance reduced optimization, SVRG~\\cite{johnson:svrg} and its proximal variant, Prox-SVRG~\\cite{xiao:prox-svrg}, are particularly attractive because of their low storage requirement compared with~\\cite{roux:sag,shalev-Shwartz:sdca,defazio:saga,shalev-Shwartz:acc-sdca}, which need to store all the gradients of the $n$ component functions $f_{i}(\\cdot)$ (or dual variables), so that $O(nd)$ storage is required in general problems. At the beginning of each epoch of SVRG, the full gradient $\\nabla\\! f(\\widetilde{x})$ is computed at the snapshot point $\\widetilde{x}$. With a constant step size $\\eta$, the update rules for the special case of Problem~\\eqref{equ1} (i.e., $g(x)\\!\\equiv\\!0$) are given by\n\\vspace{-1mm}\n\\begin{equation}\\label{equ3}\n\\begin{split}\n\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k-1})&=\\nabla\\! f_{i_{k}}\\!(x_{k-1})-\\nabla\\! f_{i_{k}}\\!(\\widetilde{x})+\\nabla\\! f(\\widetilde{x}),\\\\\nx_{k}&=x_{k-1}-\\eta\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k-1}).\n\\end{split}\n\\vspace{-2mm}\n\\end{equation}\n\\cite{zhu:univr} proposed an accelerated SVRG method, SVRG++, with doubling-epoch techniques. Moreover, Katyusha~\\cite{zhu:Katyusha} is a direct accelerated stochastic variance reduction method, and its main update rules are formulated as follows:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ4}\n\\begin{split}\n&x_{k}=\\theta_{1}y_{k-\\!1}+\\theta_{2}\\widetilde{x}+(1-\\theta_{1}-\\theta_{2})z_{k-\\!1},\\\\\n&y_{k}=\\mathop{\\arg\\min}_{y\\in\\mathbb{R}^{d}}\\, g(y)\\!+\\!y^{T}\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k})\\!+\\!({1}\/{2\\eta})\\!\\cdot\\!\\|y\\!-\\!y_{k-\\!1}\\|^2,\\\\\n\\vspace{-2mm}\n&z_{k}=\\mathop{\\arg\\min}_{z\\in\\mathbb{R}^{d}}\\, g(z)\\!+\\!z^{T}\\widetilde{\\nabla}\\! f_{i_{k}}\\!(x_{k})\\!+\\!({3L}\/{2})\\!\\cdot\\!\\|z\\!-\\!x_{k}\\|^2,\n\\end{split}\n\\vspace{-2mm}\n\\end{equation}\nwhere $\\theta_{1},\\theta_{2}\\!\\in\\![0,1]$ are two parameters, and $\\theta_{2}$ is fixed to $0.5$ in~\\cite{zhu:Katyusha} to eliminate the need for parameter tuning.\n
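\nTo make the variance-reduction step concrete, the SVRG estimator and update in~\\eqref{equ3} can be sketched as follows. This is a minimal NumPy sketch for the unregularized case $g(x)\\equiv 0$; \\texttt{grad\\_fi} is an illustrative per-component gradient oracle and is not part of the original algorithms:\n\\begin{verbatim}\nimport numpy as np\n\ndef svrg_epoch(x_snap, grad_fi, n, m, eta, rng):\n    # Full gradient at the snapshot point, computed once per epoch\n    mu = np.mean([grad_fi(i, x_snap) for i in range(n)], axis=0)\n    x = x_snap.copy()\n    for _ in range(m):\n        i = rng.integers(n)\n        # Variance-reduced stochastic gradient estimator\n        v = grad_fi(i, x) - grad_fi(i, x_snap) + mu\n        x = x - eta * v\n    return x\n\\end{verbatim}\nHere, \\texttt{rng} can be created with \\texttt{np.random.default\\_rng()}.\n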
\n\n\\section{Fast SVRG with Momentum Acceleration}\nIn this paper, we propose a fast stochastic variance reduction gradient (FSVRG) method with momentum acceleration for Cases 1 and 2 (e.g., logistic regression) and Cases 3 and 4 (e.g., SVM). The acceleration techniques of classical Nesterov's momentum and of the Katyusha momentum in~\\cite{zhu:Katyusha} are incorporated explicitly into the well-known SVRG method~\\cite{johnson:svrg}. Moreover, FSVRG also uses a growing epoch size strategy as in~\\cite{mahdavi:sgd} to speed up its convergence.\n\n\n\\subsection{Smooth Component Functions}\nIn this part, we consider the case of Problem (\\ref{equ1}) when each $f_{i}(\\cdot)$ is smooth, and $\\phi(\\cdot)$ is SC or NSC (i.e., Case 1 or 2). Similar to existing stochastic variance reduced methods such as SVRG~\\cite{johnson:svrg} and Prox-SVRG~\\cite{xiao:prox-svrg}, we design a simple fast stochastic variance reduction algorithm with momentum acceleration for solving smooth objective functions, as outlined in Algorithm~\\ref{alg1}. It is clear that Algorithm~\\ref{alg1} is divided into $S$ epochs (as are most variance reduced methods, e.g., SVRG and Katyusha), and each epoch consists of $m_{s}$ stochastic updates, where $m_{s}$ is set to $m_{s}\\!=\\!\\rho^{s-\\!1}\\!\\cdot m_{1}$ as in~\\cite{mahdavi:sgd}, where $m_{1}$ is a given initial value, and $\\rho\\!>\\!1$ is a constant. Within each epoch, a full gradient $\\nabla\\! f(\\widetilde{x}^{s})$ is calculated at the snapshot point $\\widetilde{x}^{s}$. Note that we choose $\\widetilde{x}^{s}$ to be the average of the past $m_{s}$ stochastic iterates rather than the last iterate, because this has been reported to work better in practice~\\cite{xiao:prox-svrg,zhu:univr,zhu:Katyusha}. Although our convergence guarantee for the SC case depends on the initialization $x^{s}_{0}\\!=\\!y^{s}_{0}\\!=\\!\\widetilde{x}^{s-\\!1}$, the choices of $x^{s+\\!1}_{0}\\!=\\!x^{s}_{m_{s}}$ and $y^{s+\\!1}_{0}\\!=\\!y^{s}_{m_{s}}$ also work well in practice, especially for the case when the regularization parameter is relatively small (e.g., $10^{-7}$), as suggested in~\\cite{shang:vrsgd}.\n\n\\begin{algorithm}[t]\n\\caption{FSVRG for smooth component functions}\n\\label{alg1}\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Initialize:}}\n\\renewcommand{\\algorithmicoutput}{\\textbf{Output:}}\n\\begin{algorithmic}[1]\n\\REQUIRE the number of epochs $S$ and the step size $\\eta$.\\\\\n\\ENSURE $\\widetilde{x}^{0}$\\!, $m_{1}$, $\\theta_{1}$, and $\\rho>1$.\\\\\n\\FOR{$s=1,2,\\ldots,S$}\n\\STATE {$\\widetilde{\\mu}=\\frac{1}{n}\\!\\sum^{n}_{i=1}\\!\\nabla\\!f_{i}(\\widetilde{x}^{s-\\!1})$, $x^{s}_{0}=y^{s}_{0}=\\widetilde{x}^{s-\\!1}$;}\n\\FOR{$k=1,2,\\ldots,m_{s}$}\n\\STATE {Pick $i^{s}_{k}$ uniformly at random from $\\{1,\\ldots,n\\}$;}\n\\STATE {$\\widetilde{\\nabla} f_{i^{s}_{k}}(x^{s}_{k-\\!1})=\\nabla f_{i^{s}_{k}}(x^{s}_{k-\\!1})-\\nabla f_{i^{s}_{k}}(\\widetilde{x}^{s-\\!1})+\\widetilde{\\mu}$;}\n\\STATE {$y^{s}_{k}=y^{s}_{k-\\!1}-\\eta\\;\\![\\widetilde{\\nabla}f_{i^{s}_{k}}(x^{s}_{k-\\!1})+\\nabla g(x^{s}_{k-\\!1})]$;}\n\\STATE {$x^{s}_{k}=\\widetilde{x}^{s-1}+\\theta_{s}(y^{s}_{k}-\\widetilde{x}^{s-1})$;}\n\\ENDFOR\n\\STATE {$\\widetilde{x}^{s}=\\frac{1}{m_{s}}\\!\\sum^{m_{s}}_{k=1}\\!x^{s}_{k}$, $\\,m_{s+1}=\\lceil\\rho^{s}\\!\\cdot m_{1}\\rceil$;}\n\\ENDFOR\n\\OUTPUT {$\\widetilde{x}^{S}$}\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsubsection{Momentum Acceleration}\nWhen the regularizer $g(\\cdot)$ is smooth, e.g., the $\\ell_{2}$-norm regularizer, the update rule of the auxiliary variable $y$ 
is\n\\begin{equation}\\label{equ5}\ny^{s}_{k}=y^{s}_{k-\\!1}-\\eta[\\widetilde{\\nabla}f_{i^{s}_{k}}(x^{s}_{k-\\!1})+\\nabla g(x^{s}_{k-\\!1})].\n\\end{equation}\nWhen $g(\\cdot)$ is non-smooth, e.g., the $\\ell_{1}$-norm regularizer, the update rule of $y$ is given as follows:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ6}\ny^{s}_{k}=\\textup{prox}_{\\,\\eta,g}\\!\\left(y^{s}_{k-\\!1}-\\eta\\widetilde{\\nabla}\\!f_{i^{s}_{k}}(x^{s}_{k-\\!1})\\right)\\!,\n\\vspace{-1mm}\n\\end{equation}\nwhere the proximal operator $\\textup{prox}_{\\,\\eta,g}(\\cdot)$ is defined as\n\\vspace{-1mm}\n\\begin{equation}\\label{equ7}\n\\textup{prox}_{\\,\\eta,g}(y):=\\mathop{\\arg\\min}_{x}({1}\/{2\\eta})\\!\\cdot\\!\\|x-y\\|^{2}+g(x).\n\\vspace{-1mm}\n\\end{equation}\nThat is, we only need to replace the update rule (\\ref{equ5}) in Algorithm~\\ref{alg1} with (\\ref{equ6}) for the case of non-smooth regularizers.\n\nInspired by the momentum acceleration trick for accelerating first-order optimization methods~\\cite{nesterov:fast,nitanda:svrg,zhu:Katyusha}, we design the following update rule for $x$:\n\\vspace{-1mm}\n\\begin{equation}\\label{equ8}\nx^{s}_{k}=\\widetilde{x}^{s-1}+\\theta_{s}(y^{s}_{k}-\\widetilde{x}^{s-1}),\n\\end{equation}\nwhere $\\theta_{s}\\!\\in\\![0,1]$ is the weight for the key momentum term. The first term on the right-hand side of (\\ref{equ8}) is the snapshot point of the last epoch (also called the Katyusha momentum in~\\cite{zhu:Katyusha}), and the second term plays a key role similar to Nesterov's momentum in deterministic optimization.\n\nWhen $\\theta_{s}\\!\\equiv\\!1$ and $\\rho\\!=\\!2$, Algorithm~\\ref{alg1} degenerates to the accelerated SVRG method SVRG++~\\cite{zhu:univr}. In other words, SVRG++ can be viewed as a special case of our FSVRG method. As shown above, FSVRG only has one additional variable $y$, while existing accelerated stochastic variance reduction methods, e.g., Katyusha~\\cite{zhu:Katyusha}, require two additional variables $y$ and $z$, as shown in (\\ref{equ4}). In addition, FSVRG only has one momentum weight $\\theta_{s}$, compared with the two weights $\\theta_{1}$ and $\\theta_{2}$ in Katyusha~\\cite{zhu:Katyusha}. Therefore, FSVRG is much simpler than existing accelerated methods~\\cite{zhu:Katyusha,hien:asmd}.\n\n\n\\subsubsection{Momentum Weight}\nFor the case of SC objectives, we give a selection scheme for the momentum weight $\\theta_{s}$. As shown in Theorem~\\ref{theo1} below, it is desirable to have a small convergence factor $\\alpha$, implying fast convergence. The following proposition obtains the optimal $\\theta_{\\star}$, which yields the smallest $\\alpha$ value.\n\n\\begin{proposition}\nGiven the appropriate learning rate $\\eta$, the optimal weight $\\theta_{\\star}$ is given by\n\\vspace{-2mm}\n\\begin{equation}\\label{equ10}\n\\theta_{\\star}=\\mu\\eta m_{s}\/2.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nUsing Theorem~\\ref{theo1} below, we have\n\\begin{equation*}\n\\alpha(\\theta)=1-\\theta+{\\theta^{2}}\/({\\mu\\eta m_{s}}).\n\\end{equation*}\nSetting the derivative $\\alpha'(\\theta)=-1+2\\theta\/(\\mu\\eta m_{s})$ to zero, we obtain $\\theta_{\\star}\\!=\\!\\mu\\eta m_{s}\/2$ for a given $\\eta$.\n\\end{proof}\n\\vspace{-2mm}\n\nIn fact, we can fix $\\theta_{s}$ to a constant for the case of SC objectives, e.g., $\\theta_{s}\\!\\equiv\\!0.9$ as in accelerated SGD~\\cite{ruder:sgd}, which works well in practice. Indeed, larger values of $\\theta_{s}$ can result in better performance for the case when the regularization parameter is relatively large (e.g., $10^{-4}$).
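\n\nA minimal sketch of one epoch of Algorithm~\\ref{alg1}, for a smooth regularizer, is given below (NumPy; \\texttt{grad\\_fi} and \\texttt{grad\\_g} are illustrative gradient oracles for $f_{i}$ and $g$, not part of the original algorithm):\n\\begin{verbatim}\nimport numpy as np\n\ndef fsvrg_epoch(x_snap, grad_fi, grad_g, n, m_s, eta, theta, rng):\n    # Full gradient at the snapshot point (line 2 of Algorithm 1)\n    mu = np.mean([grad_fi(i, x_snap) for i in range(n)], axis=0)\n    x, y = x_snap.copy(), x_snap.copy()\n    x_avg = np.zeros_like(x_snap)\n    for _ in range(m_s):\n        i = rng.integers(n)\n        # Variance-reduced gradient estimator (line 5)\n        v = grad_fi(i, x) - grad_fi(i, x_snap) + mu\n        # Update of the auxiliary variable y (line 6)\n        y = y - eta * (v + grad_g(x))\n        # Momentum update of x (line 7)\n        x = x_snap + theta * (y - x_snap)\n        x_avg += x \/ m_s\n    # Next snapshot point: average of the past iterates (line 9)\n    return x_avg\n\\end{verbatim}\n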
\n\nUnlike the SC case, we initialize $y^{s+\\!1}_{0}\\!=\\!y^{s}_{m_{s}}$ in each epoch for the case of NSC objectives. The update rule of $\\theta_{s}$ is then defined as follows: $\\theta_{1}\\!=\\!1\\!-\\!{L\\eta}\/({1\\!-\\!L\\eta})$, and for any $s\\!>\\!1$,\n\\vspace{-1mm}\n\\begin{equation}\\label{equ11}\n\\theta_{s}=(\\sqrt{\\theta^{4}_{s-\\!1}+4\\theta^{2}_{s-\\!1}}-\\theta^{2}_{s-\\!1})\/{2}.\n\\end{equation}\nThe above rule is the same as that used in some accelerated optimization methods~\\cite{nesterov:co,su:nag,liu:sadmm}.\n\n\n\\subsubsection{Complexity Analysis}\nThe per-iteration cost of FSVRG is dominated by the computation of $\\nabla\\! f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})$, $\\nabla\\! f_{i^{s}_{k}}\\!(\\widetilde{x}^{s})$, and $\\nabla\\!g(x^{s}_{k-\\!1})$ or the proximal update~\\eqref{equ6}, which is as low as that of SVRG~\\cite{johnson:svrg} and SVRG++~\\cite{zhu:univr}. For some ERM problems, we can save the intermediate gradients $\\nabla\\! f_{i}(\\widetilde{x}^{s})$ in the computation of $\\widetilde{\\mu}$, which requires $O(n)$ additional storage in general. In addition, FSVRG has a much lower per-iteration complexity than other accelerated methods such as Katyusha~\\cite{zhu:Katyusha}, which have at least one more variable, as analyzed above.\n\n\n\\subsection{Non-Smooth Component Functions}\nIn this part, we consider the case of Problem (\\ref{equ1}) when each $f_{i}(\\cdot)$ is non-smooth (e.g., the hinge loss and other loss functions listed in~\\cite{yang:ssgd}), and $\\phi(\\cdot)$ is SC or NSC (i.e.\\ Case 3 or 4). As stated in Section 2, the two classes of problems can be transformed into smooth ones as in~\\cite{nesterov:smooth,zhu:box,xu:hs}, which can be efficiently solved by Algorithm~\\ref{alg1}. However, the smoothing techniques may degrade the performance of the involved algorithms, similar to the case of the reduction from NSC problems to SC problems~\\cite{zhu:box}. Thus, we extend Algorithm~\\ref{alg1} to the non-smooth setting, and propose a fast stochastic variance reduced sub-gradient algorithm (i.e., Algorithm~\\ref{alg2}) to solve such problems directly, in the same way as Algorithm~\\ref{alg1} directly solves the NSC problems in Case 2.\n\nFor each outer iteration $s$ and inner iteration $k$, we denote by $\\widetilde{\\partial}f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})$ the stochastic sub-gradient $\\partial f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})\\!-\\!\\partial f_{i^{s}_{k}}\\!(\\widetilde{x}^{s-\\!1})\\!+\\!\\widetilde{\\xi}$, where $\\widetilde{\\xi}\\!=\\!\\!\\frac{1}{n}\\!\\!\\sum^{n}_{i=1}\\!\\!\\partial f_{i}(\\widetilde{x}^{s-\\!1})$, and $\\partial f_{i}(\\widetilde{x}^{s-\\!1})$ denotes a sub-gradient of $f_{i}(\\cdot)$ at $\\widetilde{x}^{s-\\!1}$. When the regularizer $g(\\cdot)$ is smooth, the update rule of $y$ is given by\n\\vspace{-1mm}\n\\begin{equation}\\label{equ12}\ny^{s}_{k}=\\Pi_{\\mathcal{K}}\\!\\!\\left[y^{s}_{k-\\!1}-\\eta\\left(\\widetilde{\\partial}f_{i^{s}_{k}}\\!(x^{s}_{k-\\!1})+\\nabla g(x^{s}_{k-\\!1})\\right)\\right]\\!,\n\\end{equation}\nwhere $\\Pi_{\\mathcal{K}}$ denotes the orthogonal projection onto the convex domain $\\mathcal{K}$. 
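\nA minimal sketch of the projected update in~\\eqref{equ12} is given below (NumPy; \\texttt{subgrad\\_fi} and \\texttt{grad\\_g} are illustrative oracles, and the Euclidean ball is used as one simple instance of the domain $\\mathcal{K}$):\n\\begin{verbatim}\nimport numpy as np\n\ndef project_ball(y, radius):\n    # Orthogonal projection onto a Euclidean ball, one instance of Pi_K\n    norm = np.linalg.norm(y)\n    return y if norm <= radius else y * (radius \/ norm)\n\ndef nonsmooth_step(y, x, x_snap, xi, i, subgrad_fi, grad_g, eta, radius):\n    # Variance-reduced stochastic sub-gradient (line 5 of Algorithm 2)\n    v = subgrad_fi(i, x) - subgrad_fi(i, x_snap) + xi\n    # Projected update of the auxiliary variable y (line 6)\n    return project_ball(y - eta * (v + grad_g(x)), radius)\n\\end{verbatim}\n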
\subsubsection{Complexity Analysis}
The per-iteration cost of FSVRG is dominated by the computation of $\nabla f_{i^{s}_{k}}(x^{s}_{k-1})$, $\nabla f_{i^{s}_{k}}(\widetilde{x}^{s})$, and $\nabla g(x^{s}_{k-1})$ or the proximal update~\eqref{equ6}, which is as low as that of SVRG~\cite{johnson:svrg} and SVRG++~\cite{zhu:univr}. For some ERM problems, we can save the intermediate gradients $\nabla f_{i}(\widetilde{x}^{s})$ computed when forming $\widetilde{\mu}$, which in general requires $O(n)$ additional storage. In addition, FSVRG has a much lower per-iteration complexity than other accelerated methods such as Katyusha~\cite{zhu:Katyusha}, which maintain at least one more variable, as analyzed above.


\subsection{Non-Smooth Component Functions}
In this part, we consider the case of Problem (\ref{equ1}) when each $f_{i}(\cdot)$ is non-smooth (e.g., the hinge loss and other loss functions listed in~\cite{yang:ssgd}), and $\phi(\cdot)$ is SC or NSC (i.e., Case 3 or 4). As stated in Section 2, these two classes of problems can be transformed into smooth ones as in~\cite{nesterov:smooth,zhu:box,xu:hs}, which can then be efficiently solved by Algorithm~\ref{alg1}. However, such smoothing techniques may degrade the performance of the involved algorithms, similar to the reduction from NSC problems to SC problems~\cite{zhu:box}. Thus, we extend Algorithm~\ref{alg1} to the non-smooth setting and propose a fast stochastic variance reduced sub-gradient algorithm (i.e., Algorithm~\ref{alg2}) to solve such problems directly, just as Algorithm~\ref{alg1} directly solves the NSC problems in Case 2.

For each outer iteration $s$ and inner iteration $k$, we denote by $\widetilde{\partial}f_{i^{s}_{k}}(x^{s}_{k-1})$ the stochastic sub-gradient $\partial f_{i^{s}_{k}}(x^{s}_{k-1})-\partial f_{i^{s}_{k}}(\widetilde{x}^{s-1})+\widetilde{\xi}$, where $\widetilde{\xi}=\frac{1}{n}\sum^{n}_{i=1}\partial f_{i}(\widetilde{x}^{s-1})$, and $\partial f_{i}(\widetilde{x}^{s-1})$ denotes a sub-gradient of $f_{i}(\cdot)$ at $\widetilde{x}^{s-1}$. When the regularizer $g(\cdot)$ is smooth, the update rule of $y$ is given by
\begin{equation}\label{equ12}
y^{s}_{k}=\Pi_{\mathcal{K}}\!\left[y^{s}_{k-1}-\eta\left(\widetilde{\partial}f_{i^{s}_{k}}(x^{s}_{k-1})+\nabla g(x^{s}_{k-1})\right)\right]\!,
\end{equation}
where $\Pi_{\mathcal{K}}$ denotes the orthogonal projection onto the convex domain $\mathcal{K}$. Following the acceleration techniques for stochastic sub-gradient methods~\cite{rakhlin:sgd,julien:ssg,shamir:sgd}, a general weighted averaging scheme is formulated as follows:
\begin{equation}\label{equ13}
\widetilde{x}^{s}=\frac{1}{\sum_{k} w_{k}}\sum^{m_{s}}_{k=1}w_{k}x^{s}_{k},
\end{equation}
where $w_{k}$ is a given weight, e.g., $w_{k}=1/m_{s}$.

\begin{algorithm}[t]
\caption{FSVRG for non-smooth component functions}
\label{alg2}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Initialize:}}
\renewcommand{\algorithmicoutput}{\textbf{Output:}}
\begin{algorithmic}[1]
\REQUIRE the number of epochs $S$ and the step size $\eta$.\\
\ENSURE $\widetilde{x}^{0}$, $m_{1}$, $\theta_{1}$, $\rho>1$, and the weights $\{w_{k}\}$.\\
\FOR{$s=1,2,\ldots,S$}
\STATE {$\widetilde{\xi}=\frac{1}{n}\sum^{n}_{i=1}\partial f_{i}(\widetilde{x}^{s-1})$, $x^{s}_{0}=y^{s}_{0}=\widetilde{x}^{s-1}$;}
\FOR{$k=1,2,\ldots,m_{s}$}
\STATE {Pick $i^{s}_{k}$ uniformly at random from $\{1,\ldots,n\}$;}
\STATE {$\widetilde{\partial}f_{i^{s}_{k}}(x^{s}_{k-1})=\partial f_{i^{s}_{k}}(x^{s}_{k-1})-\partial f_{i^{s}_{k}}(\widetilde{x}^{s-1})+\widetilde{\xi}$;}
\STATE {$y^{s}_{k}=\Pi_{\mathcal{K}}\!\left[y^{s}_{k-1}-\eta\,(\widetilde{\partial}f_{i^{s}_{k}}(x^{s}_{k-1})+\nabla g(x^{s}_{k-1}))\right]$;}
\STATE {$x^{s}_{k}=\widetilde{x}^{s-1}+\theta_{s}(y^{s}_{k}-\widetilde{x}^{s-1})$;}
\ENDFOR
\STATE {$\widetilde{x}^{s}=\frac{1}{\sum_{k} w_{k}}\sum^{m_{s}}_{k=1}w_{k} x^{s}_{k}$, $\,m_{s+1}=\lceil\rho^{s}\cdot m_{1}\rceil$;}
\ENDFOR
\OUTPUT {$\widetilde{x}^{S}$}
\end{algorithmic}
\end{algorithm}
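To connect the pseudocode to an executable form, the following Python sketch mirrors Algorithm~\ref{alg2} under simplifying assumptions: a fixed momentum weight $\theta$, uniform weights $w_{k}=1/m_{s}$ in \eqref{equ13}, and an assumed, purely illustrative Euclidean-ball domain $\mathcal{K}$ so that $\Pi_{\mathcal{K}}$ has a closed form; \texttt{subgrad\_f(i, x)} and \texttt{grad\_g} are user-supplied:
\begin{verbatim}
import numpy as np

def project_ball(x, radius=10.0):
    # Orthogonal projection onto K = {x : ||x|| <= radius}.
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def fsvrg_subgradient(subgrad_f, grad_g, x0, n, S, m1, eta,
                      theta, rho=1.5):
    x_snap, m_s = x0.copy(), m1
    for s in range(S):
        # Full sub-gradient at the snapshot point (xi tilde).
        xi = np.mean([subgrad_f(i, x_snap) for i in range(n)], axis=0)
        x, y = x_snap.copy(), x_snap.copy()
        x_sum = np.zeros_like(x0)
        for k in range(m_s):
            i = np.random.randint(n)
            # Variance-reduced stochastic sub-gradient.
            vr = subgrad_f(i, x) - subgrad_f(i, x_snap) + xi
            # Projected update (12) and momentum update (8).
            y = project_ball(y - eta * (vr + grad_g(x)))
            x = x_snap + theta * (y - x_snap)
            x_sum += x
        x_snap = x_sum / m_s           # weighted average (13), w_k = 1/m_s
        m_s = int(np.ceil(rho * m_s))  # growing epoch size
    return x_snap
\end{verbatim}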
\section{Convergence Analysis}
In this section, we provide the convergence analysis of FSVRG for solving the two classes of problems in Cases 1 and 2. Before giving a key intermediate result, we first introduce the following two definitions.

\begin{definition}[Smoothness]\label{assum1}
A function $h(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}$ is $L$-smooth if its gradient is $L$-Lipschitz, that is, $\|\nabla h(x)-\nabla h(y)\|\leq L\|x-y\|$ for all $x,y\in\mathbb{R}^{d}$.
\end{definition}

\begin{definition}[Strong Convexity]\label{assum2}
A function $\phi(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}$ is $\mu$-strongly convex ($\mu$-SC) if there exists a constant $\mu>0$ such that for any $x,y\in\mathbb{R}^{d}$,
\begin{equation}\label{equ30}
\phi(y)\geq \phi(x)+\nabla\phi(x)^{T}(y-x)+\frac{\mu}{2}\|y-x\|^{2}.
\end{equation}
\end{definition}
If $\phi(\cdot)$ is non-smooth, we can revise the inequality~\eqref{equ30} by simply replacing $\nabla\phi(x)$ with an arbitrary sub-gradient $\partial\phi(x)$.

\begin{lemma}\label{lemm1}
Suppose each component function $f_{i}(\cdot)$ is $L$-smooth. Let $x_{\star}$ be the optimal solution of Problem~\eqref{equ1}, and $\{\widetilde{x}^{s},y^{s}_{k}\}$ be the sequence generated by Algorithm~\ref{alg1}. Then the following inequality holds for all $s=1,\ldots,S$:
\begin{equation}\label{equ31}
\begin{split}
\mathbb{E}\!\left[\phi(\widetilde{x}^{s})-\phi(x_{\star})\right]\leq&\,(1-\theta_{s})\mathbb{E}\!\left[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})\right]\\
&+\frac{\theta^{2}_{s}}{2\eta m_{s}}\mathbb{E}\!\left[\|y^{s}_{0}-x_{\star}\|^2-\|y^{s}_{m_{s}}-x_{\star}\|^2\right]\!.
\end{split}
\end{equation}
\end{lemma}

The detailed proof of Lemma~\ref{lemm1} is provided in the Appendix. To prove Lemma~\ref{lemm1}, we first give the following lemmas, which are useful for the convergence analysis of FSVRG.

\begin{lemma}[Variance bound, \cite{zhu:Katyusha}]
\label{lemm2}
Suppose each function $f_{i}(\cdot)$ is $L$-smooth. Then the following inequality holds:
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\left\|\widetilde{\nabla} f_{i^{s}_{k}}(x^{s}_{k-1})-\nabla f(x^{s}_{k-1})\right\|^{2}\right]\\
\leq&\,2L\!\left[f(\widetilde{x}^{s-1})-f(x^{s}_{k-1})+[\nabla f(x^{s}_{k-1})]^{T}(x^{s}_{k-1}-\widetilde{x}^{s-1})\right].
\end{split}
\end{equation*}
\end{lemma}

Lemma~\ref{lemm2} is essentially identical to Lemma 3.4 in~\cite{zhu:Katyusha}. It provides a tighter upper bound on the expected variance of the variance-reduced gradient estimator $\widetilde{\nabla} f_{i^{s}_{k}}(x^{s}_{k-1})$ than those of~\cite{xiao:prox-svrg,zhu:univr}, e.g., Corollary 3.5 in~\cite{xiao:prox-svrg}.

\begin{lemma}[3-point property, \cite{lan:sgd}]
\label{lemm3}
Assume that $z^{*}$ is an optimal solution of the following problem,
\begin{displaymath}
\min_{z}\,({\tau}/{2})\!\cdot\!\|z-z_{0}\|^{2}+\psi(z),
\end{displaymath}
where $\psi(z)$ is a convex function (but possibly non-differentiable). Then for any $z\in\mathbb{R}^{d}$, we have
\begin{displaymath}
\psi(z)+\frac{\tau}{2}\|z-z_{0}\|^{2}\geq \psi(z^{*})+\frac{\tau}{2}\|z^{*}-z_{0}\|^{2}+\frac{\tau}{2}\|z-z^{*}\|^{2}.
\end{displaymath}
\end{lemma}
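As an illustration of Lemma~\ref{lemm3}, the following hypothetical numeric sketch checks the 3-point property in the scalar case $\psi(z)=\lambda|z|$, where the minimizer $z^{*}$ is given in closed form by soft-thresholding:
\begin{verbatim}
import random

tau, lam, z0 = 2.0, 0.5, 0.8        # assumed illustrative values
psi = lambda z: lam * abs(z)
# Closed-form minimizer of (tau/2)(z - z0)^2 + lam*|z|.
mag = max(abs(z0) - lam / tau, 0.0)
z_star = mag if z0 >= 0 else -mag
for _ in range(1000):
    z = random.uniform(-5.0, 5.0)
    lhs = psi(z) + 0.5 * tau * (z - z0) ** 2
    rhs = (psi(z_star) + 0.5 * tau * (z_star - z0) ** 2
           + 0.5 * tau * (z - z_star) ** 2)
    assert lhs >= rhs - 1e-9        # 3-point property of Lemma 3
\end{verbatim}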
\subsection{Convergence Properties for Case 1}
For SC objectives with smooth component functions (i.e., Case 1), we analyze the convergence property of FSVRG.

\begin{theorem}[Strongly Convex]\label{theo1}
Suppose each $f_{i}(\cdot)$ is $L$-smooth, $\phi(\cdot)$ is $\mu$-SC, $\theta_{s}$ is a constant $\theta$ for Case 1, and $m_{s}$ is sufficiently large\footnote{If $m_{1}$ is not sufficiently large, the first epoch can be viewed as an initialization step.} so that
\begin{equation*}
\alpha_{s}:=1-\theta+{\theta^{2}}/({\mu\eta m_{s}})< 1.
\end{equation*}
Then Algorithm~\ref{alg1} converges in expectation:
\begin{equation}\label{equ32}
\mathbb{E}\!\left[\phi(\widetilde{x}^{S})-\phi(x_{\star})\right]\leq\left(\prod^{S}_{s=1}\alpha_{s}\right)[\phi(\widetilde{x}^{0})-\phi(x_{\star})].
\end{equation}
\end{theorem}

\begin{proof}
Since $\phi(x)$ is $\mu$-SC, there exists a constant $\mu>0$ such that for all $x\in\mathbb{R}^{d}$,
\begin{equation*}
\phi(x)\geq \phi(x_{\star})+[\nabla\phi(x_{\star})]^{T}(x-x_{\star})+\frac{\mu}{2}\|x-x_{\star}\|^{2}.
\end{equation*}

Since $x_{\star}$ is the optimal solution, we have
\begin{equation}\label{equ33}
\nabla\phi(x_{\star})=0\,\footnote{When $\phi(\cdot)$ is non-smooth, there must exist a sub-gradient $\partial\phi(x_{\star})$ of $\phi(\cdot)$ at $x_{\star}$ such that $\partial\phi(x_{\star})=0$.},\;\;\;\phi(x)-\phi(x_{\star})\geq \frac{\mu}{2}\|x-x_{\star}\|^{2}.
\end{equation}
Using the inequality in \eqref{equ33} and $y^{s}_{0}=\widetilde{x}^{s-1}$, we have
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\phi(\widetilde{x}^{s})-\phi(x_{\star})\right]\\
\leq&\,(1-\theta)\mathbb{E}[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})]+\frac{\theta^{2}}{2\eta m_{s}}\mathbb{E}\!\left[\|y^{s}_{0}-x_{\star}\|^2-\|y^{s}_{m_{s}}-x_{\star}\|^2\right]\\
\leq&\,(1-\theta)\mathbb{E}[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})]+\frac{\theta^{2}}{\mu\eta m_{s}}\mathbb{E}\!\left[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})\right]\\
=&\,\left(1-\theta+\frac{\theta^{2}}{\mu\eta m_{s}}\right)\mathbb{E}\!\left[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})\right]\!,
\end{split}
\end{equation*}
where the first inequality holds due to Lemma~\ref{lemm1}, and the second inequality follows from the inequality in \eqref{equ33} applied with $x=\widetilde{x}^{s-1}$, after dropping the non-positive term $-\|y^{s}_{m_{s}}-x_{\star}\|^{2}$. Applying this contraction recursively over $s=1,\ldots,S$ yields \eqref{equ32}.
\end{proof}

From Theorem~\ref{theo1}, it is clear that $\alpha_{s}$ decreases as $s$ increases (since $m_{s}$ grows), i.e., $1>\alpha_{1}>\alpha_{2}>\ldots>\alpha_{S}$. Therefore, there exists a positive constant $\gamma<1$ such that $\alpha_{s}\leq\alpha_{1}\gamma^{s-1}$ for all $s=1,\ldots,S$. Then the inequality in \eqref{equ32} can be rewritten as $\mathbb{E}\!\left[\phi(\widetilde{x}^{S})-\phi(x_{\star})\right]\leq(\alpha_{1}\sqrt{\gamma^{S-1}})^{S}[\phi(\widetilde{x}^{0})-\phi(x_{\star})]$, which implies that FSVRG attains linear (geometric) convergence.
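As a hypothetical numeric illustration of Theorem~\ref{theo1} (all constants below are assumed values), the growing epoch length $m_{s+1}=\rho m_{s}$ keeps each factor $\alpha_{s}$ below one, and the product of the factors drives the objective-gap bound down geometrically:
\begin{verbatim}
mu, eta, theta, rho = 0.1, 0.3, 0.5, 1.6  # assumed values
m_s, gap_bound = 1000.0, 1.0              # epoch length, scaled initial gap
for s in range(1, 11):
    alpha_s = 1.0 - theta + theta**2 / (mu * eta * m_s)
    assert alpha_s < 1.0                  # convergence factor of Theorem 1
    gap_bound *= alpha_s
    m_s *= rho                            # growing epoch size
print("bound on the gap after 10 epochs:", gap_bound)
\end{verbatim}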
\subsection{Convergence Properties for Case 2}
\label{sect42}
For NSC objectives with smooth component functions (i.e., Case 2), the following theorem gives the convergence rate and overall complexity of FSVRG.

\begin{theorem}[Non-Strongly Convex]\label{theo2}
Suppose each $f_{i}(\cdot)$ is $L$-smooth. Then the following inequality holds:
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\phi(\widetilde{x}^{S})-\phi(x_{\star})\right]\\
\leq&\,\frac{4(1-\theta_{1})}{\theta^{2}_{1}(S+2)^{2}}[\phi(\widetilde{x}^{0})-\phi(x_{\star})]+\frac{2}{\eta m_{1}(S+2)^{2}}\|x_{\star}-\widetilde{x}^{0}\|^2.
\end{split}
\end{equation*}
In particular, choosing $m_{1}=\Theta(n)$, Algorithm~\ref{alg1} achieves an $\varepsilon$-accurate solution, i.e., $\mathbb{E}[\phi(\widetilde{x}^{S})]-\phi(x_{\star})\leq \varepsilon$, using at most $\mathcal{O}(\frac{n\sqrt{\phi(\widetilde{x}^{0})-\phi(x_{\star})}}{\sqrt{\varepsilon}}+\frac{\sqrt{nL}\|\widetilde{x}^{0}-x_{\star}\|}{\sqrt{\varepsilon}})$ iterations.
\end{theorem}

\begin{proof}
Using the update rule of $\theta_{s}$ in (\ref{equ11}), it is easy to verify that
\begin{equation}\label{equ34}
({1-\theta_{s+1}})/{\theta^{2}_{s+1}}\leq{1}/{\theta^{2}_{s}},\;\;\theta_{s}\leq2/(s+2).
\end{equation}
Dividing both sides of the inequality in (\ref{equ31}) by $\theta^{2}_{s}$, we have
\begin{equation*}
\begin{split}
&\mathbb{E}[\phi(\widetilde{x}^{s})-\phi(x_{\star})]/\theta^{2}_{s}\\
\leq&\,\frac{1-\theta_{s}}{\theta^{2}_{s}}\mathbb{E}[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})]+\frac{1}{2\eta m_{s}}\mathbb{E}[\|x_{\star}-y^{s}_{0}\|^2-\|x_{\star}-y^{s}_{m_{s}}\|^2],
\end{split}
\end{equation*}
for all $s=1,\ldots,S$.
By $y^{s+1}_{0}=y^{s}_{m_{s}}$, the first inequality in (\ref{equ34}), and $m_{s}\geq m_{1}$, summing the above inequality over $s=1,\ldots,S$ yields
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\phi(\widetilde{x}^{S})-\phi(x_{\star})\right]/\theta^{2}_{S}\\
\leq&\,\frac{1-\theta_{1}}{\theta^{2}_{1}}[\phi(\widetilde{x}^{0})-\phi(x_{\star})]+\frac{1}{2\eta m_{1}}\mathbb{E}\!\left[\|x_{\star}-y^{1}_{0}\|^2-\|x_{\star}-y^{S}_{m_{S}}\|^2\right]\!.
\end{split}
\end{equation*}

Since $\theta_{S}\leq 2/(S+2)$ by the second inequality in (\ref{equ34}), we then have
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\phi(\widetilde{x}^{S})-\phi(x_{\star})\right]\\
\leq&\,\frac{4(1-\theta_{1})}{\theta^{2}_{1}(S+2)^{2}}[\phi(\widetilde{x}^{0})-\phi(x_{\star})]+\frac{2}{\eta m_{1}(S+2)^{2}}\mathbb{E}\!\left[\|x_{\star}-y^{1}_{0}\|^2-\|x_{\star}-y^{S}_{m_{S}}\|^2\right]\\
\leq&\,\frac{4(1-\theta_{1})}{\theta^{2}_{1}(S+2)^{2}}[\phi(\widetilde{x}^{0})-\phi(x_{\star})]+\frac{2}{\eta m_{1}(S+2)^{2}}\|x_{\star}-\widetilde{x}^{0}\|^2.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}

\begin{figure*}[t]
\centering
\includegraphics[width=0.496\columnwidth]{Fig11-eps-converted-to.pdf}
\includegraphics[width=0.496\columnwidth]{Fig13-eps-converted-to.pdf}
\includegraphics[width=0.496\columnwidth]{Fig15-eps-converted-to.pdf}
\includegraphics[width=0.496\columnwidth]{Fig17-eps-converted-to.pdf}

\subfigure[IJCNN: $\lambda=10^{-4}$]{\includegraphics[width=0.496\columnwidth]{Fig12-eps-converted-to.pdf}}
\subfigure[Protein: $\lambda=10^{-4}$]{\includegraphics[width=0.496\columnwidth]{Fig14-eps-converted-to.pdf}}
\subfigure[Covtype: $\lambda=10^{-5}$]{\includegraphics[width=0.496\columnwidth]{Fig16-eps-converted-to.pdf}}
\subfigure[SUSY: $\lambda=10^{-6}$]{\includegraphics[width=0.496\columnwidth]{Fig18-eps-converted-to.pdf}}

\caption{Comparison of SVRG~\cite{johnson:svrg}, SVRG++~\cite{zhu:vrnc}, Katyusha~\cite{zhu:Katyusha}, and our FSVRG method for $\ell_{2}$-norm (i.e., $\lambda\|x\|^{2}$) regularized logistic regression problems. The $y$-axis represents the objective value minus the minimum, and the $x$-axis corresponds to the number of effective passes (top) or running time (bottom).}
\label{figs1}
\end{figure*}

From Theorem~\ref{theo2}, we can see that FSVRG achieves the optimal convergence rate of $\mathcal{O}(1/T^2)$ and a complexity of $\mathcal{O}(n\sqrt{1/\varepsilon}+\sqrt{nL/\varepsilon})$ for NSC problems, which is consistent with the best known results in~\cite{zhu:Katyusha,hien:asmd}. By adding a proximal term to the problem of Case 2 as in~\cite{lin:vrsg,zhu:box}, one can achieve faster convergence. However, this hurts the performance of the algorithm both in theory and in practice~\cite{zhu:univr}.
\begin{figure*}[!th]
\centering
\subfigure[IJCNN: $\lambda_{1}=\lambda_{2}=10^{-4}$]{\includegraphics[width=0.496\columnwidth]{Fig42-eps-converted-to.pdf}}
\subfigure[Protein: $\lambda_{1}=\lambda_{2}=10^{-4}$]{\includegraphics[width=0.496\columnwidth]{Fig44-eps-converted-to.pdf}}
\subfigure[Covtype: $\lambda_{1}=\lambda_{2}=10^{-5}$]{\includegraphics[width=0.496\columnwidth]{Fig46-eps-converted-to.pdf}}
\subfigure[SUSY: $\lambda_{1}=\lambda_{2}=10^{-6}$]{\includegraphics[width=0.496\columnwidth]{Fig48-eps-converted-to.pdf}}

\caption{Comparison of Prox-SVRG~\cite{xiao:prox-svrg}, SVRG++~\cite{zhu:vrnc}, Katyusha~\cite{zhu:Katyusha}, and our FSVRG method for elastic net (i.e., $\lambda_{1}\|x\|^{2}+\lambda_{2}\|x\|_{1}$) regularized logistic regression problems.}
\label{figs2}
\end{figure*}


\subsection{Convergence Properties for Mini-Batch Settings}
It has been shown in~\cite{shwartz:svm,nitanda:svrg,koneeny:mini} that mini-batching can effectively decrease the variance of stochastic gradient estimates. Thus, we extend FSVRG and its convergence results to the mini-batch setting. Here, we denote by $b$ the mini-batch size and by $I^{s}_{k}\subset[n]$ the random index set selected for each outer iteration $s\in[S]$ and inner iteration $k\in\{1,\ldots,m_{s}\}$. Then the stochastic gradient estimator $\widetilde{\nabla}f_{i^{s}_{k}}(x^{s}_{k-1})$ becomes
\begin{equation}\label{equ36}
\widetilde{\nabla} f_{I^{s}_{k}}(x^{s}_{k-1})=\frac{1}{b}\sum_{i\in I^{s}_{k}}\left[\nabla f_{i}(x^{s}_{k-1})-\nabla f_{i}(\widetilde{x}^{s-1})\right]+{\nabla} f(\widetilde{x}^{s-1}).
\end{equation}
The momentum weight $\theta_{s}$ is required to satisfy $\theta_{s}\leq 1-\frac{\rho(b)L\eta}{1-L\eta}$ in both the SC and NSC cases, where $\rho(b)=(n-b)/[(n-1)b]$. The upper bound on the variance of $\widetilde{\nabla}f_{i^{s}_{k}}(x^{s}_{k-1})$ in Lemma~\ref{lemm2} is extended to the mini-batch setting as follows~\cite{liu:sadmm}.

\begin{corollary}[Variance bound of Mini-Batch]
\label{cor11}
\begin{displaymath}
\begin{split}
&\mathbb{E}\!\left[\left\|\widetilde{\nabla} f_{I^{s}_{k}}(x^{s}_{k-1})-\nabla f(x^{s}_{k-1})\right\|^{2}\right]\\
\leq&\,2L\rho(b)\!\left[\nabla f(x^{s}_{k-1})^{T}(x^{s}_{k-1}-\widetilde{x}^{s-1})-f(x^{s}_{k-1})+f(\widetilde{x}^{s-1})\right]\!.
\end{split}
\end{displaymath}
\end{corollary}

It is easy to verify that $0\leq\rho(b)\leq1$, which implies that mini-batching reduces the variance upper bound in Lemma~\ref{lemm2}.
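A minimal sketch of the mini-batch estimator \eqref{equ36} and the factor $\rho(b)$ follows (function names are hypothetical; \texttt{grad\_f(i, x)} is assumed to return $\nabla f_{i}(x)$):
\begin{verbatim}
def minibatch_vr_gradient(grad_f, x, x_snap, full_grad_snap, idx):
    # Mini-batch variance-reduced estimator (36) over the index set idx.
    diff = sum(grad_f(i, x) - grad_f(i, x_snap) for i in idx) / len(idx)
    return diff + full_grad_snap

def rho(n, b):
    # rho(b) = (n-b)/((n-1)b): equals 1 when b = 1, and 0 when b = n.
    return (n - b) / ((n - 1) * b)
\end{verbatim}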
Based on the variance upper bound in Corollary~\ref{cor11}, we further analyze the convergence properties of our algorithms in the mini-batch setting. Obviously, the number of stochastic iterations in each epoch is reduced from $m_{s}$ to $\lfloor m_{s}/b\rfloor$. For the case of SC objective functions, the mini-batch variant of FSVRG has almost identical convergence properties to those in Theorem~\ref{theo1}. In contrast, for the case of NSC objective functions we need to initialize $\theta_{1}=1-\frac{\rho(b){L}\eta}{1-{L}\eta}$ and update $\theta_{s}$ by the recursion in (\ref{equ11}). Theorem~\ref{theo2} is also extended to the mini-batch setting as follows.

\begin{corollary}\label{cor12}
Suppose each $f_{i}(\cdot)$ is $L$-smooth, and let $\theta_{1}=1-\frac{\rho(b){L}\eta}{1-{L}\eta}$ and $\beta={1}/{L\eta}$. Then the following inequality holds:
\begin{equation}\label{equ37}
\begin{split}
&\mathbb{E}\!\left[\phi(\widetilde{x}^{S})-\phi(x_{\star})\right]\leq\frac{2}{\eta m_{1}(S+2)^{2}}\|x_{\star}-\widetilde{x}^{0}\|^2\\
&\quad\;\;+\frac{4(\beta-1)\rho(b)}{(\beta-1-\rho(b))^{2}(S+2)^{2}}[\phi(\widetilde{x}^{0})-\phi(x_{\star})].
\end{split}
\end{equation}
\end{corollary}

\begin{proof}
Since
\begin{displaymath}
\theta_{1}=1-({\rho(b){L}\eta})/({1-{L}\eta})=1-{\rho(b)}/({\beta-1}),
\end{displaymath}
we have
\begin{displaymath}
\frac{4(1-\theta_{1})}{\theta^{2}_{1}}=\frac{4\rho(b)/(\beta-1)}{[(\beta-1-\rho(b))/(\beta-1)]^{2}}=\frac{4(\beta-1)\rho(b)}{(\beta-1-\rho(b))^{2}}.
\end{displaymath}
Plugging this $\theta_{1}$ into the bound of Theorem~\ref{theo2}, we have
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\phi(\widetilde{x}^{S})-\phi(x_{\star})\right]\\
\leq&\,\frac{4(\beta-1)\rho(b)}{(\beta-1-\rho(b))^{2}(S+2)^{2}}[\phi(\widetilde{x}^{0})-\phi(x_{\star})]+\frac{2}{\eta m_{1}(S+2)^{2}}\|x_{\star}-\widetilde{x}^{0}\|^2.
\end{split}
\end{equation*}
This completes the proof.
\end{proof}

\begin{remark}
When $b=1$, we have $\rho(1)=1$, and then Corollary~\ref{cor12} degenerates to Theorem~\ref{theo2}. If $b=n$ (i.e., the batch setting), we have $\rho(n)=0$, and the second term on the right-hand side of \eqref{equ37} vanishes. In other words, FSVRG degenerates to an accelerated deterministic method with the optimal convergence rate of $\mathcal{O}(1/T^{2})$.
\end{remark}


\section{Experimental Results}
\label{sec5}
In this section, we evaluate the performance of our FSVRG method for solving various machine learning problems, such as logistic regression, ridge regression, Lasso and SVM. All the code of FSVRG and the related methods can be downloaded from the first author's website.


\subsection{Experimental Setup}
For fair comparison, FSVRG and the related stochastic variance reduced methods, including SVRG~\cite{johnson:svrg}, Prox-SVRG~\cite{xiao:prox-svrg}, SVRG++~\cite{zhu:univr} and Katyusha~\cite{zhu:Katyusha}, were implemented in C++, and the experiments were performed on a PC with an Intel i5-2400 CPU and 16GB RAM. As suggested in~\cite{johnson:svrg,xiao:prox-svrg,zhu:Katyusha}, the epoch size is set to $m=2n$ for SVRG, Prox-SVRG, and Katyusha. FSVRG and SVRG++ use a similar strategy of growing the epoch size, e.g., $m_{1}=n/2$ and $\rho=1.6$ for FSVRG, and $m_{1}=n/4$ and $\rho=2$ for SVRG++. Thus, for all these methods there is only one parameter to tune, i.e., the learning rate. Note that we compare their performance in terms of both the number of effective passes (evaluating $n$ component gradients or computing a single full gradient is counted as one effective pass) and running time (seconds). Moreover, we do not compare with other stochastic algorithms such as SAGA~\cite{defazio:saga} and Catalyst~\cite{lin:vrsg}, as they have been shown to be comparable or inferior to Katyusha~\cite{zhu:Katyusha}.
\subsection{Logistic Regression}
In this part, we conducted experiments for both the $\ell_{2}$-norm and elastic net regularized logistic regression problems on four popular data sets: IJCNN, Covtype, SUSY, and Protein, all of which were obtained from the LIBSVM Data website\footnote{\url{https://www.csie.ntu.edu.tw/~cjlin/libsvm/}} and the KDD Cup 2004 website\footnote{\url{http://osmot.cs.cornell.edu/kddcup}}. Each example of these data sets was normalized to unit length as in~\cite{xiao:prox-svrg}, which leads to the same upper bound on the Lipschitz constants $L_{i}$ of the functions $f_{i}(\cdot)$.

Figures~\ref{figs1} and~\ref{figs2} show the performance of the different methods for solving the two classes of logistic regression problems, respectively. It can be seen that SVRG++ and FSVRG consistently converge much faster than the other methods in terms of both running time (seconds) and number of effective passes. The accelerated stochastic variance reduction method, Katyusha, performs much better than the standard SVRG method in terms of the number of effective passes, while it sometimes performs worse in terms of running time. FSVRG achieves consistent speedups on all the data sets, and outperforms the other methods in all the settings. The main reason is that FSVRG not only takes advantage of the momentum acceleration trick, but also can use much larger step sizes, e.g., $1/(3L)$ for FSVRG vs.\ $1/(7L)$ for SVRG++ vs.\ $1/(10L)$ for SVRG. This also confirms that FSVRG has a much lower per-iteration cost than Katyusha.


\begin{figure*}[!th]
\centering
\includegraphics[width=0.496\columnwidth]{Fig51-eps-converted-to.pdf}
\includegraphics[width=0.496\columnwidth]{Fig53-eps-converted-to.pdf}
\includegraphics[width=0.496\columnwidth]{Fig55-eps-converted-to.pdf}
\includegraphics[width=0.496\columnwidth]{Fig57-eps-converted-to.pdf}

\subfigure[IJCNN: $\lambda=10^{-4}$]{\includegraphics[width=0.496\columnwidth]{Fig52-eps-converted-to.pdf}}
\subfigure[Protein: $\lambda=10^{-4}$]{\includegraphics[width=0.496\columnwidth]{Fig54-eps-converted-to.pdf}}
\subfigure[Covtype: $\lambda=10^{-5}$]{\includegraphics[width=0.496\columnwidth]{Fig56-eps-converted-to.pdf}}
\subfigure[SUSY: $\lambda=10^{-6}$]{\includegraphics[width=0.496\columnwidth]{Fig58-eps-converted-to.pdf}}

\caption{Comparison of Prox-SVRG~\cite{xiao:prox-svrg}, SVRG++~\cite{zhu:vrnc}, Katyusha~\cite{zhu:Katyusha}, and FSVRG for Lasso problems.}
\label{figs3}
\end{figure*}


\subsection{Lasso}
In this part, we conducted experiments for the Lasso problem with the regularizer $\lambda\|x\|_{1}$ on the four data sets. We report the experimental results of the different methods in Figure~\ref{figs3}, where the regularization parameter is varied from $\lambda=10^{-4}$ to $\lambda=10^{-6}$. From all the results, we can observe that FSVRG converges much faster than the other methods, and also outperforms Katyusha in terms of the number of effective passes, which matches the optimal convergence rate for the NSC problem. SVRG++ achieves comparable and sometimes even better performance than SVRG and Katyusha. This further verifies the efficiency of the growing epoch size strategy in SVRG++ and FSVRG.
\subsection{Ridge Regression}
In this part, we implemented all the algorithms mentioned above in C++ for high-dimensional sparse data, and compared their performance for solving ridge regression problems on two very sparse data sets, Rcv1 and Sido0 (whose sparsity is 99.84\% and 90.16\%, respectively), as shown in Figure~\ref{figs4}. The two data sets can be downloaded from the LIBSVM Data website and the Causality Workbench website\footnote{\url{http://www.causality.inf.ethz.ch/home.php}}. From the results, one can observe that although Katyusha outperforms SVRG in terms of the number of effective passes, both of them usually have similar convergence speed. SVRG++ performs relatively worse than the other methods (possibly due to the large value of $\rho$, i.e., $\rho=2$). It can be seen that the objective value of FSVRG is much lower than those of the other methods, suggesting faster convergence.


\begin{figure}[t]
\centering
\includegraphics[width=0.486\columnwidth]{Fig61-eps-converted-to.pdf}
\includegraphics[width=0.486\columnwidth]{Fig63-eps-converted-to.pdf}

\subfigure[Rcv1]{\includegraphics[width=0.486\columnwidth]{Fig62-eps-converted-to.pdf}}
\subfigure[Sido0]{\includegraphics[width=0.486\columnwidth]{Fig64-eps-converted-to.pdf}}

\caption{Comparison of SVRG~\cite{johnson:svrg}, SVRG++~\cite{zhu:vrnc}, Katyusha~\cite{zhu:Katyusha}, and FSVRG for ridge regression problems with regularization parameter $\lambda=10^{-4}$.}
\label{figs4}
\end{figure}


Figure~\ref{figs5} compares the performance of our FSVRG method with different mini-batch sizes on the two data sets, IJCNN and Protein. It can be seen that with the mini-batch size increased to $b=2,4$, FSVRG has comparable or even better performance than in the case of $b=1$.

\begin{figure}[t]
\centering
\includegraphics[width=0.486\columnwidth]{Fig71-eps-converted-to.pdf}
\includegraphics[width=0.486\columnwidth]{Fig72-eps-converted-to.pdf}

\caption{Results of FSVRG with different mini-batch sizes on IJCNN (left) and Protein (right).}
\label{figs5}
\end{figure}


\subsection{SVM}
Finally, we evaluated the empirical performance of FSVRG for solving the SVM optimization problem
\begin{equation*}
\min_{x}\frac{1}{n}\sum^{n}_{i=1}\max\{0,1-b_{i}\langle a_{i},x\rangle\}+\frac{\lambda}{2}\|x\|^{2},
\end{equation*}
where $(a_{i},b_{i})$ is a feature-label pair. For the binary classification data set IJCNN, we randomly divided it into a 10\% training set and a 90\% test set. We used the standard one-vs-rest scheme for the multi-class data set, the MNIST data set\footnote{\url{http://yann.lecun.com/exdb/mnist/}}, which has a training set of 60,000 examples and a test set of 10,000 examples. The regularization parameter is set to $\lambda=10^{-5}$. Figure~\ref{figs6} shows the performance of the stochastic sub-gradient descent method (SSGD)~\cite{shamir:sgd}, SVRG and FSVRG for solving the SVM problem. Note that we also extend SVRG to the non-smooth setting, using the same sub-gradient scheme as in (\ref{equ12}). We can see that the variance reduced methods, SVRG and FSVRG, yield significantly better performance than SSGD, and FSVRG consistently outperforms SSGD and SVRG in terms of convergence speed and testing accuracy. Intuitively, the momentum acceleration trick in (\ref{equ8}) can lead to faster convergence. We leave the theoretical analysis of FSVRG for this case to future work.
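As a concrete instance of the sub-gradients used by Algorithm~\ref{alg2} in this experiment, a minimal sketch follows (the helper name is hypothetical; at the kink of the hinge loss we return the zero vector, which is a valid sub-gradient):
\begin{verbatim}
import numpy as np

def hinge_subgrad(x, a_i, b_i):
    # A sub-gradient of f_i(x) = max{0, 1 - b_i * <a_i, x>}.
    if b_i * np.dot(a_i, x) < 1.0:
        return -b_i * a_i
    return np.zeros_like(x)
\end{verbatim}
Together with $\nabla g(x)=\lambda x$ for the $\ell_{2}$-norm term, this can be plugged into the sketch given after Algorithm~\ref{alg2}.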
\begin{figure}[t]
\centering
\includegraphics[width=0.486\columnwidth]{Fig81-eps-converted-to.pdf}
\includegraphics[width=0.486\columnwidth]{Fig82-eps-converted-to.pdf}
\includegraphics[width=0.486\columnwidth]{Fig83-eps-converted-to.pdf}
\includegraphics[width=0.486\columnwidth]{Fig84-eps-converted-to.pdf}

\caption{Comparison of different methods for SVM problems on IJCNN (top) and MNIST (bottom).}
\label{figs6}
\end{figure}


\section{Discussion and Conclusion}
Recently, there has been a surge of interest in accelerating stochastic variance reduced gradient optimization. SVRG++~\cite{zhu:vrnc} uses the doubling-epoch technique (i.e., $m_{s+1}=2m_{s}$), which can reduce the number of gradient calculations in the early iterations and lead to faster convergence in general. In contrast, our FSVRG method uses a more general growing epoch size strategy as in~\cite{mahdavi:sgd}, i.e., $m_{s+1}=\rho m_{s}$ with the factor $\rho>1$, which means that we can be much more flexible in choosing $\rho$. Unlike SVRG++, FSVRG also enjoys the momentum acceleration trick and has a convergence guarantee for the SC problems in Case 1.

The momentum acceleration technique has been used in accelerated SGD~\cite{sutskever:sgd} and variance reduced methods~\cite{lan:rpdg,lin:vrsg,nitanda:svrg,zhu:Katyusha}. Different from other methods~\cite{lan:rpdg,lin:vrsg,nitanda:svrg}, the momentum term of FSVRG involves the snapshot point $\widetilde{x}^{s}$, which is also called the Katyusha momentum in~\cite{zhu:Katyusha}. It was shown in~\cite{zhu:Katyusha} that Katyusha has the best known overall complexities for both SC and NSC problems. As analyzed above, FSVRG is much simpler and more efficient than Katyusha, which has also been verified in our experiments. Therefore, FSVRG is suitable for large-scale machine learning and data mining problems.

In this paper, we proposed a simple and fast stochastic variance reduction gradient (FSVRG) method, which integrates the momentum acceleration trick and uses a more flexible growing epoch size strategy. Moreover, we provided convergence guarantees for our method, which show that FSVRG attains linear convergence for SC problems and achieves the optimal $\mathcal{O}(1/T^2)$ convergence rate for NSC problems. The empirical study showed that the performance of FSVRG is much better than that of the state-of-the-art stochastic methods. Besides the ERM problems in Section~\ref{sec5}, we can also apply FSVRG to other machine learning problems, e.g., deep learning and eigenvector computation~\cite{garber:svd}.


\section*{APPENDIX: Proof of Lemma 1}

\begin{proof}
The smoothness inequality in Definition~\ref{assum1} has the following equivalent form:
\begin{displaymath}
f(x_{2})\leq f(x_{1})+\langle\nabla f(x_{1}),x_{2}-x_{1}\rangle+0.5L\|x_{2}-x_{1}\|^{2},\;\forall x_{1},x_{2}\in\mathbb{R}^{d}.
\end{displaymath}
Let $\beta_{1}>2$ be an appropriate constant, $\beta_{2}=\beta_{1}-1>1$, and $\widetilde{\nabla}_{i^{s}_{k}}:=\widetilde{\nabla}f_{i^{s}_{k}}(x^{s}_{k-1})$.
Using the above inequality, we have
\begin{equation}\label{equ71}
\begin{split}
f(x^{s}_{k})\leq& f(x^{s}_{k-1})+\langle\nabla f(x^{s}_{k-1}),x^{s}_{k}-x^{s}_{k-1}\rangle+0.5L\|x^{s}_{k}-x^{s}_{k-1}\|^{2}\\
=& f(x^{s}_{k-1})+\left\langle\nabla f(x^{s}_{k-1}),x^{s}_{k}-x^{s}_{k-1}\right\rangle\\
&+0.5\beta_{1} L\left\|x^{s}_{k}-x^{s}_{k-1}\right\|^{2}-0.5\beta_{2}L\left\|x^{s}_{k}-x^{s}_{k-1}\right\|^{2}\\
=& f(x^{s}_{k-1})+\langle \widetilde{\nabla}_{i^{s}_{k}},x^{s}_{k}-x^{s}_{k-1}\rangle+0.5\beta_{1}L\|x^{s}_{k}-x^{s}_{k-1}\|^2\\
&+\langle\nabla f(x^{s}_{k-1})-\widetilde{\nabla}_{i^{s}_{k}},x^{s}_{k}-x^{s}_{k-1}\rangle-0.5\beta_{2}L\|x^{s}_{k}-x^{s}_{k-1}\|^{2}.
\end{split}
\end{equation}

Moreover, the expectation of the inner product term above can be bounded as follows:
\begin{equation}\label{equ72}
\begin{split}
&\mathbb{E}\!\left[\langle\nabla f(x^{s}_{k-1})-\widetilde{\nabla}_{i^{s}_{k}},\,x^{s}_{k}-x^{s}_{k-1}\rangle\right]\\
\leq\,& \mathbb{E}\!\left[\frac{1}{2\beta_{2}{L}}\|\nabla f(x^{s}_{k-1})-\widetilde{\nabla}_{i^{s}_{k}}\|^{2}+\frac{\beta_{2}{L}}{2}\|x^{s}_{k}-x^{s}_{k-1}\|^{2}\right]\\
\leq\,& \frac{1}{\beta_{2}}\!\left[f(\widetilde{x}^{s-1})-f(x^{s}_{k-1})-\left\langle\nabla f(x^{s}_{k-1}),\,\widetilde{x}^{s-1}-x^{s}_{k-1}\right\rangle\right]\\
&+{0.5\beta_{2}L}\,\mathbb{E}\!\left[\|x^{s}_{k}-x^{s}_{k-1}\|^{2}\right],
\end{split}
\end{equation}
where the first inequality follows from Young's inequality (i.e., $x^{T}y\leq{\|x\|^2}/{(2\beta)}+{\beta\|y\|^2}/{2}$ for all $\beta>0$), and the second inequality holds due to Lemma~\ref{lemm2}. Note that for the mini-batch setting, Lemma~\ref{lemm2} needs to be replaced with Corollary~\ref{cor11}, and all statements in the proof of this lemma can be revised by simply replacing $1/\beta_{2}$ with $\rho(b)/\beta_{2}$.

Substituting the inequality \eqref{equ72} into the inequality \eqref{equ71}, and taking expectation over $i^{s}_{k}$, we have
\begin{equation*}
\begin{split}
&\;\mathbb{E}\left[\phi(x^{s}_{k})\right]-f(x^{s}_{k-1})\\
\leq&\,\mathbb{E}\!\left[g(x^{s}_{k})+\langle\widetilde{\nabla}_{i^{s}_{k}}, x^{s}_{k}-x^{s}_{k-1}\rangle+0.5\beta_{1}{L}\|x^{s}_{k}-x^{s}_{k-1}\|^2\right]\\
&+\beta^{-1}_{2}\!\left[f(\widetilde{x}^{s-1})-f(x^{s}_{k-1})+\left\langle\nabla f(x^{s}_{k-1}),\,x^{s}_{k-1}-\widetilde{x}^{s-1}\right\rangle\right]\\
\leq&\,\mathbb{E}\!\left[\langle \theta_{s}\widetilde{\nabla}_{i^{s}_{k}}, y^{s}_{k}-y^{s}_{k-1}\rangle+0.5\beta_{1}{L}\theta_{s}^{2}\|y^{s}_{k}-y^{s}_{k-1}\|^2+\theta_{s} g(y^{s}_{k})\right]\\
&+(1-\theta_{s})g(\widetilde{x}^{s-1})+\beta^{-1}_{2}\!\left[f(\widetilde{x}^{s-1})-f(x^{s}_{k-1})+\langle\nabla f(x^{s}_{k-1}),x^{s}_{k-1}-\widetilde{x}^{s-1}\rangle\right]\\
\leq&\,\mathbb{E}\!\left[\langle \theta_{s}\widetilde{\nabla}_{i^{s}_{k}},x_{\star}-y^{s}_{k-1}\rangle+\frac{\beta_{1}{L}\theta_{s}^{2}}{2}(\|y^{s}_{k-1}-x_{\star}\|^2-\|y^{s}_{k}-x_{\star}\|^2)+\theta_{s} g(x_{\star})\right]\\
&+(1-\theta_{s})g(\widetilde{x}^{s-1})+\beta^{-1}_{2}\!\left[f(\widetilde{x}^{s-1})-f(x^{s}_{k-1})+\langle\nabla f(x^{s}_{k-1}),x^{s}_{k-1}-\widetilde{x}^{s-1}\rangle\right]\\
=&\,\mathbb{E}\!\left[\frac{\beta_{1}{L}\theta_{s}^{2}}{2}\left(\|y^{s}_{k-1}-x_{\star}\|^2-\|y^{s}_{k}-x_{\star}\|^2\right)+\theta_{s} g(x_{\star})\right]+(1-\theta_{s})g(\widetilde{x}^{s-1})\\
&+\left\langle\nabla f(x^{s}_{k-1}),\theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}+\beta^{-1}_{2}(x^{s}_{k-1}-\widetilde{x}^{s-1})\right\rangle+\beta^{-1}_{2}f(\widetilde{x}^{s-1})\\
&+\mathbb{E}\!\left[\left\langle-\nabla f_{i^{s}_{k}}(\widetilde{x}^{s-1})+\nabla f(\widetilde{x}^{s-1}),\theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}\right\rangle\right]-\beta^{-1}_{2}f(x^{s}_{k-1})\\
=&\,\mathbb{E}\!\left[\frac{\beta_{1}{L}\theta_{s}^{2}}{2}\left(\|y^{s}_{k-1}-x_{\star}\|^2-\|y^{s}_{k}-x_{\star}\|^2\right)+\theta_{s} g(x_{\star})\right]+(1-\theta_{s})g(\widetilde{x}^{s-1})\\
&+\left\langle\nabla f(x^{s}_{k-1}),\theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}+\beta^{-1}_{2}(x^{s}_{k-1}-\widetilde{x}^{s-1})\right\rangle\\
&+\beta^{-1}_{2}\left[f(\widetilde{x}^{s-1})-f(x^{s}_{k-1})\right],
\end{split}
\end{equation*}
where the second inequality follows from the facts that $x^{s}_{k}-x^{s}_{k-1}=\theta_{s}(y^{s}_{k}-y^{s}_{k-1})$ and
$g(\theta_{s}y^{s}_{k}+(1-\theta_{s})\widetilde{x}^{s-1})\leq \theta_{s} g(y^{s}_{k})+(1-\theta_{s})g(\widetilde{x}^{s-1})$; the third inequality holds due to Lemma~\ref{lemm3} with $z^{*}=y^{s}_{k}$, $z=x_{\star}$, $z_{0}=y^{s}_{k-1}$, $\tau=\beta_{1}L\theta_{s}$, and $\psi(z):=g(z)+\langle \widetilde{\nabla}_{i^{s}_{k}},z-y^{s}_{k-1}\rangle$ (or $\psi(z):=\langle \widetilde{\nabla}_{i^{s}_{k}}+\nabla g(x^{s}_{k-1}),z-y^{s}_{k-1}\rangle$).
The first equality holds due to the facts that
\begin{displaymath}
\begin{split}
&\langle \theta_{s}\widetilde{\nabla}_{i^{s}_{k}},\, x_{\star}-y^{s}_{k-1}\rangle=\langle \widetilde{\nabla}_{i^{s}_{k}},\, \theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}\rangle\\
=&\left\langle\nabla f_{i^{s}_{k}}(x^{s}_{k-1}),\, \theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}\right\rangle\\
&+\left\langle-\nabla f_{i^{s}_{k}}(\widetilde{x}^{s-1})+\nabla f(\widetilde{x}^{s-1}),\, \theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}\right\rangle,
\end{split}
\end{displaymath}
and $\mathbb{E}[\nabla f_{i^{s}_{k}}(x^{s}_{k-1})]=\nabla f(x^{s}_{k-1})$. The last equality follows from
$\mathbb{E}\!\left[\langle-\nabla f_{i^{s}_{k}}(\widetilde{x}^{s-1})+\nabla f(\widetilde{x}^{s-1}),\,\theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}\rangle\right]=0$.

In addition, the remaining inner product term can be bounded as follows:
\begin{equation*}
\begin{split}
&\langle\nabla f(x^{s}_{k-1}),\theta_{s} x_{\star}+(1-\theta_{s})\widetilde{x}^{s-1}-x^{s}_{k-1}+\beta^{-1}_{2}(x^{s}_{k-1}-\widetilde{x}^{s-1})\rangle\\
=&\langle\nabla f(x^{s}_{k-1}),\theta_{s} x_{\star}+(1-\theta_{s}-\beta^{-1}_{2})\widetilde{x}^{s-1}+\beta^{-1}_{2}x^{s}_{k-1}-x^{s}_{k-1}\rangle\\
\leq &f\!\left(\theta_{s} x_{\star}+(1-\theta_{s}-\beta^{-1}_{2})\widetilde{x}^{s-1}+\beta^{-1}_{2}x^{s}_{k-1}\right)-f(x^{s}_{k-1})\\
\leq &\theta_{s} f(x_{\star})+(1-\theta_{s}-\beta^{-1}_{2})f(\widetilde{x}^{s-1})+\beta^{-1}_{2}f(x^{s}_{k-1})-f(x^{s}_{k-1}),
\end{split}
\end{equation*}
where the first inequality holds due to the fact that $\langle \nabla f(x_{1}),x_{2}-x_{1}\rangle\leq f(x_{2})-f(x_{1})$, and the last inequality follows from the convexity of $f(\cdot)$ and $1-\theta_{s}-\beta^{-1}_{2}\geq0$, which can easily be satisfied.
Combining the above two inequalities, we have
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\phi(x^{s}_{k})\right]\\
\leq&\,\theta_{s}\phi(x_{\star})+(1-\theta_{s})\phi(\widetilde{x}^{s-1})+\frac{\beta_{1}{L}\theta_{s}^{2}}{2}\mathbb{E}\!\left[\|y^{s}_{k-1}-x_{\star}\|^2-\|y^{s}_{k}-x_{\star}\|^2\right]\!.
\end{split}
\end{equation*}
Subtracting $\phi(x_{\star})$ from both sides yields
\begin{equation*}
\begin{split}
&\mathbb{E}\!\left[\phi(x^{s}_{k})-\phi(x_{\star})\right]\\
\leq&\,(1-\theta_{s})[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})]+\frac{\beta_{1}{L}\theta_{s}^{2}}{2}\mathbb{E}\!\left[\|y^{s}_{k-1}-x_{\star}\|^2-\|y^{s}_{k}-x_{\star}\|^2\right]\!.
\end{split}
\end{equation*}
Due to the convexity of $\phi(\cdot)$ and the definition $\widetilde{x}^{s}=\frac{1}{m_{s}}\sum^{m_{s}}_{k=1}x^{s}_{k}$, we have $\phi\!\left(\frac{1}{m_{s}}\sum^{m_{s}}_{k=1}x^{s}_{k}\right)\leq\frac{1}{m_{s}}\sum^{m_{s}}_{k=1}\phi(x^{s}_{k})$. Taking expectation over the history of $i_{1},\ldots,i_{m_{s}}$ on the above inequality, summing it up over $k=1,\ldots,m_{s}$ at the $s$-th stage, and recalling that the learning rate satisfies $\eta=1/(\beta_{1}L)$ (so that $\beta_{1}L\theta_{s}^{2}/2=\theta_{s}^{2}/(2\eta)$), we have
$\mathbb{E}\!\left[\phi(\widetilde{x}^{s})-\phi(x_{\star})\right]
\leq\frac{\theta_{s}^{2}}{2\eta m_{s}}\mathbb{E}\!\left[\|y^{s}_{0}-x_{\star}\|^2-\|y^{s}_{m_{s}}-x_{\star}\|^2\right]+(1-\theta_{s})\mathbb{E}[\phi(\widetilde{x}^{s-1})-\phi(x_{\star})]$. This completes the proof.
\end{proof}

\bibliographystyle{abbrv}