diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzcmxq" "b/data_all_eng_slimpj/shuffled/split2/finalzzcmxq" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzcmxq" @@ -0,0 +1,5 @@ +{"text":"\\section{#1}\n}\n\\def\\hskip 1pt\\vrule width1pt\\hskip 1pt{\\hskip 1pt\\vrule width1pt\\hskip 1pt}\n\\def{\\mathsf b}{{\\mathsf b}}\n\\def\\mathsf P{\\mathsf P}\n\\def{\\mathbb H}{{\\mathbb H}}\n\\def{\\mathbb N}{{\\mathbb N}}\n\\def{\\mathbb R}{{\\mathbb R}}\n\\def{\\mathbb C}{{\\mathbb C}}\n\\def{\\mathbb Z}{{\\mathbb Z}}\n\\def{\\mathbb P}{{\\mathbb P}}\n\\def{{\\rm I}\\kern-1pt\\Pi}{{{\\rm I}\\kern-1pt\\Pi}}\n\\def{\\mathbb S}{{\\mathbb S}}\n\\def\\mathfrak{S}{\\mathfrak{S}}\n\\def\\mathbb T{\\mathbb T}\n\\def\\mathcal{T}{\\mathcal{T}}\n\\def\\alpha{\\alpha}\n\\def\\b #1;{{\\bf #1}}\n\\def{\\bf x}{{\\bf x}}\n\\def{\\bf k}{{\\bf k}}\n\\def{\\bf y}{{\\bf y}}\n\\def\\mathbf{u}{\\mathbf{u}}\n\\def{\\bf w}{{\\bf w}}\n\\def{\\bf z}{{\\bf z}}\n\\def{\\bf h}{{\\bf h}}\n\\def\\mathfrak{m}{\\mathfrak{m}}\n\\def\\mathbf{j}{\\mathbf{j}}\n\\def\\mathbf{r}^*{\\mathbf{r}^*}\n\\def{\\mathbf 0}{{\\mathbf 0}}\n\\def\\epsilon{\\epsilon}\n\\def\\lambda{\\lambda}\n\\def\\phi{\\phi}\n\\def\\varphi{\\varphi}\n\\def\\sigma{\\sigma}\n\\def\\mathbf{t}{\\mathbf{t}}\n\\def\\Delta{\\Delta}\n\\def\\delta{\\delta}\n\\def{\\cal F}{{\\cal F}}\n\\def{\\cal O}{{\\cal O}}\n\\def{\\cal P}{{\\cal P}}\n\\def{\\mathcal M}{{\\mathcal M}}\n\\def{\\mathcal C}{{\\mathcal C}}\n\\def\\CR{{\\mathcal C}}\n\\def{\\mathcal N}{{\\mathcal N}}\n\\def{\\cal Z}{{\\cal Z}}\n\\def{\\cal A}{{\\cal A}}\n\\def{\\cal V}{{\\cal V}}\n\\def{\\cal E}{{\\cal E}}\n\\def{\\mathcal H}{{\\mathcal H}}\n\\def{\\bf W}{{\\bf W}}\n\\def{\\mathbb W}{{\\mathbb W}}\n\\def\\mathbb{Y}{\\mathbb{Y}}\n\\def{\\bf Y}{{\\bf Y}}\n\\def{\\widetilde\\epsilon}{{\\widetilde\\epsilon}}\n\\def{\\widetilde E}{{\\widetilde E}}\n\\def{\\overline{B}}{{\\overline{B}}}\n\\def\\ip#1#2{{\\langle {#1}, {#2}\\rangle}}\n\\def\\derf#1#2{{#1}^{(#2)}}\n\\def\\node#1#2{x_{{#1},{#2}}}\n\\def\\mathop{\\hbox{{\\rm ess sup}}}{\\mathop{\\hbox{{\\rm ess sup}}}}\n\\def{{\\bf e}_{q+1}}{{{\\bf e}_{q+1}}}\n\\def\\begin{equation}{\\begin{equation}}\n\\def\\end{equation}{\\end{equation}}\n\\def\\begin{eqnarray}{\\begin{eqnarray}}\n\\def\\end{eqnarray}{\\end{eqnarray}}\n\\def\\eref#1{(\\ref{#1})}\n\\def\\displaystyle{\\displaystyle}\n\\def\\cfn#1{\\chi_{{}_{ #1}}}\n\\def\\varrho{\\varrho}\n\\def\\intpart#1{{\\lfloor{#1}\\rfloor}}\n\\def\\largeint#1{{\\lceil{#1}\\rceil}}\n\\def\\mathbf{ \\hat I}{\\mathbf{ \\hat I}}\n\\def\\mbox{{\\rm dist }}{\\mbox{{\\rm dist }}}\n\\def\\mbox{{\\rm Prob }}{\\mbox{{\\rm Prob }}}\n\\def\\donchitre#1#2{\\vskip 6.5cm\\noindent\n\\parbox[t]{1in}{\\special{eps:#1.eps x=6.5cm y=5.5cm}}\n\\hbox to 7cm{}\\parbox[t]{0.0cm}{\\special{eps:#2.eps x=6.5cm y=5.5cm}}}\n\\def\\chitra#1{\\vskip 9.5cm\\noindent\n\\hskip2in{\\special{eps:#1.eps x=8.5cm y=8.5cm}}\n}\n\\defFrom\\ {From\\ }\n\\def|\\!|\\!|{|\\!|\\!|}\n\\def{\\mathbb X}{{\\mathbb X}}\n\\def{\\mathbb B}{{\\mathbb B}}\n\\def\\mbox{\\mathsf{span }}{\\mbox{\\mathsf{span }}}\n\\def\\bs#1{{\\boldsymbol{#1}}}\n\\def\\bs{\\omega}{\\bs{\\omega}}\n\\def\\bs{\\theta}{\\bs{\\theta}}\n\\def\\mathbf{t}^*{\\mathbf{t}^*}\n\\def\\mathbf{x}^*{\\mathbf{x}^*}\n\\def\\mathsf{dist }{\\mathsf{dist }}\n\\def\\mathsf{supp\\ }{\\mathsf{supp\\ }}\n\\def\\corr#1{{\\color{red} {#1}}}\n\\title{Deep nets for local manifold learning}\n\\author{Charles K. 
Chui\\thanks{Department of Statistics, Stanford University, Stanford, CA 94305. The research of this author is supported by ARO Grant W911NF-15-1-0385.\n\\textsf{email:} ckchui@stanford.edu. }\\ \n\\ and\nH.~N.~Mhaskar\\thanks{Department of Mathematics, California Institute of Technology, Pasadena, CA 91125;\nInstitute of Mathematical Sciences, Claremont Graduate University, Claremont, CA 91711. The research of this author is supported in part by ARO Grant W911NF-15-1-0385.\n\\textsf{email:} hrushikesh.mhaskar@cgu.edu. } }\n\\date{}\n\\begin{document}\n\\maketitle\n\\begin{abstract}\nThe problem of extending a function $f$ defined on training data ${\\mathcal C}$ on an\nunknown manifold ${\\mathbb X}$ to the entire manifold and a tubular neighborhood of this manifold is considered in this paper. For ${\\mathbb X}$ embedded in a high dimensional ambient Euclidean space ${\\mathbb R}^D$, a deep learning algorithm is developed for finding a local coordinate system for the manifold \\textbf{without eigen--decomposition}, which reduces the problem to the classical problem of function approximation on a low dimensional cube. Deep nets (or multilayered neural networks) are proposed to accomplish this approximation scheme by using the training data. Our methods do not involve such optimization techniques as back--propagation, while assuring optimal (a priori) error bounds on the output in terms of the number of derivatives of the target function. In addition, these methods are universal, in that they do not require a priori knowledge of the smoothness of the target function, but adjust the accuracy of approximation locally and automatically, depending only upon the local smoothness of the target function. Our ideas are easily extended to solve both the pre--image problem and the out--of--sample extension problem, with a priori bounds on the growth of the function thus extended. \n\\end{abstract}\n\n\\bhag{Introduction}\nMachine learning is an active sub--field of Computer Science concerned with algorithmic development for learning and making predictions based on some given data, with a long list of applications that range from computational finance and advertisement, to information retrieval, to computer vision, to speech and handwriting recognition, and to structural health monitoring and medical diagnosis. In terms of function approximation, the data for learning and prediction can be formulated as $\\{({\\bf x}, f_{\\bf x})\\}$, obtained with an unknown probability distribution. Examples include: the Boston housing problem (of predicting the median price $f_{\\bf x}$ of a home based on some vector ${\\bf x}$ of 13 other attributes \\cite{vapnik2013nature}) and the flour market problem \\cite{tiao1989model, chakraborty1992forecasting} (that deals with the indices of wheat flour pricing in three major markets in the United States). For such problems, the objective is to predict the index $f_{\\bf x}$ in the next month, say, based on a vector ${\\bf x}$ of their values over the past few months. Other similar problems include the prediction of blood glucose level $f_{\\bf x}$ of a patient based on a vector ${\\bf x}$ of the previous few observed levels \\cite{sergei, mnpdiabetes}, and the prediction of box office receipts ($f_{\\bf x}$) on the date of release of a movie in preparation, based on a vector ${\\bf x}$ of the survey results about the movie \\cite{sharda2002forecasting}. 
It is pointed out in \\cite{multilayer, mauropap, compbio} that all pattern classification problems can also be viewed fruitfully as problems of function approximation. While allowing non--numeric input ${\\bf x}$ is a topic of ongoing research (e.g., \\cite{treepap}), we restrict our attention in this paper to the consideration of ${\\bf x}\\in{\\mathbb R}^D$, for some integer $D\\ge 1$.\n\n\nIn the following discussion, the first component ${\\bf x}$ is considered as input, while the second component $f_{\\bf x}$ is considered the output of the underlying process. The central problem is to estimate the conditional expectation of $f_{\\bf x}$ given ${\\bf x}$. Various statistical techniques and theoretical advances in this direction are well--known (see, for example, \\cite{vapnik1998statistical}). In the context of neural and radial--basis--function networks, an explicit formulation of the input\/output machines was pointed out in \\cite{girosi1990networks,girosi1995regularization}. More recently, the nature of deep learning as an input\/output process is formulated in the same way, as explained in \\cite{lecun2015deep, poggio_deep_net_2015}. To complement the statistical perspective and understand the theoretical capabilities of these processes, it is customary to think of the expected value of $f_{\\bf x}$, given ${\\bf x}$, as a function $f$ of ${\\bf x}$. The question of empirical estimation in this context is to carry out the approximation of $f$ given samples $\\{({\\bf x}, f({\\bf x}))\\}_{{\\bf x}\\in{\\mathcal C}}$, where ${\\mathcal C}$ is a finite \\emph{training data} set. In practice, because of the random nature of the data, it is possible that there are several pairs of the form $({\\bf x}, f_{\\bf x})$ in the data for the same values of ${\\bf x}$. In this case, a statistical scheme, the simplest being some kind of averaging of the values $f_{\\bf x}$, can be used to obtain a desired value $f({\\bf x})$ for the sample of $f$ at ${\\bf x}$, ${\\bf x}\\in{\\mathcal C}$. From this perspective, the problem of extending $f$ from the training data set ${\\mathcal C}$ to ${\\bf x}$ not in ${\\mathcal C}$ in machine learning is called the \\emph{generalization problem}.\n\nWe will illustrate this general line of ideas by using neural networks as an example. To motivate this idea, let us first recall a theorem originating with Kolmogorov and Arnold \\cite[Chapter~17, Theorem~1.1]{lorentz_advanced}. According to this theorem, there exist \\emph{universal} Lipschitz continuous functions $\\phi_1,\\cdots,\\phi_{2D+1}$ and \\emph{universal} scalars $\\lambda_1,\\cdots,\\lambda_D\\in (0,1)$, for which every continuous function $f : [0,1]^D\\to {\\mathbb R}$ can be written as \n\\begin{equation}\\label{kolmtheo}\nf({\\bf x})=\\sum_{j=1}^{2D+1}g\\left(\\sum_{k=1}^D \\lambda_k\\phi_j(x_k)\\right), \\qquad {\\bf x}=(x_1,\\cdots,x_D)\\in [0,1]^D,\n\\end{equation}\nwhere $g$ is a continuous function that depends on $f$. In other words, for a given $f$, only one function $g$ has to be determined to give the representation formula \\eref{kolmtheo} of $f$. \n\nA neural network, used as an input\/output machine, consists of an input layer, one or more hidden layers, and an output layer. Each hidden layer consists of a number of neurons arranged according to the network architecture. Each of these neurons has a local memory and performs a simple non--linear computation upon its input. The input layer fans out the input ${\\bf x}\\in{\\mathbb R}^D$ to the neurons at the first hidden layer. 
The output layer typically takes a linear combination of the outputs of the neurons at the last hidden layer. \nThe right hand side of \\eref{kolmtheo} is a neural network with two hidden layers. The first contains $2D+1$ neurons, where the $j$--th neuron computes the sum $\\sum_{k=1}^D \\lambda_k\\phi_j(x_k)$. The next hidden layer contains $2D+1$ neurons, each evaluating the function $g$ on the \noutput of the $j$--th neuron in the first hidden layer. The output layer takes the sum of the results as indicated in \\eref{kolmtheo}.\n\n\nFrom a practical point of view, such a network is clearly hard to construct, since only the existence of the functions $\\phi_j$ and $g$ is known, without a numerical procedure for computing these. In the early mathematical development of neural networks during the late 1980s and early 1990s, instead of finding these functions for the representation of a given continuous function $f$ in \\eref{kolmtheo}, the interest was to study the existence and characterization of \\emph{universal} functions $\\sigma :{\\mathbb R}\\to{\\mathbb R}$, called \\emph{activation functions} of the neural networks, such that each neuron evaluates the activation function upon an affine transform of its input, and the network is capable of approximating any desired real-valued continuous target function $f: K\\to{\\mathbb R}$ arbitrarily closely on $K$, where $K\\subset {\\mathbb R}^D$ is any compact set.\n\nFor example, a neural network with one hidden layer can be expressed as a function\n\\begin{equation}\\label{onehiddenlayer}\n\\mathcal{N}({\\bf x})=\\mathcal{N}_n(\\{{\\bf w}_k\\}, \\{a_k\\}, \\{b_k\\};{\\bf x})= \\sum_{k=1}^n a_k\\sigma({\\bf w}_k\\cdot {\\bf x} +b_k), \\qquad {\\bf x}\\in{\\mathbb R}^D.\n\\end{equation}\nHere, the hidden layer consists of $n$ neurons, each of which has a local memory. The local memory of the $k$--th neuron contains the \\emph{weights} ${\\bf w}_k\\in {\\mathbb R}^D$, and the \\emph{threshold} $b_k\\in{\\mathbb R}$. Upon receiving the input ${\\bf x}\\in{\\mathbb R}^D$ from the input layer, the $k$--th neuron evaluates $\\sigma({\\bf w}_k\\cdot{\\bf x}+b_k)$ as its output, where $\\sigma$ is a non--linear activation function. The output layer is just one circuit where the coefficients $\\{a_k\\}$ are stored in a local memory, and which evaluates the linear combination as indicated in \\eref{onehiddenlayer}. Training of this network in order to learn a function $f$ on a compact subset $K\\subset{\\mathbb R}^D$ to an accuracy of $\\epsilon>0$ involves finding the parameters $\\{a_k\\}$, $\\{{\\bf w}_k\\}$, $\\{b_k\\}$ so that \n\\begin{equation}\\label{universal appr} \n\\max_{{\\bf x}\\in K}|f({\\bf x})-\\mathcal{N}({\\bf x})|<\\epsilon.\n\\end{equation}\nThe most popular technique for doing this is the so--called back--propagation, which seeks to find these quantities by minimizing an error functional, usually with some regularization parameters. \nWe remark that the number $n$ of \\emph{neurons} in the approximant \\eref{onehiddenlayer} must increase if the tolerance $\\epsilon >0$ in the approximation\nof the target function $f$ is required to be smaller. \n\n\nFrom a theoretical perspective, the main attraction of neural networks with one hidden layer is their \\emph{universal approximation property} as formulated in \\eref{universal appr}, which overshadows the properties of their predecessors, namely, the perceptrons \\cite{minsky1988perceptrons}. 
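\n\nFor concreteness, the following is a minimal numerical sketch (ours, not part of the original constructions) of the one hidden layer network \\eref{onehiddenlayer} with a sigmoidal activation; the network size, the weights and the input below are illustrative placeholders, and in practice these parameters would be determined by a training procedure.\n\\begin{verbatim}
import numpy as np

def sigmoid(t):
    # a sigmoidal activation: tends to 1 as t -> +infinity, to 0 as t -> -infinity
    return 1.0 / (1.0 + np.exp(-t))

def shallow_net(x, W, b, a):
    # Evaluate N(x) = sum_k a_k * sigma(w_k . x + b_k), where row k of W is the
    # weight vector w_k, b_k the threshold and a_k the output coefficient of the
    # k-th neuron in the single hidden layer.
    return np.dot(a, sigmoid(W @ x + b))

# illustrative (untrained) parameters: n = 4 neurons, input dimension D = 3
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
a = rng.standard_normal(4)
print(shallow_net(np.array([0.1, -0.2, 0.3]), W, b, a))
\\end{verbatim}\nOther choices of the activation function $\\sigma$ can be substituted in this sketch without changing its structure.\n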
In particular, the question of finding sufficient conditions on the activation function $\\sigma$ that ensure the universal approximation property was investigated in great detail by many authors, with emphasis on the most popular \\emph{sigmoidal function}, defined by the property $\\sigma(t)\\to 1$ for $t\\to\\infty$ and $\\sigma(t)\\to 0$ for $t\\to -\\infty$. For example, Funahashi \\cite{funahashi1989} applied some discretization of an integral formula from \\cite{irie1988} to prove the universal approximation property for some sigmoidal function $\\sigma$. A similar theorem was proved by Hornik, Stinchcombe, and White \\cite{hornik1989} by using the Stone--Weierstrass theorem, and another by Cybenko \\cite{cybenko1989} by applying the Hahn--Banach and Riesz Representation theorems. A constructive proof via approximation by ridge functions was given in our paper \\cite{chuili1992}, with an algorithm for implementation presented in our follow-up work \\cite{chui1993realization}. A complete characterization of the activation functions for which the universal approximation property holds was given later in \\cite{mhasmich, leshnolinpinkus}. \n\nHowever, for neural networks with one hidden layer, one of the severe limitations to applying training algorithms based on optimization, such as back--propagation or those proposed in the book \\cite{vapnik1998statistical} of Vapnik, is that it is necessary to know the number of neurons in $\\mathcal{N}$ in advance. \nTherefore, one major problem in the 1990s, known as the \\emph{complexity problem}, was to estimate the number of neurons required to approximate a function to a desired accuracy. In practice, this gives rise to a trade-off: to achieve a good approximation, one needs to have a large number of neurons, which makes the implementation of the training algorithm harder.\n\nIn this regard, nearly a century of research in approximation theory suggests that the higher the order of smoothness of the target function, the smaller the number of neurons needed to achieve the desired accuracy. There are many different definitions of smoothness that give rise to different estimates. For example, under the condition that the Fourier transform of the target function $f$ satisfies $\\displaystyle\\int_{{\\mathbb R}^D} |\\bs{\\omega}\\hat{f}(\\bs{\\omega})|d\\bs{\\omega}<\\infty$, Barron \\cite{barron1993} proved the existence of a neural network with ${\\cal O}(\\epsilon^{-2})$ neurons that gives an $L^2([0,1]^D)$ error of ${\\cal O}(\\epsilon)$. While it is interesting to note that this number of neurons is essentially independent of the dimension $D$, the constants involved in the ${\\cal O}$ term as well as the number of derivatives needed to ensure the condition on the target function may increase with $D$. Several authors have subsequently improved upon such results under various conditions on the activation function as well as the target function so as to ensure that the constants depend polynomially on $D$ (e.g., \\cite{kurkova1, kurkova2, tractable} and references therein).\n\nThe most commonly understood definition of smoothness is just the number of derivatives of the target function. 
It is well--known from the theory of $n$-widths that if $r\\ge 1$ is an integer, and the only a priori information assumed on the unknown target function is that it is an $r$--times continuously differentiable function, then a stable and uniform approximation to within $\\epsilon$ by neural networks must have at least a constant multiple of $\\epsilon^{-D\/r}$ neurons. \nIn \\cite{optneur}, we gave an explicit construction for a neural network that achieves the accuracy of $\\epsilon$ using ${\\cal O}(\\epsilon^{-D\/r})$ neurons arranged in a single hidden layer. It follows that this suffers from a curse of dimensionality, in that the number of neurons increases exponentially with the input dimension $D$. Clearly, if the smoothness $r$ of the function increases linearly with $D$, as it has to in order to satisfy the condition in \\cite{barron1993}, then this bound is also ``dimension independent''.\n\nWhile this is definitely unavoidable for neural networks with one hidden layer, the most natural way out is to achieve local approximation; i.e., given an input ${\\bf x}$, construct a network with a uniformly bounded number of neurons that approximates the target function with the optimal rate of approximation near the point ${\\bf x}$, preferably using the values of the function also in a neighborhood of ${\\bf x}$. Unfortunately, this can never be achieved, as we proved in \n\\cite{chui1994neural}. Furthermore, we have proved in \\cite{chui1996limitations} that even if we allow each neuron to evaluate its own activation function, this local approximation fails. Therefore, the only way out is to use a neural network with more than one hidden layer, called a \\emph{deep net} (for deep neural network). Indeed, local approximation can be achieved by a deep net as proved in our papers \\cite{multilayer, mhaskar1993neural}. In this regard, it is of interest to point out that an adaptive version of \\cite{multilayer, mhaskar1993neural} was derived in \\cite{lermont} for prediction of time series, yielding as much as 150\\% improvement upon the state--of--the--art at that time, in the study of the flour market problem. \n\nOf course, the curse of dimensionality is inherent to the problem itself, whether with one or more hidden layers. Thus, while it is possible to construct a deep net to approximate a function at each point arbitrarily closely by using a uniformly bounded number of neurons, the uniform approximation on an entire compact set, such as a cube, would still require an approximation at a number of points in the cube, and this number increases exponentially with the input dimension. Equivalently, the effective number of neurons for approximation on the entire cube still increases exponentially with the input dimension.\n\nIn addition to the high dimensionality, another difficulty in solving the function approximation problem is that the data may be not just high dimensional, but also unstructured and sparse. A relatively recent idea, which has been found very useful in applications too numerous to list exhaustively, is to consider the points ${\\bf x}$ as being sampled from an unknown, low dimensional sub--manifold ${\\mathbb X}$ of the ambient high dimensional space ${\\mathbb R}^D$. The understanding of the geometry of ${\\mathbb X}$ is the subject of the bulk of modern research in the area of diffusion geometry. An introduction to this subject can be found in the special issue \\cite{achaspissue} of Applied and Computational Harmonic Analysis. 
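\n\nAs a concrete (and purely hypothetical) illustration of this setting, the following sketch samples training data from a one dimensional curve embedded in a high dimensional ambient space, so that $d=1$ while $D$ is large; the particular curve, the embedding and the target function are our illustrative choices and are not taken from the constructions of this paper.\n\\begin{verbatim}
import numpy as np

# Sample M points from a d = 1 dimensional manifold (a helix-like curve)
# embedded in an ambient space of dimension D = 50, together with the
# values of a target function f at those points.
rng = np.random.default_rng(1)
D, M = 50, 200

t = rng.uniform(0.0, 2.0 * np.pi, size=M)                    # intrinsic parameter
curve3d = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)  # a curve in R^3

# Embed R^3 isometrically into R^D using a random orthonormal frame.
Q, _ = np.linalg.qr(rng.standard_normal((D, 3)))
X = curve3d @ Q.T            # rows are the training points x_j in R^D
f = np.sin(2.0 * t)          # samples f(x_j), j = 1, ..., M

print(X.shape, f.shape)      # (200, 50) (200,)
\\end{verbatim}\nData of this type, given only as points in ${\\mathbb R}^D$ together with function values, are the input to the constructions discussed below.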
\nThe basic idea is to construct the so--called diffusion matrix from the data, and use its eigen--decomposition for finding local coordinate charts and other useful aspects of the manifold. The convergence of the eigen--decomposition of the matrices to that of the Laplace--Beltrami and other differential operators on the manifold is discussed, for example, in \\cite{belkinfound, lafon, singer}. It is shown in \\cite{jones2008parameter, jones2010universal} that some of the eigenfunctions yield a local coordinate chart on the manifold. In the context of deep learning, this idea is explored as a function approximation problem in \\cite{cohen_diffusion_net2015}, where a deep net is developed in order to learn the coordinate system given by the eigenfunctions. \n\nOn the other hand, while much of the research in this direction is focused on understanding the data geometry, the theoretical foundations for the problems of function approximation and harmonic analysis on such a data--defined manifold are developed extensively in \\cite{mauropap, frankbern, modlpmz, eignet, heatkernframe, chuiinterp}. The theory is developed more recently for kernel construction on directed graphs and analysis of functions on changing data in our paper \\cite{tauberian}. However, a drawback of the approach based on data--defined manifolds, known as the out--of--sample extension problem, is that since the diffusion matrix is constructed entirely using the available data, the whole process must be done again if new data become available. A popular idea is then to extend the eigen--functions to the ambient space by using the so--called Nystr\\\"om extension \\cite{coifman2006geometric}. \n\nThe objective of the present paper is to describe a deep learning approach to the problem of function approximation, using three groups of networks in the deep net. The lowest layer accomplishes dimensionality reduction by learning the local coordinate charts on the unknown manifold \\textbf{without using any eigen--decomposition}. Having found the local coordinate system, the problem is reduced to the classical problem of approximating a function on a cube in a relatively low dimensional Euclidean space. For the next two layers, we may now apply the powerful techniques from approximation theory to approximate the target function $f$, given the samples on the training data set ${\\mathcal C}$. We describe two approaches to construct the basis functions using multi--layered neural networks, and to construct other networks to use these basis functions in the next layer to accomplish the desired function approximation. \n\nWe summarize some of the highlights of our paper. \n\\begin{itemize}\n\\item We give a very simple learning method for learning the local coordinate chart near each point. The subsequent approximation process is then entirely local to each coordinate patch.\n\\item Our method allows us to solve the pre--image problem easily; i.e., to generate a point on the manifold corresponding to a given local coordinate description.\n\\item The learning method itself \\textbf{does not involve} any optimization based technique, except possibly for reducing the noise in the values of the function itself.\n\\item We provide optimal error bounds on approximation based on the smoothness of the function, while the method itself \\emph{does not require a priori knowledge of such smoothness.}\n\\item Our methods can easily solve the \nout--of--sample extension problem. 
Unlike the Nystr\\\"om extension process, our method does not require any elaborate construction of kernels defined on the ambient space and commuting with certain differential operators on the unknown manifold.\n\\item Our method is designed to control the growth of the out--of--sample extension in a tubular neighborhood of ${\\mathbb X}$, and is local to each coordinate patch.\n\\end{itemize}\n\n This paper is organized as follows. In Section~\\ref{mainsect}, we describe the main ideas in our approach. The local coordinate system is described in detail in Section~\\ref{loccordsect}. Having thus found a local coordinate chart around the input, the problem of function approximation reduces to the classical one. In Section~\\ref{locbasissect}, we demonstrate how the popular basis functions used in this theory can be implemented using neural networks with one or more hidden layers. The function approximation methods which work with unstructured data without using optimization are described in Section~\\ref{approxsect}. In Section~\\ref{extsect}, we explain how our method can be used to solve both the pre--image problem and the out--of--sample extension problem.\n\n\n\n\\bhag{Main ideas and results}\\label{mainsect}\nThe purpose of this paper is to develop a deep learning algorithm to learn a function $f:{\\mathbb X}\\to{\\mathbb R}$, where ${\\mathbb X}$ is a \n$d$ dimensional compact Riemannian sub--manifold of a Euclidean space ${\\mathbb R}^D$, with $d\\ll D$, given \\emph{training data} of the form $\\{({\\bf x}_j, f({\\bf x}_j))\\}_{j=1}^M$, ${\\bf x}_j\\in{\\mathbb X}$. It is important to note that ${\\mathbb X}$ itself is not known in advance; the points ${\\bf x}_j$ are known only as $D$--dimensional vectors, presumed to lie on ${\\mathbb X}$. In Sub--section~\\ref{ideasect}, we explain our main idea briefly. In Sub--section~\\ref{loccordsect}, we derive a simple construction of the local coordinate chart for ${\\mathbb X}$. In Sub--section~\\ref{locbasissect}, we describe the construction of a neural network with one or more hidden layers to implement two of the basis functions used commonly in function approximation. While the well known classical approximation algorithms require a specific placement of the training data, one has no control on the location of the data in the current problem. In Section~\\ref{approxsect}, we give algorithms suitable for the purpose of solving this problem.\n\n\n\\subsection{Outline of the main idea}\\label{ideasect}\n\nOur approach is the following.\n\\begin{enumerate}\n\\item \\label{loccoord} ${\\mathbb X}$ is a finite union of local coordinate neighborhoods, and ${\\bf x}$ belongs to one of them, say $\\mathbb{U}$. We find a local coordinate system for this neighborhood in terms of Euclidean distances on ${\\mathbb R}^D$, say $\\Phi : \\mathbb{U}\\to [-1,1]^d$, where $d$ is the dimension of the manifold. Let ${\\bf y}=\\Phi({\\bf x})$, and with a relabeling for notational convenience, $\\{{\\bf x}_j\\}_{j=1}^K$ be the points in $\\mathbb{U}$, ${\\bf y}_j=\\Phi({\\bf x}_j)$. This way, we have reduced the problem to approximating $g=f\\circ\\Phi :[-1,1]^d\\to{\\mathbb R}$ at ${\\bf y}$, given the values\n$\\{({\\bf y}_j,g({\\bf y}_j))\\}_{j=1}^K$, where $g({\\bf y}_j)=f({\\bf x}_j)$. We note that $\\{{\\bf y}_j\\}$ is a subset of the unit cube of low dimensional Euclidean space, representing a local coordinate patch on ${\\mathbb X}$. 
Thus, the problem of approximation of $f$ on this patch is reduced to that of approximation of $g$, a well studied classical approximation problem.\n\\item \\label{spline} We will summarize the solution to this problem using neural networks with one or more hidden layers, e.g., an implementation of multivariate tensor product spline approximation using a multi--layered neural network. \n\\end{enumerate}\nThus, our deep learning network will consist of three main groups of layers.\n\\begin{enumerate}\n\\item The bottom layer receives the input ${\\bf x}$, figures out which of the points ${\\bf x}_j$ are in the coordinate neighborhood of ${\\bf x}$, and computes the local coordinates ${\\bf y}$, ${\\bf y}_j$.\n\\item The next several layers compute the local basis functions necessary for the approximation, for example, the $B$--splines and their translates using the multi--layered neural network as in \\cite{multilayer}.\n\\item The last layer receives the data $\\{({\\bf y}_j,g({\\bf y}_j))\\}_{j=1}^K$, and computes the approximation described in Step~\\ref{spline} above.\n\\end{enumerate}\n\n\\subsection{Local coordinate learning}\\label{loccordsect}\nWe assume that $1\\le d\\le D$ are integers, ${\\mathbb X}$ is a \n$d$ dimensional smooth, compact, connected, Riemannian sub--manifold of a Euclidean space ${\\mathbb R}^D$, with geodesic distance $\\rho$. \n\nBefore we discuss our own construction of a local coordinate chart on ${\\mathbb X}$, we wish to motivate the work by describing a result from \\cite{jones2010universal}. Let $\\{\\lambda_k^2\\}_{k=0}^\\infty$ be the sequence of eigenvalues of the (negative of the) Laplace--Beltrami operator on ${\\mathbb X}$, and for each $k\\ge 0$, $\\phi_k$ be the eigenfunction corresponding to the eigenvalue $\\lambda_k^2$.\nWe define a formal ``heat kernel'' by\n\\begin{equation}\\label{heatkerndef}\nK_t({\\bf x},{\\bf y})=\\sum_{k=0}^\\infty \\exp(-\\lambda_k^2t)\\phi_k({\\bf x})\\phi_k({\\bf y}).\n\\end{equation} \nThe following result is a paraphrasing of the heat triangulation theorem proved in \\cite[Theorem~2.2.7]{jones2010universal} under weaker assumptions on ${\\mathbb X}$.\n\n\\begin{theorem}\\label{jmstheo}{\\rm (cf. \\cite[Theorem~2.2.7]{jones2010universal})}\n Let ${\\bf x}_0^*\\in{\\mathbb X}$. There exist constants $R>0$, $c_1,\\cdots, c_6>0$ depending on ${\\bf x}_0^*$ with the following property. Let $\\mathbf{p}_1,\\cdots,\\mathbf{p}_d$ be $d$ linearly independent vectors in ${\\mathbb R}^D$, and ${\\bf x}_j^*\\in{\\mathbb X}$ be chosen so that ${\\bf x}_j^*-{\\bf x}_0^*$ is in the direction of $\\mathbf{p}_j$, $j=1,\\cdots,d$, and \n$$\nc_1R\\le \\rho({\\bf x}_j^*,{\\bf x}_0^*)\\le c_2R, \\qquad j=1,\\cdots,d,\n$$ \nand $t=c_3R^2$. 
Let $B\\subset{\\mathbb X}$ be the geodesic ball of radius $c_4R$, centered at ${\\bf x}_0^*$, and \n\\begin{equation}\\label{jmsphidef}\n\\Phi_{\\mbox{jms}}({\\bf x})=R^d(K_t({\\bf x},{\\bf x}_1^*), \\cdots, K_t({\\bf x},{\\bf x}_d^*)), \\qquad {\\bf x}\\in B.\n\\end{equation}\nThen\n\\begin{equation}\\label{jmsdistpreserve}\n\\frac{c_5}{R}\\rho({\\bf x}_1,{\\bf x}_2)\\le \\|\\Phi_{\\mbox{jms}}({\\bf x}_1)-\\Phi_{\\mbox{jms}}({\\bf x}_2)\\|_d\\le \n\\frac{c_6}{R}\\rho({\\bf x}_1,{\\bf x}_2), \\qquad {\\bf x}_1,{\\bf x}_2\\in B.\n\\end{equation}\n\\end{theorem}\nSince the paper \\cite{jones2010universal} deals with a very general manifold, the mapping $\\Phi_{\\mbox{jms}}$ is not claimed to be a diffeomorphism, although it is obviously one--one on $B$.\n\nWe note that even in the simple case of a Euclidean sphere, an explicit expression for the heat kernel is not known. In practice, the heat kernel has to be approximated using appropriate Gaussian networks \\cite{lafon}. In this section, we aim to obtain a local coordinate chart that is computed directly in terms of Euclidean distances on ${\\mathbb R}^D$, and depends upon $d+2$ trainable \nparameters. The construction of this chart constitutes the first hidden layer of our deep learning process. As explained in the introduction, once this chart is in place, the question of function extension on the manifold reduces locally to the well studied problem of function extension on a $d$ dimensional unit cube. \n\nTo describe our constructions, we first develop some notation.\n\nIn this section, it is convenient to use the notation ${\\bf x}=(x^1,\\cdots,x^D)\\in{\\mathbb R}^D$ rather than ${\\bf x}=(x_1,\\cdots,x_D)$, which we will use in the rest of the sections. If $1\\le d\\le D$ is an integer, and ${\\bf x}\\in{\\mathbb R}^d$, $\\|{\\bf x}\\|_d$ denotes the Euclidean norm of ${\\bf x}$. If ${\\bf x}\\in{\\mathbb R}^D$, we will write $\\pi_c({\\bf x})=(x^1,\\cdots,x^d)$, $\\|{\\bf x}\\|_d=\\|\\pi_c({\\bf x})\\|_d$. If ${\\bf x}\\in{\\mathbb R}^d$, $r>0$,\n$$\nB_d({\\bf x},r)=\\{{\\bf y}\\in{\\mathbb R}^d : \\|{\\bf x}-{\\bf y}\\|_d\\le r\\}.\n$$\n\nThere exists $\\delta^*>0$ with the following properties. The manifold is covered by finitely many geodesic balls such that for the center ${\\bf x}_0^*\\in{\\mathbb X}$ of any of these balls, there exists a diffeomorphism, namely, the exponential coordinate map $u=(u^1,\\cdots,u^D)$ from $B_d(0,\\delta^*)$ to the geodesic ball around ${\\bf x}_0^*=u(0)$ \\cite[p.~65]{docarmo_riemannian}. If $J$ is the Jacobian matrix for $u$, given by $J_{i,j}({\\bf y})=D_iu^j({\\bf y})$, ${\\bf y}\\in B_d(0,\\delta^*)$, then \n\\begin{equation}\\label{finaljacobiatzero}\nJ(0)=[I_d|0_{d,D-d}].\n\\end{equation}\nFurther, there exists $\\kappa>0$ (independent of ${\\bf x}_0^*$) such that\n\\begin{equation}\\label{jacobimodcont}\n\\|J(\\mathbf{q})-J(0)\\|\\le \\kappa\\|\\mathbf{q}\\|_d, \\qquad \\mathbf{q}\\in B_d(0,\\delta^*).\n\\end{equation}\nLet $\\eta^*:=\\min(\\delta^*, 1\/(2\\kappa))$. 
Then \\eref{jacobimodcont} implies that \n\\begin{equation}\\label{jacobinormest}\n1\/2\\le 1-\\kappa\\|\\mathbf{q}\\|_d\\le \\|J(\\mathbf{q})\\|\\le 1+\\kappa\\|\\mathbf{q}\\|_d\\le 2, \\qquad \\mathbf{q}\\in B_d(0,\\eta^*).\n\\end{equation}\nIn turn, this leads to\n\\begin{equation}\\label{rhoeuccomp1}\n(1\/2)\\rho(u(\\mathbf{p}),u(\\mathbf{q}))\\le \\|\\mathbf{p}-\\mathbf{q}\\|_d\\le 2\\rho(u(\\mathbf{p}), u(\\mathbf{q})), \\qquad \\mathbf{p},\\mathbf{q}\\in B_d(0,\\eta^*).\n\\end{equation}\n\n\nLet ${\\bf x}_\\ell^*=u(\\mathbf{q}_\\ell)$, $\\ell=1,\\cdots,d$, be chosen with the following properties:\n\\begin{equation}\\label{nbdcond}\n\\|\\mathbf{q}_\\ell\\|_d\\le \\eta^*, \\qquad \\ell=1,\\cdots,d,\n\\end{equation}\nand, with the matrix function $U$ defined by \n\\begin{equation}\\label{umatrixdef}\nU_{i,j}(\\mathbf{q})=u^i(\\mathbf{q})-({\\bf x}_j^*)^i,\n\\end{equation}\n we have\n\\begin{equation}\\label{indepcond}\n\\|J(0)U(0){\\bf y}\\|_d\\ge \\gamma>0, \\qquad \\|{\\bf y}\\|_d= 1.\n\\end{equation}\nAny set $\\{{\\bf x}_\\ell^*\\}$ with these properties will be called \\emph{coordinate stars} around ${\\bf x}_0^*$. We note that the matrix $J(0)U(0)$ has columns given by $\\pi_c({\\bf x}_0^*-{\\bf x}_j^*)$, $j=1,\\cdots,d$, and hence, can be computed without reference to the map $u$. Let \n\\begin{equation}\\label{betastardef}\n\\beta^* := (1\/2)\\min\\left(\\frac{1}{2\\kappa}, \\delta^*,\\frac{\\gamma}{8\\sqrt{d}}\\right).\n\\end{equation} \n\n\\begin{theorem}\\label{loccordtheo}\nLet $\\Psi(\\mathbf{q}):=(\\|u(\\mathbf{q})-u(\\mathbf{q}_\\ell)\\|^2_D)_{\\ell=1}^d\\in{\\mathbb R}^d$. Then \\\\\n{\\rm (a)} $\\Psi$ is a diffeomorphism on $B_d(0,2\\beta^*)$. If $\\mathbf{p}, \\mathbf{q}\\in B_d(0,2\\beta^*)$, ${\\bf x}=u(\\mathbf{p})$, ${\\bf y}=u(\\mathbf{q})$, then\n\\begin{equation}\\label{distortest}\n\\frac{\\gamma}{2}\\rho({\\bf x},{\\bf y})\\le \\|\\Psi(\\mathbf{p})-\\Psi(\\mathbf{q})\\|_d \\le 32\\sqrt{d}\\eta^*\\rho({\\bf x},{\\bf y}).\n\\end{equation}\n{\\rm (b)} The function $\\Psi$ is a diffeomorphism from $B_d(0,\\beta^*)$ onto $B_d(\\Psi(0),\\beta^*)$. \n \\end{theorem}\n \n\\begin{rem}\\label{maptocuberem}\n{\\rm\nLet $\\mathbb{B}=u(B_d(0,\\beta^*))\\subset {\\mathbb X}$ be a geodesic ball around ${\\bf x}_0^*$. For ${\\bf x}\\in \\mathbb{B}$, we define\n$$\n\\phi({\\bf x})=\\Psi(u^{-1}({\\bf x}))=(\\|{\\bf x}-{\\bf x}_\\ell^*\\|_D^2).\n$$\nThen Theorem~\\ref{loccordtheo}(b) shows that \n$\\phi$ is a diffeomorphism from $\\mathbb{B}$ onto $B_d(\\Psi(0),\\beta^*)$. Since $\\Psi(0)=(\\|{\\bf x}_0^*-{\\bf x}_\\ell^*\\|_D^2)$, \n$$\n\\Phi({\\bf x})=\\frac{\\sqrt{d}}{\\beta^*}(\\phi({\\bf x})-\\Psi(0)), \\qquad {\\bf x}\\in \\mathbb{B}\n$$\nmaps $\\mathbb{B}$ diffeomorphically onto $B_d(0,\\sqrt{d})\\supset [-1,1]^d$. Let $\\mathbb{U}=\\Phi^{-1}([-1,1]^d)$. Then $\\mathbb{U}$ is a neighborhood of ${\\bf x}_0^*$ and $\\Phi$ maps $\\mathbb{U}$ diffeomorphically onto $[-1,1]^d$. We observe that ${\\mathbb X}$ is a union of finitely many neighborhoods of the form $\\mathbb{U}$, so that any ${\\bf x}\\in{\\mathbb X}$ belongs to at least one such neighborhood. Moreover, $\\Phi({\\bf x})$ can be computed entirely from the description of ${\\bf x}$ in terms of its $D$--dimensional coordinates. \\hfill$\\Box$\\par\\medskip\n}\n\\end{rem}\n\n\\begin{rem}\\label{trainrem}\n{\\rm The trainable parameters are thus $\\beta^*$, and the points ${\\bf x}_0^*, \\cdots, {\\bf x}_d^*$. 
Since $\\|J(0)\\|=1$, the condition \\eref{indepcond} is satisfied if ${\\bf x}_\\ell^*-{\\bf x}_0^*$ are along linearly independent directions as in Theorem~\\ref{jmstheo}.\n\\hfill$\\Box$\\par\\medskip}\n\\end{rem}\n\n\\begin{rem}\\label{networkimprem}\n{\\rm\nSince the mapping $\\Phi$ in Remark~\\ref{maptocuberem} is a quadratic polynomial in ${\\bf x}$, it can be implemented as a neural network with a single hidden layer using the activation function given in \\eref{requdef} as described in Sub--Section~\\ref{polysubsect}.\n}\n\\end{rem}\n\n\\begin{uda}\\label{helixexample}\n{\\rm\nLet $0