diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkwsc" "b/data_all_eng_slimpj/shuffled/split2/finalzzkwsc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkwsc" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\noindent This paper considers a sample-selection model \\citep[e.g.,][]{Heckman79} given by\n\\begin{eqnarray}\nY^* &=&\\theta_0+\\bm{X}^{\\top}\\bm{\\beta}_0+U,\\label{outcome}\\\\\nD &=& 1\\left\\{\\bm{Z}^{\\top}\\bm{\\gamma}_0\\ge V\\right\\},\\label{selection}\\\\\nY &=& DY^*,\\label{observation}\n\\end{eqnarray}\nwhere $[\\begin{array}{cccc} D & \\bm{X}^{\\top} & \\bm{Z}^{\\top} & Y\\end{array}]$ is an observed random vector, \nwhere \n$[\\begin{array}{cc} U & V \\end{array}]$ is an unobserved random vector such that $E\\left[U^2\\right]<\\infty$ and \n$E[U|\\bm{X}]=0$ and $E[U|\\bm{Z}]=E[U]$ almost surely. Equations~(\\ref{outcome}) and \n(\\ref{selection}) are typically referred to as the {\\it outcome} and {\\it selection} equations, respectively. \nVariants of the model given in (\\ref{outcome})--(\\ref{observation}) have been considered at least since the \ncontributions of \\citet{Gronau74,Heckman74} and \\citet{Lewis74}. These authors were primarily concerned with the \nissue of selectivity bias in empirical analyses of individual labour-force participation decisions, particularly \nfor women. This bias, which is present in least-squares estimates of the parameter \n$[\\begin{array}{cc} \\theta_0 & \\bm{\\beta}^{\\top}_0\\end{array}]$ appearing above in (\\ref{outcome}), arises from \nthe assumption that the observed wages of workers are affected by the self selection of \nthose workers into the workforce. In particular, one can only observe a wage that exceeds the reservation wage \nof the individual in question. In terms of the model given in (\\ref{outcome})--(\\ref{observation}), the wage \noffer is represented by the variable $Y^*$, while the observed wage is denoted by $Y$. The employment status of \nthe individual in question is denoted by the binary variable $D$, which takes a value of one if the unobserved \ndifference $\\bm{Z}^{\\top}\\bm{\\gamma}_0-V$ between the wage offer and the reservation wage is positive; $D$ is \notherwise equal to zero. Observed variables influencing individual participation decisions are collected in the \nvector $\\bm{Z}$, while observed determinants of individual wage offers are collected in $\\bm{X}$. Variants of the \nsample-selection model appearing in (\\ref{outcome})--(\\ref{observation}) have been found useful for a wide variety of \napplied problems apart from the analysis of indidivual labour supply decisions \\citep[e.g.,][]{Vella98}. Relatively \nrecent economic applications include those of \\citet{HelpmanMelitzRubinstein08}, \\citet{MulliganRubinstein08} and \n\\citet{JimenezOngenaPeydroSaurina14}.\n\nThis paper focuses on statistical inference regarding the intercept $\\theta_0$ appearing above in (\\ref{outcome}). \nThe intercept in the outcome equation is often of inherent interest in various applications of the model given by \n(\\ref{outcome})--(\\ref{observation}). For example, suppose that (\\ref{selection}) accurately describes the \nselection of individuals into some treatment group. 
In this case, the difference between the intercepts in the \noutcome equations for treated and non-treated individuals may be interpreted as the causal effect of treatment \nwhen selection to treatment is mean independent of the unobservable $U$ in the outcome equation \\citep[e.g.,][p. 500]{AndrewsSchafgans98}. The intercept in the outcome \nequation is similarly crucial in computing the average wage difference in problems where the sample-selection \nmodel given above is applied to the analysis of wage differences between workers in two different socioeconomic \ngroups, or between unionized and non-unionized workers \n\\citep[e.g.,][]{Oaxaca73,Lewis86,Heckman90}. Finally, the intercept in the outcome equation permits the \nevaluation of the net benefits of a social program in terms of the differences between the observed \noutcomes of participants and their counterfactual expected outcomes had they chosen not to participate \n\\citep[e.g.,][]{HeckmanRobb85}. \n\nEarly applied work generally proceeded from the assumption that the unobservables \n$[\\begin{array}{cc} U & V\\end{array}]$ appearing above in (\\ref{outcome})--(\\ref{selection}) are bivariate normal \nmean-zero with an unknown covariance matrix and independent of \n$[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]$. This assumption in turn allowed for the \nestimation of the parameters appearing in (\\ref{outcome})--(\\ref{selection}) via the method of maximum likelihood \nor the related two-step procedure of \\citet{Heckman74,Heckman76}. These estimates, however, are generally \ninconsistent under departures from the assumed bivariate normality of $[\\begin{array}{cc} U & V\\end{array}]$ \n\\citep[e.g.,][]{ArabmazarSchmidt82,Goldberger83,Schafgans04}. The desirability of not imposing a parametric \nspecification on the joint distribution of the unobservables in the outcome and selection equations has led in turn \nto the development of distribution-free methods of estimating the parameters appearing above in \n(\\ref{outcome})--(\\ref{selection}). Distribution-free methods of estimating the intercept $\\theta_0$ in \n(\\ref{outcome}) include the proposals of \\citet{GallantNychka87,Heckman90,AndrewsSchafgans98} and \\citet{Lewbel07}. \n\nEstimators of the intercept of the outcome equation implemented by distribution-free procedures have \nlarge-sample behaviours that vary depending on the extent of endogeneity in the selection mechanism, \ni.e., on the nature and extent of any dependence between the random variables $U$ and $V$ appearing above in (\\ref{outcome}) \nand (\\ref{selection}), respectively. Given that these features of the joint distribution of \n$[\\begin{array}{cc} U & V \\end{array}]$ are typically unknown \nin empirical practice, the dependence of the asymptotic behaviour of intercept estimators on these features potentially \ncomplicates statistical inference regarding $\\theta_0$. This issue is easily and starkly illustrated in the case of \nordinary least squares (OLS). In particular, suppose that selection in the model is based strictly on observables, which \nis equivalent to assuming that the unobservable $U$ appearing in (\\ref{outcome}) and the selection indicator $D$ are conditionally \nmean independent given $\\bm{X}$ and $\\bm{Z}$, i.e., $P\\left[E[U|D=1,\\bm{X},\\bm{Z}]=E[U|D=0,\\bm{X},\\bm{Z}]\\right]=1$. 
In this case $\\theta_0$ can be \nconsistently estimated at a parametric rate with no additional assumptions \nimposed on the joint distribution of $[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$ by applying OLS \nto the outcome equation using only those observations for which $D=1$. On the other hand, the OLS estimate of $\\theta_0$ \nobtained in this way is inconsistent if the difference $1-P\\left[E[U|D=1,\\bm{X},\\bm{Z}]=E[U|D=0,\\bm{X},\\bm{Z}]\\right]$ \nis positive, even if arbitrarily small. It follows that OLS generates inferences regarding $\\theta_0$ that vary drastically with respect \nto the degree to which $E[U|D=1,\\bm{X},\\bm{Z}]$ may differ from $E[U|D=0,\\bm{X},\\bm{Z}]$.\n\nThis paper develops a distribution-free estimator of the intercept $\\theta_0$ in the outcome equation that is \nconsistent and asymptotically normal with a rate of convergence that is \nthe same regardless of the joint distribution of $[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$. I show that there exists an \nimplementation of the proposed estimator of $\\theta_0$ that converges uniformly at the rate $n^{-p\/(2p+1)}$, where $n$ denotes the sample size and where $p\\ge 2$ is an integer \nthat indexes the strength of certain smoothness assumptions described below. The uniformity of this convergence involves uniformity over a class of joint distributions of \n$[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$ satisfying necessary conditions for the identification of $\\theta_0$. In other words, the estimator developed below is {\\it adaptive} to the nature of \nselection in the model. \n\nThis paper also shows that the uniform $n^{-p\/(2p+1)}$-rate attainable by the proposed estimator is in fact \nthe optimal rate of convergence of an estimator of $\\theta_0$ in terms of a minimax criterion. It follows that \nthe proposed estimator may be implemented in such a way as to converge in probability to \n$\\theta_0$ at the fastest possible minimax rate.\n \nThe estimator developed below differs from earlier proposals of \\citet{Heckman90} and \\citet{AndrewsSchafgans98} \nthat involve the {\\it rate-adaptive} estimation of the intercept in the outcome equation. These proposals also \ninvolve estimators that are consistent and asymptotically normal regardless of the extent to which selection is \nendogenous, but converge to the limiting normal distribution at unknown rates; in particular, see \\citet[Theorems~1--2]{SchafgansZindeWalsh02} for the \nestimator of \\citet{Heckman90} and \\citet[Theorems~2, 3, 5 and 5*]{AndrewsSchafgans98}. \n\nThe estimator developed below also differs from estimators of $\\theta_0$ that take the form of averages weighted by the \nreciprocal of an estimate of the density of the selection index $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ appearing above in (\\ref{selection}). Such \nestimators \\citep[e.g.,][]{Lewbel07}, in common with the estimators of \\citet{Heckman90} and \n\\citet{AndrewsSchafgans98}, are known to converge generically at unknown rates. In addition, estimators \ntaking the form of inverse density-weighted averages may have sampling distributions that are not even \nasymptotically normal \\citep{KhanTamer10,KhanNekipelov13,ChaudhuriHill16}. 
In general, the estimators of \n\\citet{Heckman90} and \\citet{AndrewsSchafgans98}, as well as estimators in the class of \ninverse density-weighted averages, converge at rates that depend critically on conditions involving the relative tail \nthicknesses of the distributions of the selection index $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ and of the latent selection \nvariable $V$ appearing in (\\ref{selection}) \\citep{KhanTamer10}. These conditions may be difficult to verify in \napplications. By contrast, under conditions implied by the \nidentification of $\\theta_0$, the estimator developed below converges at a known rate to a normal distribution uniformly over the underlying parameter space, regardless of the \nrelative tail behaviours of $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ and $V$. This facilitates statistical inference regarding $\\theta_0$.\n\nThe remainder of this paper proceeds as follows. The following section discusses identification of the \nintercept parameter in the outcome equation and presents the new estimator along with its first-order asymptotic \nproperties. Section~\\ref{opt} derives the minimax rate optimality of the new estimator. Section~\\ref{mc} \npresents the results of simulation experiments that investigate the behaviour in finite samples of the proposed \nestimator in relation to other methods. Section~\\ref{ee} considers an application of the new \nestimator to the analysis of gender wage gaps in Malaysia. Section~\\ref{concl} concludes. Proofs of \nall theoretical results are collected in the appendix.\n\n\\section{The New Estimator}\\label{est}\n\n\\noindent This section presents the new estimator of the intercept $\\theta_0$ appearing in (\\ref{outcome}) \nand describes its asymptotic behaviour to first order. Let $\\hat{\\bm{\\beta}}_n$ and \n$\\hat{\\bm{\\gamma}}_n$ denote $\\sqrt{n}$-consistent estimators of the parameters $\\bm{\\beta}_0$ and \n$\\bm{\\gamma}_0$ appearing above in (\\ref{outcome}) and (\\ref{selection}), respectively, where $\\bm{\\gamma}_0$ is \nassumed to be identified up to a location and scale normalization. The existence of such \nestimators has long been established; see, e.g., the proposals of \\citet{Han87,Robinson88,PowellStockStoker89,Andrews91,IchimuraLee91,Ichimura93,KleinSpady93,Powell01} or \n\\citet{Newey09}. In addition, suppose that\n$\\left\\{[\\begin{array}{cccc} D_i & \\bm{X}_i^{\\top} & Y_i & \\bm{Z}_i^{\\top} \\end{array}]:\\, i=1,\\ldots,n\\right\\}$ are iid \ncopies of the random vector $[\\begin{array}{cccc} D & \\bm{X}^{\\top} & Y & \\bm{Z}^{\\top} \\end{array}]$. Let \n$\\bm{z}\\in\\mathbb{R}^{l}$ be an arbitrary vector, and define \n\\begin{equation}\\label{etahatn}\n\\hat{\\eta}_n(\\bm{z})\\equiv \\frac{1}{n}\\sum_{i=1}^n 1\\left\\{(\\bm{Z}_i-\\bm{z})^{\\top}\\hat{\\bm{\\gamma}}_n\\le 0\\right\\}.\n\\end{equation}\n\nLet \n\\begin{equation}\\label{what}\n\\hat{W}_i\\equiv D_i\\left(Y_i-\\bm{X}^{\\top}_i\\hat{\\bm{\\beta}}_n\\right)\n\\end{equation}\nfor each $i\\in\\{1,\\ldots,n\\}$. 
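To fix ideas, the quantities defined in (\\ref{etahatn}) and (\\ref{what}) are straightforward to compute from the data and from the preliminary estimates $\\hat{\\bm{\\beta}}_n$ and $\\hat{\\bm{\\gamma}}_n$; the following sketch (in Python with NumPy, purely for illustration, with function and array names that are not part of the formal development) makes explicit that $\\hat{\\eta}_n(\\bm{Z}_i)$ is simply the empirical distribution function of the estimated selection indices evaluated at $\\bm{Z}_i^{\\top}\\hat{\\bm{\\gamma}}_n$.
\\begin{verbatim}
import numpy as np

def eta_hat(Z, gamma_hat):
    # Empirical c.d.f. transform of the estimated selection index:
    # eta_hat(z) = n^{-1} sum_i 1{ (Z_i - z)' gamma_hat <= 0 }, i.e. the
    # fraction of estimated indices no larger than z' gamma_hat.
    index = Z @ gamma_hat
    return np.array([(index <= v).mean() for v in index])

def w_hat(D, Y, X, beta_hat):
    # Partial residuals W_hat_i = D_i * (Y_i - X_i' beta_hat).
    return D * (Y - X @ beta_hat)
\\end{verbatim}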
This paper proposes to estimate $\\theta_0$ via a locally linear smoother of the form\n\\begin{equation}\\label{mhat}\n\\hat{\\theta}_n\\equiv \\bm{e}_1^{\\top}\\left(\\sum_{i=1}^n \\bm{S}_i K_i\\bm{S}_i^{\\top}\\right)^{-1}\\sum_{i=1}^n\\bm{S}_i K_i\\hat{W}_i,\n\\end{equation}\nwhere $\\bm{e}_1=[\\begin{array}{cc} 1 & 0\\end{array}]^{\\top}$ and where \n\\begin{equation}\\label{si}\n\\bm{S}_i=[\\begin{array}{cc} 1 & \\hat{\\eta}_n(\\bm{Z}_i)-1\\end{array}]^{\\top}\n\\end{equation} \nand \n\\begin{equation}\\label{ki}\nK_i= K\\left(h_n^{-1}\\left(\\hat{\\eta}_n(\\bm{Z}_i)-1\\right)\\right)\n\\end{equation} \nfor $i=1,\\ldots,n$. The quantity $h_n$ appearing in each \n$K_i$ denotes a bandwidth satisfying $h_n>0$, $h_n\\to 0$ and $nh_n^{3}\\to\\infty$ as $n\\to\\infty$, while for some \n$p\\ge 2$, $K(\\cdot)$ denotes a smoothing kernel of order $p$, i.e., one where \n$\\int_{-\\infty}^{\\infty} K(u) du=1$, \n$\\int_{-\\infty}^{\\infty} u^r K(u) du=0$ for all $r\\in \\{1,\\ldots,p-1\\}$ and \n$\\int_{-\\infty}^{\\infty} u^{p} K(u) du<\\infty$.\n\nAssume that the disturbance $U$ in the outcome equation (\\ref{outcome}) satisfies $E[|U|]<\\infty$ with \n$P\\left[E[U|\\bm{Z}]=0\\right]=1$. In addition, let the selection index $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ have \ndistribution function $F_0$, assumed to be absolutely continuous. The estimator of $\\theta_0$ given in (\\ref{mhat}) exploits \nthe fact that identification of $\\theta_0$ occurs ``at infinity'' \\citep{Chamberlain86}, or in any case depends crucially on the \nselection index $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ being able to take values sufficiently large so that the corresponding \nconditional probabilities of selection take values close to one. In particular, $\\theta_0$ is characterized by the equalities\n\\begin{eqnarray}\n & & E\\left[\\left.D\\left(Y-\\bm{X}^{\\top}\\bm{\\beta}_0\\right)\\right|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)=1\\right]\\nonumber\\\\\n &=& E\\left[\\left.1\\left\\{F_0(V)\\le 1\\right\\}\\left(\\theta_0+U\\right)\\right|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)=1\\right]\\nonumber\\\\\n &=&\\theta_0.\\label{theta0_id}\n\\end{eqnarray} \nThe proposed estimator exploits this representation of the estimand by directly \nestimating the quantity \n$E\\left[\\left.D\\left(Y-\\bm{X}^{\\top}\\bm{\\beta}_0\\right)\\right|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)=1\\right]$.\n\nOne can view the estimator given in (\\ref{mhat}) as an extension of the Yang--Stute symmetrized nearest-neighbours \n(SNN) estimator of a conditional mean \\citep{Yang81,Stute84} to the problem of estimating the intercept \nin the outcome equation (\\ref{outcome}). SNN estimators are ``design adaptive'' in the sense that their asymptotic normality can generally be established without technical conditions on the \nprobability of the design variable taking values in regions of low density \\citep{Stute84}. In the present context, it is shown under certain conditions \nthat the estimator $\\hat{\\theta}_n$ in (\\ref{mhat}) is asymptotically normal with a \nrate of convergence that depends neither on the extent to which \nthe unobservables $U$ and $V$ are dependent, nor on the relationship between the behaviours of the selection index\n$\\bm{Z}^{\\top}\\bm{\\gamma}_0$ and the unobservable $V$ in the right tails of their respective marginal distributions. 
\nThis is essentially accomplished by transforming the estimated indices $\\bm{Z}^{\\top}_i\\hat{\\bm{\\gamma}}_n$ into random \nvariables $\\hat{\\eta}_n(\\bm{Z}_i)$ that are approximately uniformly distributed on $[0,1]$. \n\nIt is worth noting that estimators of $\\theta_0$ that take the form of inverse density-weighted averages \n\\citep[e.g.,][]{Lewbel07} have rates of convergence that generally vary with the extent to which $U$ and $V$ are \ndependent, as well as with the relative right-tail behaviours of $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ and $V$ \n\\citep{KhanTamer10}. These estimators rely on an alternative representation of (\\ref{theta0_id}) and are consistent \nunder additional regularity conditions. In particular, suppose \n$m_0\\left(\\bm{z}^{\\top}\\bm{\\gamma}_0\\right)\\equiv E\\left[\\left. D\\left(Y-\\bm{X}^{\\top}\\bm{\\beta}_0\\right)\\right|\\bm{Z}^{\\top}\\bm{\\gamma}_0=\\bm{z}^{\\top}\\bm{\\gamma}_0\\right]$ \nis everywhere differentiable in $\\bm{z}^{\\top}\\bm{\\gamma}_0$ with derivative given by \n$m_0^{(1)}\\left(\\bm{z}^{\\top}\\bm{\\gamma}_0\\right)\\equiv \\partial m_0\\left(\\bm{z}^{\\top}\\bm{\\gamma}_0\\right)\/\\partial\\bm{z}^{\\top}\\bm{\\gamma}_0$. \nSuppose in addition that the distribution of $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ is absolutely continuous with density \n$f_0(\\cdot)$ such that \n$E\\left[\\left|m_0^{(1)}\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)\/f_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)\\right|\\right]<\\infty$. One \ncan then write\n\\begin{eqnarray}\n\\theta_0 &=& \\lim_{F_0(\\bm{z}^{\\top}\\bm{\\gamma}_0)\\to 1} E\\left[\\left. D\\left(Y-\\bm{X}^{\\top}\\bm{\\beta}_0\\right)\\right|\\bm{Z}=\\bm{z}\\right]\\nonumber\\\\\n\t&=&\\int_{-\\infty}^{\\infty} m^{(1)}_0\\left(\\bm{z}^{\\top}\\bm{\\gamma}_0\\right) d\\bm{z}^{\\top}\\bm{\\gamma}_0\\nonumber\\\\\n\t&=&\\int_{-\\infty}^{\\infty} \\frac{m_0^{(1)}\\left(\\bm{z}^{\\top}\\bm{\\gamma}_0\\right)}{f_0\\left(\\bm{z}^{\\top}\\bm{\\gamma}_0\\right)}\\cdot f_0\\left(\\bm{z}^{\\top}\\bm{\\gamma}_0\\right)d\\bm{z}^{\\top}\\bm{\\gamma}_0\\nonumber\\\\\n\t&=& E\\left[\\frac{m_0^{(1)}\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)}{f_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)}\\right],\\label{id5}\n\\end{eqnarray}\nwhich suggests estimating $\\theta_0$ via its approximate sample analogue in which $m_0^{(1)}(\\cdot)$ and \n$f_0(\\cdot)$ are replaced by suitable preliminary estimates, and in which the systematic trimming of observations \ncorresponding to small values of $f_0\\left(\\bm{Z}^{\\top}_i\\bm{\\gamma}_0\\right)$ may be required. This approach \nto estimating $\\theta_0$ follows that proposed by \\citet{Lewbel97} for estimating a binary choice model arising \nfrom a latent linear model in which a mean restriction is imposed on the latent error term, and is the \napproach to estimating $\\theta_0$ considered in more recent work by \\citet{Lewbel07,KhanTamer10} and \n\\citet{KhanNekipelov13}. Consistent estimators of $\\theta_0$ that exploit (\\ref{id5}) in this way naturally \ndepend critically on the assumed finiteness of \n$E\\left[\\left|m_0^{(1)}\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)\/f_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)\\right|\\right]$, \nan assumption that in turn leads the rates at which they converge to their limiting distributions to depend on the \nrelative tail behaviours of the variables in the selection equation or the extent to which selection is endogenous \n\\citep{KhanTamer10}. 
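As a purely schematic illustration of this alternative approach (and not of the estimator of \\citet{Lewbel07} as actually implemented), one might replace $m_0^{(1)}(\\cdot)$ in (\\ref{id5}) by the slope of a local linear regression of the $\\hat{W}_i$ on the estimated indices, replace $f_0(\\cdot)$ by a kernel density estimate, and average the resulting ratio after trimming observations with small estimated density; the kernel, bandwidth and trimming rule below are arbitrary choices made only for concreteness.
\\begin{verbatim}
import numpy as np

def inverse_density_weighted_theta(W, index, h, trim=0.02):
    # Schematic sample analogue of theta_0 = E[ m0'(index) / f0(index) ].
    n = index.shape[0]
    u = (index[None, :] - index[:, None]) / h   # u[i, j] = (index_j - index_i)/h
    K = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov weights
    f_hat = K.mean(axis=1) / h                  # kernel density estimate at each index
    m1_hat = np.empty(n)
    for i in range(n):                          # local linear slope estimates m0'
        S = np.column_stack((np.ones(n), index - index[i]))
        A = S.T @ (K[i][:, None] * S)
        b = S.T @ (K[i] * W)
        m1_hat[i] = np.linalg.solve(A, b)[1]
    keep = f_hat > np.quantile(f_hat, trim)     # crude trimming of low-density indices
    return np.mean(m1_hat[keep] / f_hat[keep])
\\end{verbatim}
The instability of such ratios at index values where the estimated density is small is precisely the feature that makes the behaviour of these averages sensitive to the tail conditions discussed by \\citet{KhanTamer10}.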
The non-uniformity in the rate of convergence as one varies the relative tail behaviours of the \ndeterminants of selection or the dependence between the disturbance terms $U$ and $V$ is a feature that is also shared by \nestimators of $\\theta_0$ that involve locally constant or polynomial regressions of $\\hat{W}_i$ on the \nuntransformed estimated selection indices \n$\\bm{Z}^{\\top}_i\\hat{\\bm{\\gamma}}_n$ \\citep[e.g.,][]{Heckman90,AndrewsSchafgans98}. Results of this \nnature significantly complicate the task of statistical inference regarding $\\theta_0$. By way of contrast, the \ntransformations to $\\hat{\\eta}_n(\\bm{Z}_i)$ of the selection indices used in the locally linear \nSNN estimator given in (\\ref{mhat}) permit the locally linear SNN estimator to enjoy asymptotic normality with a rate \nof convergence that varies neither with the endogeneity of selection nor with the relative tail behaviours of the \nvariables appearing in the selection equation.\n\nIt should also be noted that the result given above in (\\ref{theta0_id}), in which the estimand is identified as\n$\\theta_0=E\\left[\\left.D\\left(Y-\\bm{X}^{\\top}\\bm{\\beta}_0\\right)\\right|F_0(\\bm{Z}^{\\top}\\bm{\\gamma}_0)=1\\right]$, \nmotivates the formulation of the estimator in (\\ref{mhat}) as the intercept in a locally linear regression. \nOne could as easily in this context use (\\ref{theta0_id}) to motivate an estimator of $\\theta_0$ as the corresponding \nvariant of a Nadaraya--Watson (i.e., locally constant regression) estimator; see in particular the approach taken \nin \\citet{StuteZhu05}. The focus on a locally linear regression estimator of $\\theta_0$ is purely to improve the \nrate at which the bias of the proposed estimator vanishes in large samples, given that locally linear regression estimators \nhave biases that converge at the same rate regardless of whether the conditioning variable is evaluated at an interior or at a limit point of \nits support \\citep[e.g.,][]{FanGijbels92}. 
Nadaraya--Watson estimators, on the other hand, have biases that \nconverge relatively slowly when the conditioning variable is evaluated at a limit point of its support.\n\nAssumptions underlying the first-order asymptotic behaviour of the estimator $\\hat{\\theta}_n$ given in \n(\\ref{mhat}) are as follows.\n\n\\begin{assumption}\\label{a1}\n\\begin{enumerate}\n\\item\\label{a1a} \n\\begin{enumerate}\n\\item $\\bm{X}$ is $k$-variate, with support not contained in any proper linear subspace of \n$\\mathbb{R}^k$;\n\\item $E\\left[\\left\\|\\bm{X}\\right\\|\\right]<\\infty$.\n\\end{enumerate} \n\\item\\label{a1b}\n\\begin{enumerate} \n\\item $\\bm{Z}$ is $l$-variate;\n\\item the support of $\\bm{Z}$ is not contained in any proper linear subspace of $\\mathbb{R}^{l}$;\n\\item the first component of $\\bm{\\gamma}_0$ is equal to one;\n\\item $\\bm{Z}$ does not contain a non-stochastic component;\n\\item\\label{a1b4} the distribution $F_0$ of the selection index $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ is absolutely \ncontinuous, with a density function $f_0$ that is differentiable on the support of \n$\\bm{Z}^{\\top}\\bm{\\gamma}_0$.\n\\end{enumerate}\n\\item\\label{a1c} The set $\\left\\{(D_i,\\bm{X}^{\\top}_i, \\bm{Z}^{\\top}_i, Y_i):\\,i=1,\\ldots,n\\right\\}$ consists of \nindependent observations each with the same distribution as the random vector \n$(D, \\bm{X}^{\\top}, \\bm{Z}^{\\top}, Y)$, which is generated according to the model given above in \n(\\ref{outcome})--(\\ref{observation}), and where \n\\begin{eqnarray*}\nE\\left[U^2\\right] &<& \\infty;\\\\\nP\\left[E[U|\\bm{X}]=0\\right] &=& 1;\\\\\nP\\left[E[U|\\bm{Z}] = E[U]\\right] &=& 1.\n\\end{eqnarray*} \n\\item\\label{a1d} \n\\begin{enumerate}\n\\item The joint conditional distribution given \n$[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]=[\\begin{array}{cc} \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]$ of the \ndisturbances $[\\begin{array}{cc} U & V\\end{array}]$ appearing in (\\ref{outcome}) and (\\ref{selection}) is absolutely continuous for all \n$[\\begin{array}{cc} \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]$ in the support of $[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]$; \nmoreover, the corresponding joint density $g_{U,V|\\bm{x},\\bm{z}}(\\cdot,\\cdot)$ is continuously differentiable in both arguments almost everywhere on $\\mathbb{R}^2$.\n\\item\\label{a1d2} The conditional distribution given $[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]=[\\begin{array}{cc} \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]$ \nof $V$ is absolutely continuous for all points $[\\begin{array}{cc} \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]$ in the support of \n$[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]$; the corresponding density $g_{V|\\bm{x},\\bm{z}}(\\cdot)$ is\ndifferentiable almost everywhere on $\\mathbb{R}$.\n\\end{enumerate} \n\\end{enumerate}\n\\end{assumption} \n\n\\begin{assumption}\\label{a2}\n\\begin{enumerate}\n\\item\\label{a2a} There exist estimators $\\hat{\\bm{\\beta}}_n$ and $\\hat{\\bm{\\gamma}}_n$ such that \n$\\left\\|\\hat{\\bm{\\beta}}_n-\\bm{\\beta}_0\\right\\|=O_p\\left(n^{-1\/2}\\right)$ \nand $\\left\\|\\hat{\\bm{\\gamma}}_n-\\bm{\\gamma}_0\\right\\|=O_p\\left(n^{-1\/2}\\right)$. 
\n\\item\\label{a2b} The smoothing kernel $K(\\cdot)$ is bounded and twice continuously differentiable with \n$K(u)>0$ on $[0,1]$ and $K(u)=0$ for all $u\\not\\in [0,1]$, and satisfies $\\int K(u) du=1$ and $\\int K^2(u) du<\\infty$.\n\\item\\label{a2c} The bandwidth sequence $\\left\\{h_n\\right\\}$ satisfies $h_n>0$ with $h_n\\to 0$ as $n\\to\\infty$, \nand $nh_n^3\\to\\infty$.\n\\item\\label{a2d} \n\\begin{enumerate}\n\\item \\label{a2d1} There exists $p\\ge 2$ such that $\\int u^j K(u) du=0$ for all $j\\in\\{1,\\ldots,p-1\\}$ and $\\int u^p K(u) du <\\infty$.\n\\item\\label{a2d11} The following hold for $p^*$ equal to the smallest odd integer greater than or equal to $p+1$, where $p$ is the constant specified \nin part~\\ref{a2d1} of this \nassumption: \n\\begin{enumerate}\n\\item\\label{a2d2} The joint conditional density $g_{U,V|\\bm{x},\\bm{z}}(\\cdot,\\cdot)$ specified in Assumption~\\ref{a1}.\\ref{a1d} is $p^*$-times \ncontinuously differentiable in both arguments almost everywhere on $\\mathbb{R}^2$ for all \n$[\\begin{array}{cc} \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]$ in the support of $[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]$.\n\\item\\label{a2d3} Similarly, the conditional density $g_{V|\\bm{x},\\bm{z}}(\\cdot)$ specified in Assumption~\\ref{a1}.\\ref{a1d} is $p^*$-times \ncontinuously differentiable almost everywhere on $\\mathbb{R}$ for all $[\\begin{array}{cc} \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]$ in the support of \n$[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]$.\n\\end{enumerate}\n\\end{enumerate}\n\\end{enumerate}\n\\end{assumption}\n\n\\noindent The conditions of Assumption~\\ref{a1} are largely standard and notably suffice for the selection parameter $\\bm{\\gamma}_0$ \nto be identified up to the particular location and scale normalization imposed by Assumption~\\ref{a1}.\\ref{a1b}. \nAssumption~\\ref{a1} also does not restrict the components $V$ and $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ of the selection equation to be \nindependent. \n\nAssumption~\\ref{a1} plays a crucial role in controlling the asymptotic bias of the proposed estimator $\\hat{\\theta}_n$. In \nparticular, identification of $\\bm{\\gamma}_0$ subject to the conditions of Assumption~\\ref{a1}, along with the \ndifferentiability conditions of Assumption~\\ref{a2}.\\ref{a2d2}--\\ref{a2}.\\ref{a2d3}, implies a smoothness restriction on \nthe conditional mean function\n\\begin{equation}\\label{m0F}\nm_{F_0}(q)\\equiv E\\left[D\\left(Y-\\bm{X}^{\\top}\\bm{\\beta}_0\\right)\\left|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)=q\\right.\\right].\n\\end{equation}\nThis smoothness restriction takes the form of differentiability of $m_{F_0}(q)$ for $q\\in (0,1)$ up to order no less than $p$, where $p\\ge 2$ is the \nconstant specified in Assumption~\\ref{a2}.\\ref{a2d}, along with finiteness of the left-hand limit of \n$\\left(d^p\/dq^p\\right) E\\left[D\\left(Y-\\bm{X}^{\\top}\\bm{\\beta}_0\\right)\\left|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)=q\\right.\\right]$ \nat $q=1$. This smoothness restriction, in other words, corresponds to a standard assumption in the literature on kernel estimation of \nconditional mean functions. On the other hand, the differentiability to order $p^*$ in the second argument of \n$g_{U,V|\\bm{x},\\bm{z}}(\\cdot,\\cdot)$ is slightly stronger than the usual assumption of differentiability to \norder $p$. 
This slight strengthening of the standard differentiability condition is used in the rate optimality arguments developed below in \nSection~\\ref{opt}. Details are contained in the proof of Theorem~\\ref{ubthm} below. \n\nThe smoothness restriction on $m_{F_0}(q)$ given in (\\ref{m0F}) can also be seen to be implied by the identification of \n$\\bm{\\gamma}_0$ up to a location and scale normalization and by the smoothness conditions imposed in Assumption~\\ref{a2}.\\ref{a2d} on \nthe conditional densities $g_{U,V|\\bm{x},\\bm{z}}(\\cdot,\\cdot)$ and $g_{V|\\bm{x},\\bm{z}}(\\cdot)$ for any \n$[\\begin{array}{cc} \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]$ in the support of $[\\begin{array}{cc} \\bm{X}^{\\top} & \\bm{Z}^{\\top}\\end{array}]$. In \nparticular, one can show that under the conditions of Assumptions~\\ref{a1} and \\ref{a2}.\\ref{a2d}, $U$ has a conditional distribution given \n$F_0(V)=q$ for any $q\\in [0,1]$ that is absolutely continuous with density\n\\begin{equation}\\label{condl_dens}\nr_{U|Q}(u|q)\\equiv\\frac{g_{UV}\\left(u,F_0^{-1}(q)\\right)}{g_V\\left(F^{-1}_0(q)\\right)},\n\\end{equation}\nwhere $g_{UV}(\\cdot,\\cdot)$ and $g_V(\\cdot)$ are respectively the joint density of $[\\begin{array}{cc} U & V\\end{array}]$ and the marginal density of \n$V$. The conditional density $r_{U|Q}(u|q)$ is, given the absolute continuity of $F_0$ and the differentiability conditions on $g_{UV}$ and $g_V$ \nimplied by Assumption~\\ref{a2}.\\ref{a2d}, $(p+1)$-times differentiable in $q$ on $(0,1)$ for any $u\\in\\mathbb{R}$. The $(p+1)$-times \ndifferentiability of $r_{U|Q}(u|q)$ in $q$ on $(0,1)$ in turn implies the finiteness of \n$\\left.\\left(\\partial^p\/\\partial q^p\\right) r_{U|Q}(u|q)\\right|_{q=1}$ for any $u\\in\\mathbb{R}$. It is the finiteness of \n$\\left.\\left(\\partial^p\/\\partial q^p\\right) r_{U|Q}(u|q)\\right|_{q=1}$ that implies the smoothness restrictions on $m_{F_0}(q)$ mentioned above. \nFurther details are supplied below in Appendix~\\ref{lemmabound}.\n\nLet $\\sigma^2_{U|F_0(\\bm{Z}^{\\top}\\bm{\\gamma}_0)}(q)\\equiv E\\left[U^2\\left|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)=q\\right.\\right]$, where $U$ is \nthe disturbance in the outcome equation (\\ref{outcome}). The following result summarizes the large-sample behaviour to first order of the proposed \nestimator:\n\n\\begin{theorem}\\label{mainthm}\nUnder the conditions of Assumptions~\\ref{a1} and \\ref{a2}, the estimator $\\hat{\\theta}_n$ given above in \n(\\ref{mhat}) satisfies\n\\[\n\\sqrt{nh_n}\\left(\\hat{\\theta}_n-\\theta_0-\\frac{h_n^p}{p!}\\int u^p K(u) du\\cdot m^{(p)}_{F_0}(1)\\right)\\stackrel{d}{\\to} N\\left(0,\\sigma^2_{U|F_0(\\bm{Z}^{\\top}\\bm{\\gamma}_0)}(1)\\int K^2(u) du\\right)\n\\]\nas $n\\to\\infty$, where $m^{(p)}_{F_0}(1)=\\lim_{q\\uparrow 1}\\left.\\left(d^p\/dq^{\\prime p}\\right) m_{F_0}(q^{\\prime})\\right|_{q^{\\prime}=q}$ for $m_{F_0}(\\cdot)$ as given above in (\\ref{m0F}).\n\\end{theorem}\n\nIt follows from Theorem~\\ref{mainthm} that the rate of convergence of $\\hat{\\theta}_n$ to its limiting normal distribution \nis unaffected by the dependence, if any, between the disturbance terms $U$ and $V$ in (\\ref{outcome}) and (\\ref{selection}), \nrespectively. The rate of convergence of $\\hat{\\theta}_n$ is also unaffected \nby the relative upper tail thicknesses of the distributions of $V$ and of the selection index \n$\\bm{Z}^{\\top}\\bm{\\gamma}_0$. 
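For concreteness, the estimator $\\hat{\\theta}_n$ analysed in Theorem~\\ref{mainthm} is nothing more than the intercept of a kernel-weighted least-squares fit of the $\\hat{W}_i$ on $\\hat{\\eta}_n(\\bm{Z}_i)-1$, and can be computed along the following lines; the sketch continues the earlier one, and the second-order Epanechnikov kernel is the choice used in the simulations of Section~\\ref{mc} rather than a requirement of the theory.
\\begin{verbatim}
import numpy as np

def theta_hat(W, eta, h):
    # Locally linear SNN estimate of the intercept: weighted least squares of
    # W_hat_i on (eta_hat_i - 1) with kernel weights K((eta_hat_i - 1)/h),
    # evaluated at eta = 1.  Returns the intercept of the local fit.
    u = (eta - 1.0) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov
    S = np.column_stack((np.ones_like(eta), eta - 1.0))       # S_i = (1, eta_i - 1)'
    A = S.T @ (K[:, None] * S)
    b = S.T @ (K * W)
    return np.linalg.solve(A, b)[0]
\\end{verbatim}
Under an undersmoothing bandwidth, the output of such a computation can be combined with an estimate of $\\sigma^2_{U|F_0(\\bm{Z}^{\\top}\\bm{\\gamma}_0)}(1)\\int K^2(u) du$ to form approximate large-sample confidence intervals for $\\theta_0$; see Corollary~\\ref{cormain} below.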
\n\nWe also have from the statement of Theorem~\\ref{mainthm} that a necessary condition for the consistency of \n$\\hat{\\theta}_n$ is the finiteness of the derivative $m^{(p)}_{F_0}(1)$. \nThe finiteness of $m^{(p)}_{F_0}(1)$, as discussed above, is implied by the identification of $\\bm{\\gamma}_0$ up to a location \nand scale normalization as well as by the differentiability conditions \nspecified in Assumptions~\\ref{a2}.\\ref{a2d2}--\\ref{a2}.\\ref{a2d3}. From this it follows that the consistency of the \nproposed estimator is implied by natural restrictions on the joint distribution of \n$[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$. These distributional restrictions correspond \ncollectively to a standard smoothness restriction in the literature on kernel estimation of conditional mean functions.\n\nThe presence of $m^{(p)}_{F_0}(1)$ in the bias term appearing in Theorem~\\ref{mainthm}, however, \nindicates that the approximate large-sample bias of $\\hat{\\theta}_n$ does depend on the extent to which $U$ is mean \ndependent on $V$. In particular, the conditional mean derivative $m^{(p)}_{F_0}(1)$ depends on the smoothness of the conditional \nmean $E\\left[U\\left|F_0(V)=q\\right.\\right]$ as a function of $q$ for values of $q$ near one; Appendix~\\ref{lemmabound} below\ncontains further discussion. It is worth noting in this connection that $m^{(p)}_{F_0}(1)=0$ when the selection mechanism is exogenous to the extent \nthat $U$ is mean independent of $V$, i.e., when $P\\left[E[U|V]= E[U]\\right]=1$. \n\nThe dependence of the approximate large-sample bias of $\\hat{\\theta}_n$ on the joint distribution of \n$[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$ through the conditional mean derivative \n$m^{(p)}_{F_0}(1)$ can be ameliorated in practice by a judicious choice of variable\nbandwidth; see Corollary~\\ref{cormain} below and the corresponding discussion and simulation \nevidence presented in Section~\\ref{mc}. \nTheorem~\\ref{mainthm} in any case indicates that the approximate large-sample bias, but not the variance, of the proposed estimator \ndepends on the joint distribution of $[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$. 
\nTheorem~\\ref{mainthm} as such distinguishes the asymptotic behaviour of $\\hat{\\theta}_n$ from those of existing estimators \nof $\\theta_0$ \\citep[e.g.,][]{Heckman90,Lewbel07} whose biases and variances both depend on the joint distribution \nof $[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$.\n\nThe following corollary is immediate from Theorem~\\ref{mainthm}:\n\n\\begin{corollary}\\label{cormain}\nThe following hold under the conditions of Theorem~\\ref{mainthm}:\n\\begin{enumerate}\n\\item\\label{cor1} If the additional condition that $nh_n^{2p+1}\\to 0$ holds, we have\n\\[\n\\sqrt{nh_n}\\left(\\hat{\\theta}_n-\\theta_0\\right)\\stackrel{d}{\\to} N\\left(0,\\sigma^2_{U|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)}(1)\\int K^2(u) du\\right)\n\\]\nas $n\\to\\infty$.\n\\item\\label{cor2} The theoretical bandwidth $h^*_n$ minimizing the asymptotic mean-squared error of $\\hat{\\theta}_n$ is given by\n\\[\nh^*_n=\\left[\\frac{(p!)^2\\sigma^2_{U|F_0\\left(\\bm{Z}^{\\top}\\bm{\\gamma}_0\\right)}(1)\\int K^2(u) du}{2p\\left(\\int u^p K(u) du\\right)^2\\left(m^{(p)}_{F_0}(1)\\right)^2 \\cdot n}\\right]^{\\frac{1}{2p+1}}.\n\\]\n\\end{enumerate}\n\\end{corollary}\n\n\\section{Rate Optimality}\\label{opt}\n\n\\noindent Continue to let $p\\ge 2$ be as specified above in Assumption~\\ref{a2} in the \nprevious section, and $n$ the sample size. This section shows that under the conditions of Assumptions~\\ref{a1} \nand \\ref{a2}, the rate $n^{-p\/(2p+1)}$ is the fastest achievable, or {\\it optimal}, rate of convergence of an \nestimator of the intercept $\\theta_0$ in (\\ref{outcome}). The optimality in question is relative to the \nconvergence rates of all other estimators of $\\theta_0$ and excludes by definition those estimators that are asymptotically \nsuperefficient at particular points in the underlying parameter space. The exclusion of \nsuperefficient estimators in this context notably rules out estimators that converge at the parametric \nrate of $n^{-1\/2}$ under conditions stronger than those given above in Assumptions~\\ref{a1} and \\ref{a2}. For \nexample, the OLS estimator of $\\theta_0$ based on observations for which $D_i=1$ is superefficient for \nspecifications of (\\ref{outcome})--(\\ref{observation}) over submodels in which the disturbance $U$ and the selection indicator $D$ are conditionally \nmean independent given $\\bm{X}$ and $\\bm{Z}$, i.e., models where $P\\left[E[U|D=1,\\bm{X},\\bm{Z}]=E[U|D=0,\\bm{X},\\bm{Z}]=0\\right]=1$. As noted in the \nIntroduction, the OLS estimator of $\\theta_0$ converges at the standard rate of $n^{-1\/2}$ when $U$ and $D$ are conditionally mean independent given \n$\\bm{X}$ and $\\bm{Z}$ but is otherwise inconsistent. \n\nThe approach to optimality taken here follows that of \\citet{Horowitz93}, which was in turn based on the \napproach of \\citet{Stone80}. In particular, let \n$\\left\\{\\Psi_n:\\,n=1,2,3,\\ldots\\right\\}$ denote a sequence of sets of the form \n\\begin{equation}\\label{Psin}\n\\Psi_n=\\left\\{\\psi:\\,\\psi=(\\bm{\\psi}_1,g)\\right\\},\n\\end{equation} \nwhere $\\bm{\\psi}_1=(\\theta,\\bm{\\beta}^{\\top},\\bm{\\gamma}^{\\top})^{\\top}\\in\\mathbb{R}^{1+k+l}$ and where \n$g$ denotes the joint conditional density given $\\bm{X}$ and $\\bm{Z}$ of the disturbances $U$ and $V$ appearing above in (\\ref{outcome}) \nand (\\ref{selection}), respectively. The quantity $\\psi$ may depend generically on $n$. 
\n\nConsider the observable random variables $D$, $Y$, $\\bm{X}$ and $\\bm{Z}$ appearing above in \n(\\ref{outcome})--(\\ref{observation}). Suppose that for each $n$, the joint conditional distribution of the \nvector $(D,Y)$ given $\\bm{X}$ and $\\bm{Z}$ is indexed by some $\\psi\\in\\Psi_n$. Let \n$P_{\\psi}[\\cdot]\\equiv P_{(\\bm{\\psi}_1,g)}[\\cdot]$ denote the corresponding probability measure. Following \n\\citet{Stone80}, one may in this context define a constant $\\rho>0$ to be an {\\it upper bound} on the rate of \nconvergence of estimators of the intercept parameter $\\theta_0$ if for every estimator sequence \n$\\left\\{\\theta_n\\right\\}$,\n\\begin{equation}\\label{ub1}\n\\liminf_{n\\to\\infty}\\sup_{\\psi\\in\\Psi_n} P_{\\psi}\\left[\\left|\\theta_n-\\theta\\right|> sn^{-\\rho}\\right]>0\n\\end{equation}\nfor all $s>0$, and if \n\\begin{equation}\\label{ub2}\n\\lim_{s\\to 0}\\liminf_{n\\to\\infty}\\sup_{\\psi\\in\\Psi_n} P_{\\psi}\\left[\\left|\\theta_n-\\theta\\right|>sn^{-\\rho}\\right]=1,\n\\end{equation}\nwhere $\\theta$ as it appears in (\\ref{ub1}) and (\\ref{ub2}) refers to the first component of the \nfinite-dimensional component $\\bm{\\psi}_1$ of $\\psi$. \n\nIn addition, define $\\rho>0$ to be an {\\it achievable} rate of convergence for the intercept parameter if there \nexists an estimator sequence $\\left\\{\\theta_n\\right\\}$ such that\n\\begin{equation}\\label{ach}\n\\lim_{s\\to\\infty}\\limsup_{n\\to\\infty}\\sup_{\\psi\\in\\Psi_n} P_{\\psi}\\left[\\left|\\theta_n-\\theta\\right|> sn^{-\\rho}\\right]=0.\n\\end{equation}\nOne calls $\\rho>0$ the {\\it optimal} rate of convergence for estimation of the intercept parameter if it is both \nan upper bound on the rate of convergence and achievable. In what follows, I first show that for large $n$, \n$p\/(2p+1)$ is an upper bound on the rate of convergence. I then show that there exists an implementation of \nthe estimator given in (\\ref{mhat}) that attains the $n^{-p\/(2p+1)}$-rate of convergence uniformly \nover $\\Psi_n$ as $n\\to\\infty$. \n\nThe approach taken first involves the specification for each $n$ of a subset $\\Psi^*_n$ of the parameter set \n$\\Psi_n$ in which the finite-dimensional component $\\bm{\\psi}_1\\equiv\\bm{\\psi}_{1n}$ lies in a shrinking \nneighbourhood $\\Psi^*_{1n}$ of some point \n$[\\begin{array}{ccc} \\theta_0 & \\bm{\\beta}_0^{\\top} & \\bm{\\gamma}_0^{\\top}\\end{array}]^{\\top}\\in\\mathbb{R}^{1+k+l}$. In addition, the \ninfinite-dimensional component $g$ is embedded in a curve (i.e., parametrization) indexed by a scalar $\\psi_{2n}$ on a shrinking \nneighbourhood $\\Psi^*_{2n}$ of a bivariate density function $g_0$ satisfying all relevant conditions of Assumptions~\\ref{a1} and \\ref{a2} for a \nconditional density of $U$ and $V$ given $\\bm{X}$ and $\\bm{Z}$. \n\nIn particular, consider a parametrization of the conditional joint density $g$ of $(U,V)$ given $\\bm{X}$ and $\\bm{Z}$ given by $g_{\\psi_{2n}}$ for \n$\\psi_2\\in\\Psi^*_{2n}$, where for some $\\psi_{2n0}\\in\\Psi^*_{2n}$, we have $g_{\\psi_{2n0}}(u,v|\\bm{x},\\bm{z})=g_0(u,v|\\bm{x},\\bm{z})$ for each \n$[\\begin{array}{cccc} u & v & \\bm{x}^{\\top} & \\bm{z}^{\\top}\\end{array}]\\in\\mathbb{R}^{2+k+l}$; i.e., the curve on $\\Psi^*_{2n}$ given by \n$\\psi_{2n}\\to g_{\\psi_{2n}}$ passes through the true conditional joint density $g_0$ at some point $\\psi_{2n0}\\in\\Psi^*_{2n}$.\n\nNow let $\\Psi^*_n\\equiv \\Psi^*_{1n}\\times\\Psi^*_{2n}$. 
Let $s>0$ be arbitrary, and let $\\left\\{\\theta_n\\right\\}$ denote an arbitrary sequence of estimators of \n$\\theta_0$. Consider that if\n\\begin{equation}\\label{ub1b}\n\\liminf_{n\\to\\infty}\\sup_{\\psi_{n}\\in\\Psi^*_{n}} P_{\\psi_{n}}\\left[n^{\\frac{p}{2p+1}}\\left|\\theta_n-\\theta\\right|>s\\right]>0,\n\\end{equation}\nthen (\\ref{ub1}) holds with $\\rho=p\/(2p+1)$. This is because the set $\\Psi_n$ in (\\ref{ub1}) contains the \nset over which the supremum is taken in (\\ref{ub1b}). Similarly, if \n\\begin{equation}\\label{ub2b}\n\\lim_{s\\to 0}\\liminf_{n\\to\\infty}\\sup_{\\psi_{n}\\in\\Psi^*_{n}} P_{\\psi_{n}}\\left[n^{\\frac{p}{2p+1}}\\left|\\theta_n-\\theta\\right|>s\\right]=1\n\\end{equation}\nholds, then so does (\\ref{ub2}).\n\nIt follows that proving (\\ref{ub1b}) and (\\ref{ub2b}) suffices to show that $p\/(2p+1)$ is an upper bound on \nthe rate of convergence; the key step in the proof is the specification of a suitable parametrization \n$\\psi_{2n}\\to g_{\\psi_{2n}}$ for $\\psi_{2n}\\in\\Psi^*_{2n}$. This is in fact the approach taken \nin Appendix~\\ref{ubthmpf}, which contains a proof of the following result:\n\n\\begin{theorem}\\label{ubthm}\n\nUnder the conditions of Assumptions~\\ref{a1} and \\ref{a2}, (\\ref{ub1b}) and (\\ref{ub2b}) hold.\n\\end{theorem}\nTheorem~\\ref{ubthm} implies that $p\/(2p+1)$ is an upper bound on the rate of convergence of an estimator \nsequence $\\left\\{\\theta_n\\right\\}$ in the minimax sense of (\\ref{ub1}) and (\\ref{ub2}) above.\n\nNext, it is shown that $p\/(2p+1)$ is an achievable rate of convergence in the sense of (\\ref{ach}) by \nexhibiting an estimator sequence $\\left\\{\\theta_n\\right\\}$ such that (\\ref{ach}) holds with $\\rho=p\/(2p+1)$. \nIn this connection, let $\\hat{\\theta}_n^*$ denote the proposed estimator given above in (\\ref{mhat}) implemented \nwith a bandwidth $h_n^*=cn^{-1\/(2p+1)}$ for some constant $c>0$. In this case, (\\ref{ach}) is satisfied with \n$\\theta_n=\\hat{\\theta}^*_n$ and $\\rho=p\/(2p+1)$:\n\n\\begin{theorem}\\label{achthm}\n\nSuppose Assumptions~\\ref{a1} and \\ref{a2} hold. Then (\\ref{ach}) holds with $\\theta_n=\\hat{\\theta}_n^*$ and \n$\\rho=p\/(2p+1)$, where $\\hat{\\theta}_n^*$ denotes the estimator given above in (\\ref{mhat}) implemented with a bandwidth \n$h_n^*=c n^{-1\/(2p+1)}$ for some constant $c>0$. \n\\end{theorem}\n\nTheorems~\\ref{ubthm} and \\ref{achthm} jointly imply that $p\/(2p+1)$ is the optimal rate of convergence for \nestimation of $\\theta_0$.\n\n\\section{Numerical Evidence}\\label{mc}\n\n\\noindent This section reports the results of simulation experiments that compare the finite-sample \nbehaviour of the estimator in (\\ref{mhat}) to the behaviours of alternative estimators. 
The simulations involved:\n\\begin{itemize}\n\\item variation in the correlation between the unobservable terms in the outcome and selection equations;\n\\item variation in the relative upper tail thicknesses of the selection index and the unobservable term in the \nselection equation, thus implying variation in the degree to which the parameter of interest is identified;\n\\item and the imposition of two different parametric families for the joint distribution of \n$[\\begin{array}{ccc} U & V & \\bm{Z}^{\\top}\\bm{\\gamma}_0\\end{array}]$, where $U$ is the unobservable term in the \noutcome equation, $V$ is the unobservable term in the selection equation and $\\bm{Z}^{\\top}\\bm{\\gamma}_0$ is the \nselection index.\n\\end{itemize}\nEach simulation experiment involved 1000 replicated samples of sizes $n\\in\\{100,400\\}$ from the model given above in \n(\\ref{outcome})--(\\ref{observation}), where the parameter of interest was fixed at $\\theta_0=1$, the variance \nof the unobservable term in the selection equation was fixed at $Var[V]=1$ and where for some constant \n$\\rho\\in [-1,1]$, the unobservable term in the outcome equation was specified as $U=\\rho V+E$ for a random variable \n$E$ independent of $V$, where $E\\sim N\\left(0,1-\\rho^2\\right)$. The parameter $\\rho$ in this case is by construction \nthe correlation coefficient between $U$ and $V$. The simulations considered the settings \n$\\rho\\in\\{0,.25,.50,.75,.95\\}$. \n\nIn addition, the vector $\\bm{Z}$ of observable predictors of selection was taken to be $l$-variate with \n$\\bm{Z}=[\\begin{array}{ccc} Z_1 & \\cdots & Z_l\\end{array}]^{\\top}$, \nwhile the vector $\\bm{X}$ of outcome predictors was specified to be $k$-variate with $k<l$. The parameter $\\alpha>0$ used in the specification of \nboth models is defined so as to index the degree to which the parameter of interest $\\theta_0$ is identified. In particular, $\\alpha\\ge 1$ can be \nseen in this context to be a necessary condition for the identification of $\\theta_0$, and corresponds to the case where the \ntransformation $F_0(V)$ has an absolutely continuous distribution with density given by the ratio $r_Q(q)$ defined in (\\ref{marg}) below. Values of \n$\\alpha<1$, on the other hand, correspond to the non-identifiability of $\\theta_0$, with $\\alpha\\in (0,1)$ in the context of \neither of the following two DGPs implying failure of the condition that $F_0(V)$ have an absolutely continuous \ndistribution supported on $[0,1]$. In particular, $\\alpha\\in (0,1)$ in the following two DGPs implies that the quantity \n$r_Q(q)$ given in (\\ref{marg}) below has the property that $r_Q(1)=\\infty$:\n \n\\begin{itemize}\n\\item (DGP1) I take\n\\[\n\\left[\\begin{array}{c} \\bm{Z} \\\\ V \\end{array}\\right]\\sim N\\left(\\bm{0},\\bm{I}_{l+1}\\right).\n\\]\nIn addition, the selection parameter $\\bm{\\gamma}_0$ is set to $\\bm{\\gamma}_0=[\\begin{array}{ccc} \\sqrt{\\alpha\/l} & \\cdots & \\sqrt{\\alpha\/l}\\end{array}]^{\\top}$ for \na constant $\\alpha>0$. In this way we have \n\\[\n\\left[\\begin{array}{c}\\bm{Z}^{\\top}\\bm{\\gamma}_0 \\\\ V\\end{array}\\right]\\sim N\\left(\\bm{0},\\left[\\begin{array}{cc} \\alpha & 0 \\\\ 0 & 1\\end{array}\\right]\\right).\n\\] \n\\item (DGP2) $Z_1,\\ldots,Z_l$ are iid standard Cauchy and jointly mutually independent of $V$, while \n$V$ is absolutely continuous on $[1,\\infty)$ with Pareto type-I density given by \n\\[\ng_{0V}(v)=\\alpha v^{-\\alpha-1},\n\\] \nwhere $\\alpha>0$ is a constant. 
In addition, the selection parameter $\\bm{\\gamma}_0$ is set to \n$\\bm{\\gamma}_0=[\\begin{array}{cc} \\bm{0}_{l-1}^{\\top} & 1\\end{array}]^{\\top}$. In this way the selection index \n$\\bm{Z}^{\\top}\\bm{\\gamma}_0$ is standard Cauchy and independent of $V$.\n\\end{itemize}\nThe simulations under both DGPs considered the settings $\\alpha\\in\\{2.00,1.50,1.25,1.00\\}$. The effect of variation in \nthe correlation coefficient $\\rho$ and the parameter $\\alpha$ on estimation of the intercept was a primary focus of \nthese simulations. The outcome-equation nuisance parameter $\\bm{\\beta}_0$ and the selection parameter $\\bm{\\gamma}_0$ \nwere accordingly fixed at their true values in these simulations in order to provide a clearer picture of the effects of \nvariation in $\\rho$ and $\\alpha$ on the behaviour of the various intercept estimators considered.\n\nThe proposed intercept estimator given above in (\\ref{mhat}) was implemented with a standard \n(i.e., second-order) Epanechnikov kernel. The bandwidth used to implement $\\hat{\\theta}_n$ was initially \nset to the sample analogue of the theoretical asymptotic MSE-optimal bandwidth $h^*_n$ specified above in \nCorollary~\\ref{cormain}. In particular, the simulations involved the bandwidth $\\hat{h}^*_n$, where $\\hat{h}^*_n$ was taken to be \nthe sample analogue of $h^*_n$. As such, $\\hat{h}^*_n$ was set \nto decay at the MSE-optimal rate of $n^{-1\/5}$ corresponding to the order of kernel employed (i.e., $p=2$), while its \nleading constant was specified as the sample analogue of the leading constant appearing in $h^*_n$. The unknown \nparameters appearing in the leading constant of $h^*_n$ were estimated via auxiliary locally cubic regressions as \ndescribed in \\citet[\\S 4.3]{FanGijbels96}. The sensitivity of the proposed estimator's sampling behaviour to the choice \nof bandwidth was also assessed by considering implementations using the bandwidth settings $h_n=(2\/3)\\hat{h}_{n}^*$ and \n$h_n=(3\/2)\\hat{h}_{n}^*$. \n\nComparisons of the corresponding sampling behaviours in terms of squared bias, standard deviation and root mean-squared \nerror (RMSE) over 1000 Monte Carlo replications for values of \n$(\\rho,\\alpha)\\in\\{0,.25,.50,.75,.95\\}\\times\\{2.00,1.50,1.25,1.00\\}$ are presented below for samples of size $n=100$ in \nTables~\\ref{dgp1n100mhat1.p2} and \\ref{dgp2n100mhat1.p2} for DGP1 and DGP2, respectively. The corresponding results for samples of size $n=400$ \nappear in Tables~\\ref{dgp1n400mhat1.p2} and \\ref{dgp2n400mhat1.p2}. The RMSE figures displayed \nin these tables are multiplied by $\\sqrt{n}$ in order to provide a clearer indication of \nthe rate of convergence of the proposed estimator. The increases in $\\sqrt{n}\\times$ RMSE as one moves from simulated \nsamples of size $n=100$ to those of $n=400$ indicate the slower-than-parametric rate of convergence of the proposed \nestimator regardless of the precise setting of $(\\rho,\\alpha)$. It is also clear that the rate of convergence of \nthe proposed estimator is slower for settings of $(\\rho,\\alpha)$ with one or both of $\\rho$ and $\\alpha$ close to one. 
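For concreteness, a single replication under DGP1 can be generated along the following lines. The sketch is a schematic reconstruction of the design described above rather than the code used for the reported experiments; in particular, the outcome-equation regressors are suppressed, which is innocuous here because $\\bm{\\beta}_0$ is held at its true value, so that only $D_i\\left(Y_i-\\bm{X}_i^{\\top}\\bm{\\beta}_0\\right)=D_i\\left(\\theta_0+U_i\\right)$ enters the intercept estimators.
\\begin{verbatim}
import numpy as np

def simulate_dgp1(n, l, alpha, rho, theta0=1.0, rng=None):
    # One replication from DGP1: Z ~ N(0, I_l) independent of V ~ N(0, 1),
    # gamma_0 = (sqrt(alpha/l), ..., sqrt(alpha/l))', and U = rho*V + E with
    # E ~ N(0, 1 - rho^2), so Corr(U, V) = rho and Var(Z' gamma_0) = alpha.
    rng = np.random.default_rng() if rng is None else rng
    Z = rng.standard_normal((n, l))
    V = rng.standard_normal(n)
    U = rho * V + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    gamma0 = np.full(l, np.sqrt(alpha / l))
    D = (Z @ gamma0 >= V).astype(float)
    Y = D * (theta0 + U)   # outcome-equation regressors suppressed
    return D, Z, Y
\\end{verbatim}
Replications under DGP2 differ only in that the components of $\\bm{Z}$ are drawn as iid standard Cauchy variates, $V$ is drawn from the Pareto density $g_{0V}$ above, and $\\bm{\\gamma}_0$ places all of its weight on the final component of $\\bm{Z}$.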
\nIn addition, and as predicted by Theorem~\\ref{mainthm} above, one can see that the effect of variation in \n$(\\rho,\\alpha)$ on the squared bias of the proposed estimator is more pronounced than the corresponding effect \non the variance; indeed the standard deviation of the proposed estimator tends to be relatively stable \nover the various settings of $(\\rho,\\alpha)$ used in the simulations, particularly for values of \n$(\\rho,\\alpha)\\in\\{.25,.50,.75\\}\\times\\{2.00,1.50,1.25\\}$. Finally, the tabulated results indicate that the sampling performance of $\\hat{\\theta}_n$ \nis not sensitive to moderate variations in bandwidth. \n\n\\FloatBarrier\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP1 (bivariate normal), $n=100$, 1000 replications. Proposed estimator with second-order Epanechnikov kernel ($p=2$). (RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp1n100mhat1.p2}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(optimal bandwidth)} \\\\\n0.0000 & 0.0081 & 0.1859 & 2.0645 & 0.0018 & 0.1872 & 1.9196 & 0.0008 & 0.1951 & 1.9723 & 0.0006 & 0.1900 & 1.9154 \\\\ \n0.2500 & 0.0003 & 0.1890 & 1.8985 & 0.0004 & 0.1857 & 1.8674 & 0.0023 & 0.1911 & 1.9695 & 0.0064 & 0.1802 & 1.9724 \\\\ \n0.5000 & 0.0014 & 0.1812 & 1.8503 & 0.0071 & 0.1861 & 2.0432 & 0.0127 & 0.1802 & 2.1253 & 0.0227 & 0.1779 & 2.3324 \\\\ \n0.7500 & 0.0092 & 0.1768 & 2.0125 & 0.0216 & 0.1784 & 2.3125 & 0.0353 & 0.1761 & 2.5750 & 0.0459 & 0.1687 & 2.7272 \\\\ \n0.9500 & 0.0215 & 0.1821 & 2.3373 & 0.0404 & 0.1665 & 2.6105 & 0.0548 & 0.1730 & 2.9114 & 0.0812 & 0.1626 & 3.2811 \\\\ \n\\multicolumn{13}{c}{($2\/3\\times$ optimal bandwidth)} \\\\\n0.0000 & 0.0081 & 0.1860 & 2.0657 & 0.0018 & 0.1873 & 1.9206 & 0.0008 & 0.1952 & 1.9733 & 0.0006 & 0.1901 & 1.9164 \\\\ \n0.2500 & 0.0003 & 0.1891 & 1.8996 & 0.0004 & 0.1858 & 1.8682 & 0.0023 & 0.1912 & 1.9702 & 0.0064 & 0.1803 & 1.9727 \\\\ \n0.5000 & 0.0014 & 0.1813 & 1.8507 & 0.0071 & 0.1862 & 2.0430 & 0.0126 & 0.1803 & 2.1243 & 0.0227 & 0.1780 & 2.3314 \\\\ \n0.7500 & 0.0091 & 0.1770 & 2.0109 & 0.0215 & 0.1786 & 2.3100 & 0.0351 & 0.1762 & 2.5724 & 0.0457 & 0.1689 & 2.7242 \\\\ \n0.9500 & 0.0212 & 0.1822 & 2.3336 & 0.0401 & 0.1666 & 2.6058 & 0.0545 & 0.1732 & 2.9068 & 0.0809 & 0.1628 & 3.2764 \\\\ \n\\multicolumn{13}{c}{($3\/2\\times$ optimal bandwidth)} \\\\\n0.0000 & 0.0081 & 0.1858 & 2.0640 & 0.0018 & 0.1872 & 1.9192 & 0.0008 & 0.1951 & 1.9719 & 0.0006 & 0.1900 & 1.9150 \\\\ \n0.2500 & 0.0003 & 0.1890 & 1.8980 & 0.0004 & 0.1857 & 1.8670 & 0.0023 & 0.1910 & 1.9692 & 0.0064 & 0.1802 & 1.9723 \\\\ \n0.5000 & 0.0014 & 0.1811 & 1.8502 & 0.0071 & 0.1861 & 2.0433 & 0.0127 & 0.1802 & 2.1257 & 0.0228 & 0.1779 & 2.3328 \\\\ \n0.7500 & 0.0093 & 0.1768 & 2.0132 & 0.0217 & 0.1784 & 2.3136 & 0.0354 & 0.1760 & 2.5762 & 0.0460 & 0.1687 & 2.7285 \\\\ \n0.9500 & 0.0216 & 0.1820 & 2.3389 & 0.0406 & 0.1664 & 2.6126 & 0.0550 & 0.1729 & 2.9134 & 0.0814 & 0.1625 & 3.2831 \\\\ \n\\hline\n\\end{tabular}\n} \n\\end{table}\n\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP2 (non-normal), $n=100$, 1000 replications. Proposed estimator with second-order Epanechnikov kernel ($p=2$) and optimal bandwidth. 
(RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp2n100mhat1.p2}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(optimal bandwidth)} \\\\\n0.0000 & 0.1541 & 0.1719 & 4.2854 & 0.1892 & 0.1805 & 4.7097 & 0.2268 & 0.1708 & 5.0593 & 0.2714 & 0.1612 & 5.4535 \\\\ \n0.2500 & 0.0202 & 0.2119 & 2.5523 & 0.0394 & 0.2070 & 2.8669 & 0.0445 & 0.2229 & 3.0684 & 0.0636 & 0.2280 & 3.3994 \\\\ \n0.5000 & 0.0079 & 0.2391 & 2.5503 & 0.0071 & 0.2741 & 2.8687 & 0.0006 & 0.2942 & 2.9521 & 0.0000 & 0.3010 & 3.0102 \\\\ \n0.7500 & 0.1113 & 0.2796 & 4.3536 & 0.0972 & 0.3386 & 4.6030 & 0.0735 & 0.3288 & 4.2615 & 0.0585 & 0.4767 & 5.3448 \\\\ \n0.9500 & 0.2747 & 0.3174 & 6.1273 & 0.2510 & 0.3605 & 6.1721 & 0.2317 & 0.4509 & 6.5956 & 0.1950 & 0.4976 & 6.6528 \\\\ \n\\multicolumn{13}{c}{($2\/3\\times$ optimal bandwidth)} \\\\\n0.0000 & 0.1522 & 0.1725 & 4.2652 & 0.1871 & 0.1813 & 4.6906 & 0.2247 & 0.1715 & 5.0405 & 0.2693 & 0.1619 & 5.4357 \\\\ \n0.2500 & 0.0190 & 0.2125 & 2.5336 & 0.0376 & 0.2078 & 2.8422 & 0.0426 & 0.2242 & 3.0466 & 0.0613 & 0.2293 & 3.3752 \\\\ \n0.5000 & 0.0091 & 0.2396 & 2.5796 & 0.0084 & 0.2754 & 2.9025 & 0.0010 & 0.2960 & 2.9769 & 0.0000 & 0.3032 & 3.0319 \\\\ \n0.7500 & 0.1178 & 0.2799 & 4.4284 & 0.1036 & 0.3404 & 4.6850 & 0.0789 & 0.3307 & 4.3392 & 0.0632 & 0.4805 & 5.4233 \\\\ \n0.9500 & 0.2874 & 0.3178 & 6.2324 & 0.2633 & 0.3623 & 6.2816 & 0.2435 & 0.4536 & 6.7023 & 0.2055 & 0.5015 & 6.7598 \\\\\n\\multicolumn{13}{c}{($3\/2\\times$ optimal bandwidth)} \\\\ \n0.0000 & 0.1550 & 0.1716 & 4.2942 & 0.1902 & 0.1801 & 4.7180 & 0.2277 & 0.1705 & 5.0675 & 0.2724 & 0.1609 & 5.4612 \\\\ \n0.2500 & 0.0208 & 0.2116 & 2.5607 & 0.0401 & 0.2066 & 2.8777 & 0.0453 & 0.2224 & 3.0780 & 0.0646 & 0.2274 & 3.4100 \\\\ \n0.5000 & 0.0073 & 0.2389 & 2.5382 & 0.0066 & 0.2736 & 2.8547 & 0.0005 & 0.2934 & 2.9420 & 0.0001 & 0.3000 & 3.0015 \\\\ \n0.7500 & 0.1086 & 0.2795 & 4.3217 & 0.0945 & 0.3379 & 4.5683 & 0.0712 & 0.3280 & 4.2285 & 0.0565 & 0.4750 & 5.3115 \\\\ \n0.9500 & 0.2693 & 0.3173 & 6.0825 & 0.2458 & 0.3598 & 6.1255 & 0.2268 & 0.4497 & 6.5502 & 0.1906 & 0.4960 & 6.6072 \\\\ \n \\hline\n\\end{tabular}\n} \n\\end{table}\n\n\\FloatBarrier\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP1 (bivariate normal), $n=400$, 1000 replications. Proposed estimator with second-order Epanechnikov kernel ($p=2$). 
(RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp1n400mhat1.p2}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(optimal bandwidth)} \\\\\n0.0000 & 0.0073 & 0.0972 & 2.5875 & 0.0021 & 0.0917 & 2.0524 & 0.0007 & 0.0942 & 1.9547 & 0.0000 & 0.0933 & 1.8683 \\\\ \n0.2500 & 0.0007 & 0.0924 & 1.9180 & 0.0001 & 0.0966 & 1.9371 & 0.0017 & 0.0916 & 2.0082 & 0.0052 & 0.0920 & 2.3367 \\\\ \n0.5000 & 0.0011 & 0.0962 & 2.0330 & 0.0057 & 0.0908 & 2.3585 & 0.0112 & 0.0959 & 2.8557 & 0.0199 & 0.0917 & 3.3640 \\\\ \n0.7500 & 0.0082 & 0.0919 & 2.5766 & 0.0194 & 0.0924 & 3.3416 & 0.0293 & 0.0894 & 3.8594 & 0.0460 & 0.0889 & 4.6434 \\\\ \n0.9500 & 0.0178 & 0.0922 & 3.2404 & 0.0345 & 0.0878 & 4.1101 & 0.0505 & 0.0840 & 4.7995 & 0.0731 & 0.0845 & 5.6663 \\\\ \n\\multicolumn{13}{c}{($2\/3\\times$ optimal bandwidth)} \\\\\n0.0000 & 0.0073 & 0.0972 & 2.5890 & 0.0021 & 0.0918 & 2.0535 & 0.0007 & 0.0942 & 1.9557 & 0.0000 & 0.0934 & 1.8694 \\\\ \n0.2500 & 0.0007 & 0.0924 & 1.9204 & 0.0001 & 0.0966 & 1.9382 & 0.0017 & 0.0916 & 2.0077 & 0.0052 & 0.0921 & 2.3354 \\\\ \n0.5000 & 0.0011 & 0.0963 & 2.0315 & 0.0056 & 0.0908 & 2.3546 & 0.0111 & 0.0960 & 2.8519 & 0.0198 & 0.0918 & 3.3594 \\\\ \n0.7500 & 0.0080 & 0.0920 & 2.5692 & 0.0192 & 0.0924 & 3.3330 & 0.0291 & 0.0894 & 3.8510 & 0.0458 & 0.0891 & 4.6352 \\\\ \n0.9500 & 0.0175 & 0.0923 & 3.2283 & 0.0342 & 0.0879 & 4.0973 & 0.0502 & 0.0841 & 4.7867 & 0.0727 & 0.0846 & 5.6532 \\\\ \n\\multicolumn{13}{c}{($3\/2\\times$ optimal bandwidth)} \\\\ \n0.0000 & 0.0073 & 0.0971 & 2.5869 & 0.0021 & 0.0917 & 2.0520 & 0.0007 & 0.0941 & 1.9542 & 0.0000 & 0.0933 & 1.8678 \\\\ \n0.2500 & 0.0007 & 0.0924 & 1.9170 & 0.0001 & 0.0965 & 1.9367 & 0.0017 & 0.0916 & 2.0084 & 0.0052 & 0.0920 & 2.3372 \\\\ \n0.5000 & 0.0011 & 0.0962 & 2.0337 & 0.0057 & 0.0908 & 2.3602 & 0.0112 & 0.0959 & 2.8574 & 0.0199 & 0.0917 & 3.3660 \\\\ \n0.7500 & 0.0082 & 0.0919 & 2.5799 & 0.0195 & 0.0923 & 3.3453 & 0.0293 & 0.0893 & 3.8632 & 0.0461 & 0.0889 & 4.6471 \\\\ \n0.9500 & 0.0179 & 0.0921 & 3.2458 & 0.0346 & 0.0878 & 4.1158 & 0.0507 & 0.0840 & 4.8052 & 0.0733 & 0.0844 & 5.6721 \\\\ \n\\hline\n\\end{tabular}\n} \n\\end{table}\n\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP2 (non-normal), $n=400$, 1000 replications. Proposed estimator with second-order Epanechnikov kernel ($p=2$). 
(RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp2n400mhat1.p2}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(optimal bandwidth)} \\\\\n0.0000 & 0.1460 & 0.0908 & 7.8539 & 0.1864 & 0.0884 & 8.8133 & 0.2179 & 0.0881 & 9.5014 & 0.2658 & 0.0831 & 10.4445 \\\\ \n0.2500 & 0.0184 & 0.1045 & 3.4218 & 0.0317 & 0.1053 & 4.1369 & 0.0426 & 0.1153 & 4.7277 & 0.0651 & 0.1158 & 5.6037 \\\\ \n0.5000 & 0.0095 & 0.1255 & 3.1783 & 0.0048 & 0.1261 & 2.8790 & 0.0025 & 0.1498 & 3.1582 & 0.0004 & 0.1898 & 3.8157 \\\\ \n0.7500 & 0.1296 & 0.1433 & 7.7487 & 0.1058 & 0.1769 & 7.4044 & 0.0912 & 0.1956 & 7.1956 & 0.0686 & 0.2002 & 6.5943 \\\\ \n0.9500 & 0.2897 & 0.1551 & 11.2027 & 0.2707 & 0.1828 & 11.0287 & 0.2632 & 0.2245 & 11.1992 & 0.2507 & 0.2913 & 11.5847 \\\\ \n\\multicolumn{13}{c}{($2\/3\\times$ optimal bandwidth)} \\\\\n0.0000 & 0.1428 & 0.0913 & 7.7763 & 0.1829 & 0.0891 & 8.7369 & 0.2143 & 0.0888 & 9.4263 & 0.2621 & 0.0837 & 10.3754 \\\\ \n0.2500 & 0.0163 & 0.1050 & 3.3076 & 0.0289 & 0.1061 & 4.0066 & 0.0392 & 0.1164 & 4.5947 & 0.0611 & 0.1170 & 5.4683 \\\\ \n0.5000 & 0.0120 & 0.1259 & 3.3398 & 0.0068 & 0.1271 & 3.0284 & 0.0040 & 0.1515 & 3.2815 & 0.0010 & 0.1918 & 3.8896 \\\\ \n0.7500 & 0.1426 & 0.1434 & 8.0783 & 0.1180 & 0.1784 & 7.7409 & 0.1022 & 0.1968 & 7.5083 & 0.0780 & 0.2022 & 6.8948 \\\\ \n0.9500 & 0.3146 & 0.1552 & 11.6389 & 0.2946 & 0.1837 & 11.4600 & 0.2865 & 0.2260 & 11.6209 & 0.2723 & 0.2939 & 11.9783 \\\\ \n\\multicolumn{13}{c}{($3\/2\\times$ optimal bandwidth)} \\\\ \n0.0000 & 0.1473 & 0.0905 & 7.8874 & 0.1879 & 0.0882 & 8.8462 & 0.2195 & 0.0878 & 9.5336 & 0.2674 & 0.0828 & 10.4742 \\\\ \n0.2500 & 0.0193 & 0.1042 & 3.4714 & 0.0329 & 0.1049 & 4.1929 & 0.0440 & 0.1149 & 4.7847 & 0.0668 & 0.1153 & 5.6615 \\\\ \n0.5000 & 0.0085 & 0.1254 & 3.1136 & 0.0041 & 0.1256 & 2.8209 & 0.0020 & 0.1491 & 3.1118 & 0.0002 & 0.1890 & 3.7895 \\\\ \n0.7500 & 0.1243 & 0.1432 & 7.6116 & 0.1009 & 0.1763 & 7.2650 & 0.0868 & 0.1950 & 7.0661 & 0.0649 & 0.1993 & 6.4701 \\\\ \n0.9500 & 0.2797 & 0.1551 & 11.0222 & 0.2610 & 0.1824 & 10.8502 & 0.2537 & 0.2239 & 11.0246 & 0.2419 & 0.2902 & 11.4212 \\\\ \n \\hline\n\\end{tabular}\n} \n\\end{table}\n\nI next consider the simulated performances over 1000 Monte Carlo replications across DGPs, sample sizes and settings \nof $(\\rho,\\alpha)$ of several alternative estimators of the intercept $\\theta_0$. The results for samples of size $n=100$ are summarized below in \nTables~\\ref{dgp1n100balt} and \\ref{dgp2n100balt} for DGPs 1 and 2, respectively. The corresponding results for samples of size $n=400$ \nappear in Tables~\\ref{dgp1n400balt} and \\ref{dgp2n400balt}. The standard Heckman 2-step estimator is found under DGP1 to have a performance in terms of RMSE that is comparable to that of the \nproposed estimator in (\\ref{mhat}). The proposed estimator under DGP2, on the other hand, is found to dominate in terms of RMSE the performance \nof the following alternative estimators under all combinations of $(\\rho,\\alpha)$ considered: \n\n\\begin{itemize}\n\\item (OLS) The ordinary least squares estimator of the intercept parameter using only those \nobservations for which $D=1$. \n\nThese results are consistent with well established theory. 
In particular, Tables~\\ref{dgp1n100balt}--\\ref{dgp2n100balt} indicate the good performance \nof OLS when $\\rho=0$ and the poor performance of OLS when $\\rho>0$. In addition, the decrease in $\\sqrt{n}\\times$RMSE for the OLS estimator when $\\rho=0$ as \none moves from $n=100$ to $n=400$ is suggestive of superefficiency, while the increases in $\\sqrt{n}\\times$RMSE \nwhen $\\rho>0$ are consistent with OLS being inconsistent under $\\rho>0$.\n\n\\item (2-step) The estimator of the intercept based on the well-known procedure of \n\\citet{Heckman76,Heckman79}, which is known to be $\\sqrt{n}$-consistent if $[\\begin{array}{cc} U & V\\end{array}]$ is \nbivariate normal (i.e., generated according to DGP1). \n\nThe results for DGP1 given in Table~\\ref{dgp1n100balt} below are consistent with expectations; in particular, the \n2-step procedure exhibits an RMSE that is stable across the various configurations of $(\\rho,\\alpha)$ considered. In addition, a comparison \nof the relevant sections of Table~\\ref{dgp1n100balt} and Table~\\ref{dgp1n400balt} highlights the stability \nof $\\sqrt{n}\\times$RMSE as one moves from $n=100$ to $n=400$, which is consistent with the $\\sqrt{n}$-consistency of the procedure under DGP1.\n\nThe results for DGP2 appearing in Table~\\ref{dgp2n100balt}, on the other hand, show that the performance of \nthe 2-step procedure can vary dramatically with $(\\rho,\\alpha)$. A comparison of the $\\sqrt{n}\\times$RMSE figures in \nTable~\\ref{dgp2n400balt} with those in Table~\\ref{dgp2n100balt} also suggests that the 2-step procedure under DGP2 is \nsuperefficient at $\\rho=0$ and converges at a slower-than-parametric rate for model specifications with $\\rho>0$.\n\n\\item (H90) The intercept estimator suggested by \\citet{Heckman90}, which in the context of the model specified in (\\ref{outcome})--(\\ref{observation}) \nhas the form\n\\begin{equation}\\label{h90}\n\\hat{\\theta}_{H90}\\equiv\\frac{\\sum_{i=1}^n D_i\\left(Y_i-\\bm{X}^{\\top}_i\\hat{\\bm{\\beta}}\\right)1\\left\\{\\bm{Z}^{\\top}_i\\hat{\\bm{\\gamma}}>b_n\\right\\}}{\\sum_{i=1}^n D_i 1\\left\\{\\bm{Z}^{\\top}_i\\hat{\\bm{\\gamma}}>b_n\\right\\}}\n\\end{equation}\nfor some sequence of positive constants $\\{b_n\\}$ with $b_n\\to\\infty$ as $n\\to\\infty$. I present the results of \nsimulations in which the nuisance-parameter estimators $\\hat{\\bm{\\beta}}$ and $\\hat{\\bm{\\gamma}}$ are fixed at \nthe true values of the corresponding estimands. These results appear below in Tables~\\ref{dgp1n100balt}, \\ref{dgp2n100balt}, \\ref{dgp1n400balt} and \n\\ref{dgp2n400balt} for $b_n$ equal to the sample .95-quantile of $\\bm{Z}^{\\top}_i\\bm{\\gamma}_0$. \n\nTable~\\ref{dgp1n100balt} below indicates that the performance of $\\hat{\\theta}_{H90}$ is comparable to that of the 2-step procedure under DGP1 in that its \nRMSE is stable over changes in $(\\rho,\\alpha)$. The stability of the $\\sqrt{n}\\times$RMSE figures as one moves from $n=100$ to \n$n=400$, which is evident from a comparison of Table~\\ref{dgp1n100balt} with Table~\\ref{dgp1n400balt}, also suggests that \n$\\hat{\\theta}_{H90}$ may be $\\sqrt{n}$-consistent under DGP1.\n\nTable~\\ref{dgp2n100balt}, on the other hand, shows that the performance of $\\hat{\\theta}_{H90}$ can deteriorate dramatically as $\\rho$ moves away from \nzero, although its performance under DGP2 appears to be unaffected by variation in $\\alpha$ for any given value of $\\rho$. 
The $\\sqrt{n}\\times$RMSE \nfigures in Table~\\ref{dgp2n100balt} and Table~\\ref{dgp2n400balt} indicate that $\\hat{\\theta}_{H90}$ has a \nslower-than-parametric rate of convergence under DGP2 that is highly sensitive to variation in $\\rho$ but relatively insensitive to variation in \n$\\alpha$.\n\n\\item (AS98) The intercept estimator developed by \\citet{AndrewsSchafgans98} as a generalization of the procedure of \n\\citet{Heckman90}. The AS98 estimator in the context of the \nmodel given above in (\\ref{outcome})--(\\ref{observation}) has the form\n\\begin{equation}\\label{as98}\n\\hat{\\theta}_{AS98}\\equiv\\frac{\\sum_{i=1}^n D_i\\left(Y_i-\\bm{X}^{\\top}_i\\hat{\\bm{\\beta}}\\right) s\\left(\\bm{Z}_i^{\\top}\\hat{\\bm{\\gamma}}-b_n\\right)}{\\sum_{i=1}^n D_i s\\left(\\bm{Z}_i^{\\top}\\hat{\\bm{\\gamma}}-b_n\\right)},\n\\end{equation} \nwhere, following \\citet[eq. (4.1)]{AndrewsSchafgans98}, I set\n\\begin{equation}\\label{as98s}\ns(u)=\\left\\{\\begin{array}{ccc} 1-\\exp\\left(-\\frac{u}{\\tau-u}\\right) &,&\\,u\\in (0,\\tau)\\\\\n0 &,&\\,u\\le 0\\\\\n1 &,&\\,u\\ge \\tau\\end{array}.\\right.\n\\end{equation}\nNote that setting $\\tau=0$ reduces $\\hat{\\theta}_{AS98}$ to $\\hat{\\theta}_{H90}$ as given earlier in (\\ref{h90}). \nIn addition, the tuning parameter $b_n$ in (\\ref{as98}), as it does for $\\hat{\\theta}_{H90}$ in (\\ref{h90}) above, refers \nto a sequence of positive constants with $b_n\\to\\infty$ as $n\\to\\infty$.\n\nI present, in common with the other simulations reported here, results for $\\hat{\\theta}_{AS98}$ in which the \nnuisance-parameter estimators $\\hat{\\bm{\\beta}}$ and $\\hat{\\bm{\\gamma}}$ are fixed at the true values of the corresponding \nestimands. These simulations also involve setting the nuisance parameter $\\tau$ in (\\ref{as98s}) to the sample \nmedian of $\\bm{Z}^{\\top}_i\\bm{\\gamma}_0$ and the smoothing parameter $b_n$ in (\\ref{as98}) to the sample .95-quantile \nof $\\bm{Z}^{\\top}_i\\bm{\\gamma}_0$. The corresponding results appear below in \nTables~\\ref{dgp1n100balt}--\\ref{dgp2n100balt} and also in Tables~\\ref{dgp1n400balt}--\\ref{dgp2n400balt}.\n\nIt is clear from Tables~\\ref{dgp1n100balt}--\\ref{dgp2n100balt} below that $\\hat{\\theta}_{AS98}$ is numerically \nunstable under DGP1 but numerically stable under DGP2. Table~\\ref{dgp2n100balt} also indicates the \nsensitivity of the performance of $\\hat{\\theta}_{AS98}$ to variation in $(\\rho,\\alpha)$. A comparison of Table~\\ref{dgp2n100balt} with \nTable~\\ref{dgp2n400balt} also underscores the slower-than-parametric rate of convergence of $\\hat{\\theta}_{AS98}$.\n\\end{itemize}\n\n\\FloatBarrier\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP1 (bivariate normal), $n=100$, 1000 replications. Alternative estimators. 
(RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp1n100balt}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(OLS)} \\\\\n0.0000 & 0.0000 & 0.1788 & 1.7882 & 0.0000 & 0.1693 & 1.6936 & 0.0000 & 0.1728 & 1.7280 & 0.0001 & 0.1746 & 1.7495 \\\\ \n0.2500 & 0.0219 & 0.1727 & 2.2745 & 0.0256 & 0.1698 & 2.3326 & 0.0265 & 0.1795 & 2.4239 & 0.0306 & 0.1651 & 2.4053 \\\\ \n0.5000 & 0.0936 & 0.1656 & 3.4796 & 0.1029 & 0.1593 & 3.5820 & 0.1152 & 0.1605 & 3.7550 & 0.1218 & 0.1565 & 3.8253 \\\\ \n0.7500 & 0.2162 & 0.1582 & 4.9117 & 0.2378 & 0.1609 & 5.1352 & 0.2559 & 0.1464 & 5.2658 & 0.2712 & 0.1431 & 5.4004 \\\\ \n0.9500 & 0.3443 & 0.1458 & 6.0461 & 0.3887 & 0.1362 & 6.3815 & 0.4124 & 0.1362 & 6.5651 & 0.4361 & 0.1325 & 6.7356 \\\\ \n\\multicolumn{13}{c}{(Heckman 2-step)} \\\\\n0.0000 & 0.0000 & 0.3007 & 3.0068 & 0.0001 & 0.3299 & 3.3007 & 0.0000 & 0.3503 & 3.5025 & 0.0003 & 0.3886 & 3.8898 \\\\ \n0.2500 & 0.0011 & 0.3143 & 3.1596 & 0.0023 & 0.3265 & 3.2994 & 0.0009 & 0.3564 & 3.5757 & 0.0016 & 0.3650 & 3.6722 \\\\ \n0.5000 & 0.0031 & 0.2963 & 3.0150 & 0.0046 & 0.3320 & 3.3892 & 0.0058 & 0.3338 & 3.4248 & 0.0064 & 0.3712 & 3.7973 \\\\ \n0.7500 & 0.0083 & 0.2969 & 3.1056 & 0.0106 & 0.3148 & 3.3116 & 0.0111 & 0.3270 & 3.4354 & 0.0111 & 0.3509 & 3.6635 \\\\ \n0.9500 & 0.0171 & 0.2678 & 2.9806 & 0.0169 & 0.2952 & 3.2264 & 0.0131 & 0.3221 & 3.4179 & 0.0207 & 0.3052 & 3.3747 \\\\ \n\\multicolumn{13}{c}{(\\citet{Heckman90} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & 0.0003 & 0.4509 & 4.5124 & 0.0002 & 0.4524 & 4.5261 & 0.0004 & 0.4557 & 4.5611 & 0.0001 & 0.4689 & 4.6895 \\\\ \n0.2500 & 0.0004 & 0.4488 & 4.4929 & 0.0006 & 0.4507 & 4.5135 & 0.0003 & 0.4488 & 4.4918 & 0.0005 & 0.4513 & 4.5180 \\\\ \n0.5000 & 0.0000 & 0.4481 & 4.4815 & 0.0011 & 0.4454 & 4.4661 & 0.0015 & 0.4423 & 4.4403 & 0.0002 & 0.4342 & 4.3443 \\\\ \n0.7500 & 0.0011 & 0.4516 & 4.5283 & 0.0001 & 0.4372 & 4.3732 & 0.0025 & 0.4379 & 4.4076 & 0.0040 & 0.4315 & 4.3615 \\\\ \n0.9500 & 0.0030 & 0.4531 & 4.5641 & 0.0020 & 0.4363 & 4.3863 & 0.0030 & 0.4326 & 4.3604 & 0.0060 & 0.4327 & 4.3953 \\\\ \n\\multicolumn{13}{c}{(\\citet{AndrewsSchafgans98} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf \\\\ \n0.2500 & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf \\\\ \n0.5000 & 0.0000 & 0.5650 & 5.6499 & Inf & Inf & Inf & Inf & Inf & Inf & 0.0000 & 0.5579 & 5.5796 \\\\ \n0.7500 & 0.0019 & 0.5757 & 5.7739 & Inf & Inf & Inf & 0.0025 & 0.5465 & 5.4883 & Inf & Inf & Inf \\\\ \n0.9500 & Inf & Inf & Inf & 0.0017 & 0.5772 & 5.7865 & 0.0007 & 0.5559 & 5.5656 & 0.0036 & 0.5365 & 5.3982 \\\\ \n \\hline\n\\end{tabular}\n} \n\\end{table}\n\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP2 (non-normal), $n=100$, 1000 replications. Alternative estimators. 
(RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp2n100balt}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(OLS)} \\\\\n0.0000 & 0.0001 & 0.4821 & 4.8221 & 0.0000 & 0.3909 & 3.9091 & 0.0014 & 1.0831 & 10.8368 & 0.0005 & 0.6892 & 6.8959 \\\\ \n0.2500 & 0.1510 & 0.3288 & 5.0909 & 0.1736 & 0.4121 & 5.8603 & 0.2394 & 0.7670 & 9.0977 & 0.2633 & 2.1317 & 21.9257 \\\\ \n0.5000 & 0.5682 & 0.3763 & 8.4250 & 0.7799 & 0.7149 & 11.3620 & 0.8475 & 0.4896 & 10.4266 & 1.1118 & 1.1903 & 15.9015 \\\\ \n0.7500 & 1.2924 & 0.2869 & 11.7248 & 1.7087 & 0.4380 & 13.7860 & 1.8913 & 1.8977 & 23.4365 & 2.3993 & 1.4258 & 21.0529 \\\\ \n0.9500 & 2.1300 & 0.3085 & 14.9169 & 2.6775 & 0.5903 & 17.3954 & 3.2066 & 1.2800 & 22.0111 & 3.6936 & 4.8277 & 51.9616 \\\\ \n\\multicolumn{13}{c}{(Heckman 2-step)} \\\\\n0.0000 & 0.0000 & 0.5162 & 5.1620 & 0.0002 & 0.5989 & 5.9913 & 0.0020 & 1.6850 & 16.8557 & 0.0055 & 1.1861 & 11.8844 \\\\ \n0.2500 & 0.1849 & 0.5086 & 6.6596 & 0.2433 & 0.6251 & 7.9627 & 0.3461 & 0.7657 & 9.6563 & 0.4278 & 1.4736 & 16.1227 \\\\ \n0.5000 & 0.7718 & 0.6221 & 10.7649 & 1.2359 & 1.9152 & 22.1445 & 1.3865 & 0.9240 & 14.9673 & 2.3721 & 2.9882 & 33.6172 \\\\ \n0.7500 & 1.6186 & 0.5040 & 13.6844 & 1.0515 & 16.7408 & 167.7214 & 3.2286 & 2.1692 & 28.1672 & 4.6267 & 2.7391 & 34.8276 \\\\ \n0.9500 & 2.7113 & 0.5127 & 17.2457 & 3.8929 & 0.9294 & 21.8101 & 5.4307 & 3.6028 & 42.9077 & 8.1794 & 4.7219 & 55.2045 \\\\\n\\multicolumn{13}{c}{(\\citet{Heckman90} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & 0.0001 & 0.4388 & 4.3889 & 0.0001 & 0.4741 & 4.7418 & 0.0004 & 0.4621 & 4.6254 & 0.0000 & 0.4743 & 4.7428 \\\\ \n0.2500 & 0.2121 & 0.4677 & 6.5640 & 0.3145 & 0.5099 & 7.5793 & 0.4265 & 0.6050 & 8.9023 & 0.5743 & 0.6945 & 10.2790 \\\\ \n0.5000 & 0.8606 & 0.4819 & 10.4540 & 1.3574 & 0.8578 & 14.4677 & 1.5614 & 0.8493 & 15.1087 & 2.1035 & 0.9589 & 17.3869 \\\\ \n0.7500 & 1.8488 & 0.5264 & 14.5806 & 2.9232 & 1.0078 & 19.8465 & 3.4719 & 0.9811 & 21.0580 & 5.0749 & 1.8515 & 29.1596 \\\\ \n0.9500 & 3.0237 & 0.7279 & 18.8508 & 4.3979 & 1.0647 & 23.5192 & 5.6603 & 1.6073 & 28.7121 & 7.8382 & 1.9172 & 33.9323 \\\\\n\\multicolumn{13}{c}{(\\citet{AndrewsSchafgans98} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & 0.0000 & 0.9234 & 9.2336 & 0.0015 & 1.0125 & 10.1326 & 0.0017 & 1.0037 & 10.0452 & 0.0001 & 0.9807 & 9.8079 \\\\ \n0.2500 & 0.2035 & 1.0138 & 11.0963 & 0.3754 & 1.1914 & 13.3967 & 0.6937 & 1.8209 & 20.0236 & 1.0490 & 2.1058 & 23.4170 \\\\ \n0.5000 & 0.8978 & 1.0810 & 14.3748 & 2.0336 & 2.9706 & 32.9515 & 2.3156 & 3.0879 & 34.4245 & 3.2494 & 2.6734 & 32.2432 \\\\ \n0.7500 & 1.8968 & 1.1576 & 17.9912 & Inf & Inf & Inf & 4.9692 & 2.9053 & 36.6200 & 10.1655 & 7.9701 & 85.8419 \\\\ \n0.9500 & 3.3558 & 2.1412 & 28.1792 & 5.0293 & 2.8442 & 36.2197 & 7.8162 & 3.3294 & 43.4752 & 13.5506 & 7.1477 & 80.3995 \\\\ \n \\hline\n\\end{tabular}\n} \n\\end{table}\n\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP1 (bivariate normal), $n=400$, 1000 replications. Alternative estimators. 
(RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp1n400balt}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(OLS)} \\\\\n0.0000 & 0.0000 & 0.0866 & 1.7318 & 0.0000 & 0.0809 & 1.6194 & 0.0000 & 0.0833 & 1.6676 & 0.0000 & 0.0756 & 1.5130 \\\\ \n0.2500 & 0.0232 & 0.0840 & 3.4805 & 0.0258 & 0.0823 & 3.6111 & 0.0289 & 0.0829 & 3.7818 & 0.0295 & 0.0779 & 3.7712 \\\\ \n0.5000 & 0.0935 & 0.0805 & 6.3227 & 0.1071 & 0.0772 & 6.7232 & 0.1149 & 0.0771 & 6.9534 & 0.1211 & 0.0750 & 7.1190 \\\\ \n0.7500 & 0.2163 & 0.0758 & 9.4248 & 0.2390 & 0.0712 & 9.8808 & 0.2530 & 0.0712 & 10.1604 & 0.2719 & 0.0684 & 10.5179 \\\\ \n0.9500 & 0.3417 & 0.0695 & 11.7732 & 0.3830 & 0.0664 & 12.4488 & 0.4087 & 0.0634 & 12.8487 & 0.4366 & 0.0624 & 13.2739 \\\\ \n\\multicolumn{13}{c}{(Heckman 2-step)} \\\\\n0.0000 & 0.0000 & 0.1552 & 3.1042 & 0.0000 & 0.1640 & 3.2810 & 0.0000 & 0.1784 & 3.5672 & 0.0000 & 0.1832 & 3.6645 \\\\ \n0.2500 & 0.0000 & 0.1522 & 3.0456 & 0.0001 & 0.1591 & 3.1876 & 0.0001 & 0.1720 & 3.4472 & 0.0001 & 0.1797 & 3.6009 \\\\ \n0.5000 & 0.0002 & 0.1525 & 3.0627 & 0.0005 & 0.1646 & 3.3198 & 0.0003 & 0.1709 & 3.4357 & 0.0002 & 0.1770 & 3.5502 \\\\ \n0.7500 & 0.0009 & 0.1435 & 2.9301 & 0.0006 & 0.1559 & 3.1585 & 0.0003 & 0.1641 & 3.2977 & 0.0011 & 0.1708 & 3.4785 \\\\ \n0.9500 & 0.0007 & 0.1368 & 2.7857 & 0.0008 & 0.1468 & 2.9915 & 0.0012 & 0.1558 & 3.1939 & 0.0010 & 0.1614 & 3.2917 \\\\ \n\\multicolumn{13}{c}{(\\citet{Heckman90} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & 0.0000 & 0.1623 & 3.2457 & 0.0000 & 0.1627 & 3.2548 & 0.0000 & 0.1653 & 3.3058 & 0.0000 & 0.1608 & 3.2164 \\\\ \n0.2500 & 0.0001 & 0.1585 & 3.1769 & 0.0001 & 0.1594 & 3.1950 & 0.0002 & 0.1566 & 3.1470 & 0.0005 & 0.1557 & 3.1449 \\\\ \n0.5000 & 0.0005 & 0.1582 & 3.1931 & 0.0010 & 0.1556 & 3.1755 & 0.0016 & 0.1647 & 3.3916 & 0.0028 & 0.1696 & 3.5550 \\\\ \n0.7500 & 0.0007 & 0.1569 & 3.1826 & 0.0016 & 0.1607 & 3.3140 & 0.0032 & 0.1557 & 3.3125 & 0.0070 & 0.1502 & 3.4401 \\\\ \n0.9500 & 0.0009 & 0.1539 & 3.1347 & 0.0027 & 0.1458 & 3.0967 & 0.0052 & 0.1487 & 3.3033 & 0.0098 & 0.1515 & 3.6168 \\\\ \n\\multicolumn{13}{c}{(\\citet{AndrewsSchafgans98} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & Inf & Inf & Inf & 0.0000 & 0.3118 & 6.2369 & Inf & Inf & Inf & 0.0000 & 0.2991 & 5.9817 \\\\ \n0.2500 & 0.0000 & 0.3043 & 6.0860 & 0.0000 & 0.3056 & 6.1114 & Inf & Inf & Inf & Inf & Inf & Inf \\\\ \n0.5000 & Inf & Inf & Inf & 0.0001 & 0.2973 & 5.9497 & Inf & Inf & Inf & 0.0004 & 0.3065 & 6.1438 \\\\ \n0.7500 & 0.0001 & 0.2888 & 5.7792 & 0.0000 & 0.3041 & 6.0817 & Inf & Inf & Inf & Inf & Inf & Inf \\\\ \n0.9500 & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & Inf & 0.0015 & 0.3021 & 6.0908 \\\\ \n \\hline\n\\end{tabular}\n} \n\\end{table}\n\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\centering\n\\caption{DGP2 (non-normal), $n=400$, 1000 replications. Alternative estimators. 
(RMSE is multiplied by $\\sqrt{n}$.)}\\label{dgp2n400balt}\n\\begin{tabular}{ccccccccccccc}\n \\hline\n\\multirow{2}{*}{$\\rho$} &\\multicolumn{3}{c}{$\\alpha=2.00$} &\\multicolumn{3}{c}{$\\alpha=1.50$} &\\multicolumn{3}{c}{$\\alpha=1.25$} &\\multicolumn{3}{c}{$\\alpha=1.00$}\\\\ \n & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE & sq bias & sd & RMSE \\\\ \n \\hline\n\\multicolumn{13}{c}{(OLS)} \\\\\n0.0000 & 0.0000 & 0.1225 & 2.4496 & 0.0000 & 0.1295 & 2.5902 & 0.0000 & 0.1356 & 2.7151 & 0.0000 & 0.1423 & 2.8469 \\\\ \n0.2500 & 0.1464 & 0.1253 & 8.0533 & 0.1855 & 0.1377 & 9.0428 & 0.2246 & 0.1549 & 9.9707 & 0.2653 & 0.1651 & 10.8174 \\\\ \n0.5000 & 0.5746 & 0.1254 & 15.3660 & 0.7346 & 0.1398 & 17.3680 & 0.8895 & 0.2137 & 19.3406 & 1.0931 & 0.2617 & 21.5551 \\\\ \n0.7500 & 1.3338 & 0.1182 & 23.2184 & 1.6652 & 0.1752 & 26.0457 & 1.9547 & 0.2239 & 28.3186 & 2.3994 & 0.2910 & 31.5215 \\\\ \n0.9500 & 2.1328 & 0.1128 & 29.2955 & 2.6396 & 0.1762 & 32.6839 & 3.1089 & 0.2503 & 35.6178 & 4.0168 & 0.4762 & 41.1997 \\\\ \n\\multicolumn{13}{c}{(Heckman 2-step)} \\\\\n0.0000 & 0.0000 & 0.1900 & 3.8004 & 0.0000 & 0.2107 & 4.2133 & 0.0002 & 0.2368 & 4.7430 & 0.0000 & 0.2641 & 5.2828 \\\\ \n0.2500 & 0.1966 & 0.1881 & 9.6325 & 0.2873 & 0.2342 & 11.6982 & 0.4031 & 0.3386 & 14.3913 & 0.6158 & 0.4610 & 18.2023 \\\\ \n0.5000 & 0.7901 & 0.1930 & 18.1916 & 1.1622 & 0.2667 & 22.2114 & 1.6124 & 0.5121 & 27.3834 & 2.4130 & 0.6840 & 33.9460 \\\\ \n0.7500 & 1.8144 & 0.2124 & 27.2728 & 2.6411 & 0.4291 & 33.6168 & 3.4999 & 0.5283 & 38.8794 & 5.4115 & 0.8450 & 49.4998 \\\\ \n0.9500 & 2.8707 & 0.2172 & 34.1632 & 4.1789 & 0.4178 & 41.7300 & 5.6244 & 0.6344 & 49.0992 & 9.1834 & 1.2123 & 65.2785 \\\\ \n\\multicolumn{13}{c}{(\\citet{Heckman90} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & 0.0000 & 0.1625 & 3.2499 & 0.0000 & 0.1668 & 3.3373 & 0.0001 & 0.1698 & 3.4017 & 0.0000 & 0.1703 & 3.4069 \\\\ \n0.2500 & 0.1881 & 0.1647 & 9.2789 & 0.2496 & 0.1838 & 10.6470 & 0.3165 & 0.2183 & 12.0686 & 0.4023 & 0.2231 & 13.4475 \\\\ \n0.5000 & 0.7417 & 0.1732 & 17.5695 & 0.9928 & 0.2013 & 20.3307 & 1.2631 & 0.3301 & 23.4275 & 1.6212 & 0.4082 & 26.7415 \\\\ \n0.7500 & 1.7112 & 0.1877 & 26.4303 & 2.2915 & 0.3216 & 30.9514 & 2.7843 & 0.3655 & 34.1634 & 3.5582 & 0.4415 & 38.7456 \\\\ \n0.9500 & 2.7126 & 0.1885 & 33.1547 & 3.6019 & 0.2978 & 38.4220 & 4.4380 & 0.4261 & 42.9862 & 6.0276 & 0.7106 & 51.1176 \\\\ \n\\multicolumn{13}{c}{(\\citet{AndrewsSchafgans98} ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\bm{\\gamma}_0}(.95)$))} \\\\\n0.0000 & 0.0000 & 0.4867 & 9.7345 & 0.0001 & 0.5337 & 10.6766 & 0.0001 & 0.5335 & 10.6730 & 0.0004 & 0.5566 & 11.1378 \\\\ \n0.2500 & 0.2272 & 0.5277 & 14.2224 & 0.4256 & 0.7807 & 20.3484 & 0.7890 & 1.5216 & 35.2385 & 1.1932 & 1.4466 & 36.2534 \\\\ \n0.5000 & 0.9464 & 0.7039 & 24.0155 & 1.6530 & 1.0790 & 33.5688 & 2.9148 & 2.8347 & 66.1824 & 5.3105 & 3.1842 & 78.6128 \\\\ \n0.7500 & 2.2403 & 0.9842 & 35.8267 & 3.9629 & 2.3050 & 60.9133 & 6.0330 & 2.7449 & 73.6684 & 10.2806 & 3.3502 & 92.7458 \\\\ \n0.9500 & 3.5332 & 0.9977 & 42.5610 & 5.7880 & 1.8579 & 60.7943 & 9.2207 & 3.1294 & 87.2098 & 21.6148 & 5.9786 & 151.4702 \\\\ \n \\hline\n\\end{tabular}\n} \n\\end{table}\n\nIn summary, the simulations presented here show the potential of the proposed estimator to exhibit good performance in \nterms of RMSE across two different parametric families of data-generating process, across variation in the degree to \nwhich the errors $U$ and $V$ are dependent and across variation in 
the extent to which the parameter of interest is identified. This assessment is \nunaffected by moderate variation in the estimated MSE-optimal bandwidth used to \nimplement the proposed estimator. Tables~\\ref{dgp1n100mhat1.p2} and \\ref{dgp2n100mhat1.p2} also support the conclusion of Theorem~\\ref{mainthm} \nin indicating the sensitivity of the bias of the proposed estimator to variation in the parameter $(\\rho,\\alpha)$ \nunder both DGP1 and DGP2. These results also show that the estimated MSE-optimal bandwidth used to implement the proposed \nestimator was effective in limiting the extent to which the RMSE of the proposed estimator was sensitive to variation in \n$(\\rho,\\alpha)$. In particular, the RMSE of the proposed estimator was found under DGP2 to dominate those of the other \nestimators considered.\n\n\n\\section{Empirical Example}\\label{ee}\n\n\\noindent This section reconsiders individual labour-market data from Malaysia that were originally \nanalyzed by \\citet{Schafgans00}. The estimator developed above is applied to the problem of \nestimating the extent of plausible gender wage discrimination in Malaysia using data from \nthe Second Malaysian Family Life Survey (MFLS2) conducted between August 1988 and January 1989. Inferences \navailable from the proposed estimator are compared with those obtained via \nthe same alternative estimators considered in Section~\\ref{mc}. The proposed estimator is found to \ngenerate inferences that differ significantly from those obtainable via these alternative procedures. All estimators applied to the MFLS2 data considered \nin this section were implemented in precisely the same way as in the simulation \nexperiments presented earlier.\n\nI consider a decomposition of the female--male log-wage difference for ethnic Malay workers. The \nconsideration of gender wage gaps for Malaysian workers of the same ethnicity is potentially important because of the differential treatment of \nMalays in the labour force after 1970 \\citep[see e.g.,][and references cited]{Schafgans00}. I follow \\citet{Schafgans00} by analyzing gender wage gaps in the MFLS2 using the basic decomposition \ntechnique of \\citet{Oaxaca73}. In particular, suppose that the generic model given above in \n(\\ref{outcome})--(\\ref{observation}) holds for both men and women, \ni.e., \n\\begin{eqnarray}\nY^*_j &=& \\theta_{j}+\\bm{X}^{\\top}_j\\bm{\\beta}_{j}+U_j,\\label{outcomej}\\\\\nD_j &=& 1\\left\\{\\bm{Z}^{\\top}_j\\bm{\\gamma}_{j}\\ge V_j\\right\\},\\label{selectionj}\\\\\nY_j &=& D_jY^*_j,\\label{observationj}\n\\end{eqnarray}\nwhere $Y^*_j$ is the natural logarithm of the offered average hourly wage, and where the index $j\\in\\{0,1\\}$ denotes a given gender. For $j\\in\\{0,1\\}$ let \n$\\bar{Y}_j\\equiv E\\left[\\left.Y_j\\right|D_j=1\\right]$, and let $\\bar{\\bm{X}}_j$ denote the average \n``endowments'' of wage-determining attributes for workers of gender $j$. 
The observed log-wage gap \n$\\bar{Y}_1-\\bar{Y}_0$ between the two genders can then be decomposed as\n\\begin{eqnarray}\n\\bar{Y}_1-\\bar{Y}_0 &=&\\left[\\left(\\theta_1-\\theta_0\\right)+\\bar{\\bm{X}}^{\\top}_0\\left(\\bm{\\beta}_1-\\bm{\\beta}_0\\right)\\right]+(\\bar{\\bm{X}}_1-\\bar{\\bm{X}}_0)^{\\top}\\bm{\\beta}_1\\nonumber\\\\\n\t& &+\\left(E\\left[\\left.U_1\\right|D_1=1\\right]-E\\left[\\left.U_0\\right|D_0=1\\right]\\right)\\label{decomp1}\\\\\n\t&=&\\left[\\left(\\theta_1-\\theta_0\\right)+\\bar{\\bm{X}}^{\\top}_1\\left(\\bm{\\beta}_1-\\bm{\\beta}_0\\right)\\right]+(\\bar{\\bm{X}}_1-\\bar{\\bm{X}}_0)^{\\top}\\bm{\\beta}_0\\nonumber\\\\\n\t& &+\\left(E\\left[\\left.U_1\\right|D_1=1\\right]-E\\left[\\left.U_0\\right|D_0=1\\right]\\right)\\label{decomp2}\\\\\n\t&\\equiv & A+B+C,\\label{decomp}\n\\end{eqnarray}\nwhere $A$ is that part of the gap due to differences in wage structures between genders;\n$B$ is due to observable differences between men and women in wage-determining \ncharacteristics and $C$ is the contribution of differential self-selection into the labour force. Following \n\\citet{Schafgans00} the quantity $\\bar{Y}_1-\\bar{Y}_0-C=A+B$ is referred to as the \n{\\it selection-corrected log-wage gap}.\n\nWage discrimination in favor of members of gender $j=1$ is empirically plausible if the overall log-wage gap \n$\\bar{Y}_1-\\bar{Y}_0$ cannot be entirely explained by differential self-selection into paid work, differences in \nobserved endowments or by differing returns to those endowments. Moreover, given the definitions of the \nquantities $A$ and $B$ appearing above in (\\ref{decomp}), the extent of plausible wage discrimination favoring gender 1 may be equated with the difference in \nintercepts $\\theta_1-\\theta_0$. \n\nThe analysis that follows considers a subset of the sample taken from the MFLS2 of 1988--89 that \nwas analyzed by \\citet{Schafgans00}. This particular dataset is publicly available from the {\\it Journal of Applied Econometrics} Data Archive \nat \\url{http:\/\/qed.econ.queensu.ca\/jae\/1998-v13.5\/schafgans\/}. Each observation in this sample corresponds to a member of the labour force. I specifically consider ethnic \nMalays residing in non-urban settings who were observed to have some level of unearned household income in terms of dividends, interests or rents, \nand who were also observed to have passed the highest level of schooling (i.e., primary on the one hand, or secondary or above) corresponding to the \nnumber of years of schooling observed. This subset of the MFLS2 consisted of 965 women and 878 men.\n\nI also use the same variable specifications used by \\citet{Schafgans00}. In particular, the outcome variable $Y^*_j$ is LWAGE, the log hourly real \nwage in the local currency deflated using the 1985 consumer price index. The selection variable $D_j$ is the indicator PAIDWORK for whether the \nindividual in question is in fact a wage worker. The exogenous variables appearing in the selection equations for each gender include UNEARN, a \nmeasure of household unearned income in terms of dividends, interest and rents; HOUSEH, the value of household \nreal estate owned, computed as the product of an indicator variable for house ownership and the cost of the house \nowned; and AMTLAND, the extent of household landholding in hundreds of acres. 
In addition, selection into wage \nwork is also assumed to be determined by AGE, in years; AGESQ, the square of AGE divided by 100; YPRIM, years of \nprimary schooling and YSEC, years of schooling at the secondary level or above. The variables appearing on the \nright-hand side of the outcome equations for each gender or ethnic group are AGE, AGESQ, YPRIM and YSEC. \n\\citet[Section~4]{Schafgans00} contains further details regarding variable definitions.\n\nFor each gender $j\\in\\{\\mbox{female, male}\\}$, the proposed estimator and that of \\citet{Heckman90} and \\citet{AndrewsSchafgans98} rely on \nthe preliminary procedure described in \\citet[p. 484--487]{Schafgans98} to estimate the nuisance parameters \n$\\bm{\\beta}_j$ and $\\bm{\\gamma}_j$ appearing in (\\ref{outcomej}) and (\\ref{selectionj}), respectively. This \ninvolves estimating the selection equation for each group via the method of \\citet{KleinSpady93} and estimating \nthe slope parameters in each outcome equation using the method of \\citet{Robinson88}. This is followed by \nestimation of the intercept parameter in each outcome equation via the proposed estimator. Standard errors are \ncalculated by bootstrapping with replacement with $B=200$ replications. \n\nEstimates of the outcome-equation \nparameters obtained via the proposed estimator are given in which the proposed estimator is implemented using the same \nkernel and estimated MSE-optimal bandwidth $\\hat{h}^*_n$ used in the simulations reported in Section~\\ref{mc}. In \ncommon with the results given earlier in Section~\\ref{mc}, I also considered implementations of the proposed \nestimator in which the bandwidth was set to $h_n=(2\/3)\\hat{h}^*_n$ and $h_n=(3\/2)\\hat{h}^*_n$.\n\nThe decomposition of the observed gender log-wage gaps for ethnic Malay workers is presented in Table~\\ref{eedecomp1110}. In keeping with the theory \ndeveloped above, the focus is on the extent of plausible gender wage discrimination, which is identified with the difference between the estimated \nintercepts. A striking result is the evidence provided by the proposed estimator of positive wage discrimination in \nfavor of women. In particular, Table~\\ref{eedecomp1110} indicates that all three implementations of the proposed estimator imply a large, positive \nand significant difference in intercepts, while the OLS and 2-step procedures generated estimated intercept differences \nthat were both insignificant. The implementation of the H90 procedure with $b_n$ set to the .90-quantile of the estimated \nselection index generated a similarly insignificant estimate of the difference in intercepts. The other implementation of the H90 procedure, \nalong with the AS98 procedure, proved to be numerically unstable. Table~\\ref{eedecomp1110} indicates that there exists a clear difference in \ninferences regarding the extent of gender wage discrimination amongst ethnic Malay workers between estimates generated by the \nproposed estimator and those generated by established procedures. \n\n\\FloatBarrier\n\\begin{landscape}\n\\begin{table}[H]\n\\pagenumbering{gobble}\n{\\scriptsize\n\\caption{Female--male log-wage decomposition, Malays. 
Standard errors in parentheses}\\label{eedecomp1110} \n\\centering\n\\hspace*{-30pt}\\setlength\\tabcolsep{3pt}\n\\begin{tabular}{l|c|c|c|c|c|c|c|c}\n \\hline\\hline\n\\rule{0pt}{4ex}\\multirow{2}{*}{Wage gap (overall)} & \\multicolumn{8}{c}{-0.2882} \\\\ \n & \\multicolumn{8}{c}{(0.0425)} \\\\[4pt] \\hline \n\\rule{0pt}{4ex}\\multirow{2}{*}{Female (endowment)} & \\multicolumn{8}{c}{-0.0638} \\\\ \n & \\multicolumn{8}{c}{(0.0337)} \\\\[4pt] \\hline \n\\rule{0pt}{4ex}\\multirow{2}{*}{Male (endowment)} & \\multicolumn{8}{c}{-0.0369} \\\\ \n & \\multicolumn{8}{c}{(0.0377)}\\\\[4pt] \n\\hline\n\\rule{0pt}{4ex}\\multirow{2}{*}{ } & \\multicolumn{3}{c|}{\\rule{0pt}{4ex} $\\hat{\\theta}_n$} & \\multirow{2}{*}{OLS} & \\multirow{2}{*}{2-step} & \\multicolumn{2}{c|}{H90} & AS98 \\\\[2pt] \\cline{2-4}\\cline{7-9} \n &\\rule{0pt}{4ex} ($h_n=\\hat{h}^*_n$) & ($h_n=(2\/3)\\hat{h}^*_n$) & ($h_n=(3\/2)\\hat{h}^*_n$) & & & ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\hat{\\bm{\\gamma}}_n}(.90)$) & ($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\hat{\\bm{\\gamma}}_n}(.95)$) &($b_n=\\hat{F}^{-1}_{\\bm{Z}^{\\top}\\hat{\\bm{\\gamma}}_n}(.95)$) \\\\[4pt]\n\\hline\\rule{0pt}{4ex}\\multirow{2}{*}{Wage gap (selection-corrected)} & 0.9283 & 0.9328 & 0.9263 & -1.337 & -10.1628 & -0.2709 & -0.0516 & -0.1015 \\\\ \n & (0.7516) & (0.7515) & (0.7517) & (0.0812) & (1.0385) & (1.0617) & (NaN) & (NaN) \\\\[4pt]\\hline \n\\rule{0pt}{4ex}\\multirow{2}{*}{Female (coefficients)} &\\multicolumn{3}{c|}{0.1098} & -1.2553 & -8.0955 &\\multicolumn{2}{c|}{0.1098} & 0.1098 \\\\ \n &\\multicolumn{3}{c|}{(0.6483)} & (0.6977) & (2.2356) &\\multicolumn{2}{c|}{(0.6483)} & (0.6483) \\\\[4pt] \\hline \n\\rule{0pt}{4ex}\\multirow{2}{*}{Male (coefficients)} &\\multicolumn{3}{c|}{0.0829} & -1.2951 & -7.988 &\\multicolumn{2}{c|}{0.0829} & 0.0829 \\\\ \n &\\multicolumn{3}{c|}{(0.6457)} & (0.6989) & (2.2296) &\\multicolumn{2}{c|}{(0.6457)} & (0.6457) \\\\[4pt] \\hline \n\\rule{0pt}{4ex}\\multirow{2}{*}{Difference in intercepts} & 0.8823 & 0.8867 & 0.8803 & -0.0364 & -2.1082 & -0.3169 & -0.0976 & -0.1475 \\\\ \n & (0.3787) & (0.3784) & (0.3789) & (0.7526) & (3.1712) & (0.8401) & (NaN) & (NaN) \\\\ \n \\hline\n\\end{tabular}\n}\n\\end{table}\n\\end{landscape}\n\n\\section{Conclusion}\\label{concl}\n\n\\noindent This paper has developed a new estimator of the intercept of a sample-selection model in which the \njoint distribution of the unobservables and the selection index is unspecified. It has been shown that the new \nestimator can be made under mild conditions to converge in probability at an $n^{-p\/(2p+1)}$-rate, where $p\\ge 2$ is an \ninteger that indexes the strength of certain smoothness assumptions as given above in Assumption~\\ref{a2}.\\ref{a2d}. \nThis rate of convergence is shown to be the optimal rate of convergence for estimation of the intercept parameter in terms \nof a minimax criterion. The new estimator is under mild conditions consistent and asymptotically normal with a rate of \nconvergence that is the same regardless of the joint distribution of the unobservables and the selection \nindex. This differs from other proposals in the literature and is convenient in practice, as the extent to which \nselection is endogenous is typically unknown in applications. In addition, the rate of convergence of the new estimator, \nunlike those of better known estimators, does not depend on assumptions regarding the relative tail behaviours of \nthe determinants of selection beyond those necessary for the identification of the estimand. 
This similarly \nfacilitates statistical inference regarding the intercept. Simulations presented above show the potential accuracy of \nthe proposed estimator relative to that of established procedures across different model specifications. An \nempirical example using individual labour-market data from Malaysia shows the potential of the proposed estimator to \ngenerate inferences regarding the extent of plausible gender wage discrimination that differ from those available from \nbetter known estimators. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nArtificial intelligence (AI) techniques require considerable training data to achieve satisfying performance, especially for industrial applications. Therefore, one of the major challenges in building AI models is collecting enough training data. Conventionally, the model developers could aggregate data by purchasing from multiple data owners or directly collecting data from the service users. However, these ways raise the risk of data privacy leakage and are forbidden by regulations. For example, the General Data Protection Regulation (GDPR)~\\cite{voss2016european} enacted by the European Union (EU) rectifies the usage of personal data and prohibits the transfer of personal data collected from users. The data privacy protection regulations pose a new challenge for AI developers to utilize the scattered data to build AI models. \n\nTo tackle this challenge, researchers propose a new distributed machine learning paradigm, dubbed Federated Learning (FL), which allows multiple participants to jointly train a machine learning model without directly sharing their private data \\cite{yang2019federated,mcmahan2017communication}. With a rapid development, FL are gaining widespread applications, such as language modeling \\cite{mcmahan2017learning8}, topic modeling \\cite{jiang2019federated103,jiang2021industrial}, speech recognition \\cite{Jiang2021GDPR}, healthcare~\\cite{chen2020fedhealth}, etc. Apart from the development of FL in research areas, the high commercial value of FL also motivates companies and institutions to explore industrial-level FL platforms, such as Tensorflow Federated~\\cite{tff}, PaddleFL~\\cite{ma2019paddlepaddle}, and FATE~\\cite{liu2021fate}, which provide comprehensive infrastructures for businesses to set up their FL prototypes and services. However, the application of such platforms requires the AI developers to be knowledgeable and familiar with not only the federated learning algorithms but also the applied platforms \\cite{zhuang2022easyfl}. Therefore, it turns out to be a high barrier for many potential industrial businesses. On the other hand, industrial users usually have their existing AI models that have been sufficiently evaluated on real-world businesses. In contrast, the existing FL algorithms and platforms require the participants to train federated models from scratch, bringing extra overheads and risks to the running AI services. \n\nHence, these come up with two consequent requirements for applying FL to practical industrial businesses. First, the insertion of FL should be lightweight, without increasing significant efforts and expert knowledge. Second, the federalization should be pluggable, which means the user can seamlessly switch its models between the federated and the non-federated versions.\n\n\n\\begin{figure*}[!htp]\n \\centering\n\t\\includegraphics[width=0.8\\linewidth]{framework.jpg}\n\t\\caption{The general framework of WrapperFL. 
The left part shows the existing local models that are deployed separately in different businesses. The right part depicts the federated system where the existing local models of different businesses are extended with WrapperFL.}\n\t\\label{fig:framework}\n\\end{figure*}\n\n\nMotivated by these, this paper proposes a novel, simple yet effective federated learning framework, dubbed WrapperFL, which can be easily plugged into an existing machine learning system with little extra effort. The framework is non-intrusive, simply attaching to and reusing the input and the output interfaces of an existing machine learning model. With the spirit of ensemble learning \\cite{sagi2018ensemble}, WrapperFL regards the existing machine learning models as the experts whose knowledge can be exchanged with the others in a federated manner. Different from existing federated learning algorithms, such as FedAvg\\cite{mcmahan2017communication}, which train a joint model from scratch, WrapperFL aims to exploit existing models, which further poses another challenge of dealing with the heterogeneity in data distributions and existing models \\cite{li2020federated-fedprox,karimireddy2020scaffold}. More specifically, we design two model-agnostic strategies of WrapperFL, i.e., StackingWrapper and BaggingWrapper. The former is suitable for centralized federated learning, where exists a central curator that coordinates all federated participants; the latter is suitable for decentralized federated learning without a central curator. The details are described in section~\\ref{sec:method}. \n\n\nAs the name states, the WrapperFL converts an existing non-federated model to the federated version by wrapping it with a highly integrated toolkit, which contains learning-related components, i.e., translator and aggregator, and communication-related components, i.e., scheduler and connection manager. As shown in Figure \\ref{fig:framework}, the WrapperFL wraps the existing model by attaching it with external components without changing the original data flow as well as the existing local model. This maintains the usability of the existing model in its production environment. Besides, WrapperFL preserves identical input and output interfaces as the original local model for the upstream and downstream services, avoiding extra costs of adjusting existing services. The scheduler and connection manager are responsible for coordinating the federated system as well as transferring model parameters, which are decoupled from the data flow and interface logic, and are agnostic to existing services. \n\nTo demonstrate the efficacy and feasibility of our proposed WrapperFL, we conduct comprehensive experiments on diverse tasks with different settings, including heterogeneous data distributions and heterogeneous models. The experimental results demonstrate that WrapperFL can be successfully applied to a wide range of applications under practical settings and improves the local model with federated learning. Meanwhile, the cost of switching a non-federated local model to a federated version is low in terms of both developing efforts and training time. \n\nIn summary, the major contribution of this paper is that we propose a pluggable federated learning toolkit, WrapperFL, which can painlessly convert an existing local model to a federated version and achieves performance gain. 
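\n\nTo make the notion of pluggability concrete, the following minimal sketch illustrates the wrapper pattern that WrapperFL builds on: the existing model is left untouched, its prediction interface is preserved for upstream and downstream services, and the federated component can be attached or detached at any time. The class and method names below are illustrative assumptions rather than the actual WrapperFL API.\n{\\scriptsize\n\\begin{minted}{python}\n# Illustrative sketch only; not the actual WrapperFL implementation.\nclass PluggableWrapper:\n    def __init__(self, local_model, federated_component=None):\n        self.local_model = local_model        # existing model, reused as-is\n        self.federated = federated_component  # e.g., a jointly trained translator\n\n    def predict(self, X):\n        # Same input/output interface as the original local model.\n        if self.federated is None:\n            return self.local_model.predict(X)           # non-federated behaviour\n        # Federated behaviour: fuse local knowledge with the shared component\n        # (fuse is a hypothetical method standing in for translator + aggregator).\n        return self.federated.fuse(self.local_model, X)\n\\end{minted}\n}\nSwitching between the federated and non-federated versions thus amounts to attaching or detaching the federated component, without modifying the local model or the services that call it.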
\n\n\n\n\\section{Related Work}\n\\subsection{Federated Learning Platforms}\nWith the huge demand from academia and industry, developers have proposed several FL platforms. The FL platforms can be roughly categorized into research-oriented and industry-oriented. Tensorflow Federated (TFF)~\\cite{tff} and FederatedScope~\\cite{federatedscope}, for example, focus on the support of deep learning models and allow flexible FL mechanism designs, which are favorable for FL research. However, they lack flow management and support for vertical federated learning \\cite{yang2019federated}, which makes them not feasible for real-world industrial applications. Federated AI Technology Enabler (FATE)~\\cite{liu2021fate} is the first open-source industrial FL platform that provides comprehensive APIs and flow controls for constructing federated applications, leading the trend of FL applications and inspiring other platforms such as PaddleFL~\\cite{ma2019paddlepaddle}. However, they are not user-friendly due to their complicated system architecture and high barriers. Recently, Zhuang et al. propose EasyFL~\\cite{zhuang2022easyfl}, a low-code platform that simplifies the development of FL models and allows for rapid testing. \n\nCompared to FL platforms, WrapperFL focuses on providing an easy-to-use FL plugin for industrial users with existing models for services. WrapperFL is not a platform to implement or develop new FL algorithms. Therefore, if one wants to propose or invent new FL algorithms, WrapperFL might be inferior. Instead, WrapperFL is a plug-and-play toolkit that allows users to rapidly switch an existing model to the federated version and achieve performance gain. \n\n\\subsection{Ensemble Learning}\nThe spirit of ensemble learning is to achieve higher prediction performance by combining multiple trained models, and the ensemble result is better than any of the individual model~\\cite{sagi2018ensemble}. Ensemble learning provides a feasible solution to utilize existing models from other clients. First, the existing local models are similar to the weak models in ensemble learning, and thus the appropriate fusion of their results could lead to a better prediction. Second, typically, when there is a lot of variation among the models, ensembles get superior outcomes \\cite{sollich1995learning,kuncheva2003measures}, implying that heterogeneous local models might bring extra benefits. \n\nBased on the idea of ensemble learning, WrapperFL adopts two ensemble strategies, stacking \\cite{breiman2001random} and bagging \\cite{buhlmann2012bagging}. The stacking strategy introduces a shared model across clients that takes the raw data and the output of the local model as the input. The shared model performs like a virtual bridge that connects local models to ensemble their knowledge. The bagging strategy adopts a simple linear regressor to re-weight the outputs from all local models. Notably, WrapperFL is different from conventionally federated learning algorithms that jointly learn a model from scratch; in contrast, WrapperFL aims to utilize the knowledge in other existing models to enhance the prediction of the existing local model. \n\n\n\n\n\n\n\n\\section{Methodology}\n\\label{sec:method}\nIn this section, we propose the methodology of WrapperFL. 
Unlike FL platforms, such as Tensorflow Federated~\\cite{tff}, FATE~\\cite{liu2021fate}, PaddleFL~\\cite{ma2019paddlepaddle}, EasyFL~\\cite{zhuang2022easyfl}, etc., WrapperFL is not designed to provide a comprehensive and fundamental toolkit for developing federated learning algorithms. In contrast, WrapperFL aims to help businesses achieve performance gains by federating their existing AI models at a minimum cost. Therefore, the WrapperFL does not involve complicated machine learning operators (MLOps), nor does it provides customized flow control. As this paper focuses on the learning strategy used in WrapperFL, we leave the engineering details, such as life cycle and communication, in the extended version. \n\n\\subsection{Framework}\nAs shown in Figure~\\ref{fig:framework}, WrapperFL is attached to the local model by extending to the input and output interfaces. The existing system can regard the WrapperFL (i.e., the light green block in Figure~\\ref{fig:framework}) as the original model as WrapperFL does not change the data flow logic. Inside the WrapperFL, the local model is empowered by two specific ensemble learning-based federated strategies, Stacking Wrapper and Bagging Wrapper. In abstraction, WrapperFL contains four components:\n\\begin{itemize}\n \\item \\textbf{Translator}: The translator is in charge of aligning heterogeneous models and producing the federated output.\n \\item \\textbf{Aggregator}: The aggregator fuses the output of the original model and the federated model\\footnote{Notably, in this paper, we do not use the term \\textit{global model}. Generally, the global model refers to the horizontal federated learning with homogeneous architecture. While in our settings, the federated participants might have distinct local models. Therefore, we use the term federated model to represent the non-local model(s).}.\n \\item \\textbf{Scheduler}: The scheduler controls the federated learning flow, including the status synchronization, life cycle management, federated update, etc.\n \\item \\textbf{Connection Manager}: The connection manager contains the functions of network connection, data transmission, authorization, etc. \n\\end{itemize}\n\n\n\\subsection{Stacking Wrapper}\n\\label{sec:stacking}\nIn this section, we introduce details of the stacking strategy for WrapperFL, named Stacking Wrapper. Stacking Wrapper is inspired by the stacking method in ensemble learning that combines the predictions from multiple machine learning models. As Figure \\ref{fig:stacking} shows, the Stacking Wrapper essentially stacks two models, the original local model and the translator. The raw data and its feature provided by the existing model (e.g., the hidden vector of the last layer in a neural network) are concatenated as the extended input for the translator. Notably, although the local models could be heterogeneous, the translators are homogeneous across participants, and thus we can apply the Secure Aggregation \\cite{bonawitz2017practical3} to the training of translators. This architecture implicitly conducts the feature space alignment \\cite{day2017survey} across different federated participants. \n\n\\begin{figure}[!htp]\n \\centering\n\t\\includegraphics[width=\\linewidth]{stacking.png}\n\t\\caption{The data flow of Stacking Wrapper.}\n\t\\label{fig:stacking}\n\\end{figure}\n\nStacking Wrapper requires the local models to be parametric models that naturally generate feature vectors. 
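As a concrete illustration, the following sketch shows one way the stacking forward pass could be realized for a neural local model; the PyTorch style, the feature-extraction hook and the averaging aggregator are assumptions made for illustration rather than the actual WrapperFL implementation.\n{\\scriptsize\n\\begin{minted}{python}\nimport torch\nimport torch.nn as nn\n\nclass StackingTranslator(nn.Module):\n    # Shared translator: homogeneous across clients, trained with secure aggregation.\n    def __init__(self, raw_dim, feat_dim, num_classes, hidden=16):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Linear(raw_dim + feat_dim, hidden),  # extended input: raw data + local feature\n            nn.ReLU(),\n            nn.Linear(hidden, num_classes))\n\n    def forward(self, x_raw, local_feature):\n        extended = torch.cat([x_raw, local_feature], dim=-1)\n        return self.net(extended)\n\ndef stacking_predict(local_model, translator, x_raw):\n    # extract_feature is an assumed hook returning, e.g., the penultimate-layer vector.\n    local_feature = local_model.extract_feature(x_raw)\n    local_logits = local_model(x_raw)\n    federated_logits = translator(x_raw, local_feature)\n    return 0.5 * (local_logits + federated_logits)  # aggregator: simple average\n\\end{minted}\n}\nIn this sketch, the local model is treated as a read-only feature extractor by the Stacking Wrapper.\n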
It is not compliant with non-parametric models such as Decision Trees and Support Vector Machines (SVMs). \n\n\n\\subsection{Bagging Wrapper}\n\\label{sec:bagging}\nBagging Wrapper leverages the idea of Bagging in ensemble learning \\cite{buhlmann2012bagging}, where we treat the dataset in each client as a random sample from a global data distribution and ensemble the predictions from the local models. As shown in Figure \\ref{fig:bagging}, the translator essentially embeds local models received from other participants, infers predictions from all models locally, and fuses the outputs. Compared to Stacking Wrapper, Bagging Wrapper does not pose any assumption on the architecture of local models and thus supports both parametric and non-parametric models. However, a major issue is that transferred local models might raise potential privacy risks. Notably, the risk of privacy leakage from models could be eliminated by incorporating with Trusted Execution Environment (TEE) or Fully Homomorphic Encryption (FHE). \n\n\\begin{figure}[!htp]\n \\centering\n\t\\includegraphics[width=0.9\\linewidth]{bagging.png}\n\t\\caption{The data flow of Bagging Wrapper.}\n\t\\label{fig:bagging}\n\\end{figure}\n\n\n\n\\begin{table*}[!htp]\n\\small\n\\centering\n\\caption{Results on bank telemarketing prediction of local models and WrapperFL models under imbalanced data settings.}\n\\label{tab:tab_imb}\n\\begin{tabular}{rrrrrrrrr}\n\\toprule\n\\multicolumn{1}{l}{\\multirow{2}{*}{\\#Clients}} & \\multicolumn{2}{c}{Accuracy} & \\multicolumn{2}{c}{Precision} & \\multicolumn{2}{c}{Recall} & \\multicolumn{2}{c}{F1} \\\\ \\cline{2-9}\n\\multicolumn{1}{l}{} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} \\\\ \\midrule\n5 & 0.8888$\\pm$0.01 & 0.9032$\\pm$0.01 & 0.4489$\\pm$0.07 & 0.5998$\\pm$0.01 & 0.1530$\\pm$0.01 & 0.2846$\\pm$0.01 & 0.1980$\\pm$0.02 & 0.3831$\\pm$0.01\\\\\n10 & 0.8588$\\pm$0.01 & 0.9120$\\pm$0.01 & 0.1350$\\pm$0.06 & 0.6132$\\pm$0.05 & 0.1321$\\pm$0.04 & 0.3172$\\pm$0.02 & 0.1049$\\pm$0.02 & 0.4110$\\pm$0.03 \\\\\n20 & 0.7991$\\pm$0.02 & 0.8983$\\pm$0.01 & 0.1048$\\pm$0.04 & 0.5145$\\pm$0.12 & 0.1864$\\pm$0.08 & 0.2556$\\pm$0.03 & 0.1096$\\pm$0.02 & 0.3215$\\pm$0.05\\\\\\bottomrule\n\\end{tabular}\n\\end{table*}\n\n\n\n\\begin{table*}[]\n \\small\n \\centering\n \\caption{Results on bank telemarking prediction of local models and WrapperFL models under non-IID data settings.}\n \\label{tab:tab_non}\n \\begin{tabular}{rrrrrrrrr}\n \\toprule\n \\multicolumn{1}{l}{\\multirow{2}{*}{\\#Clients}} & \\multicolumn{2}{c}{Accuracy} & \\multicolumn{2}{c}{Precision} & \\multicolumn{2}{c}{Recall} & \\multicolumn{2}{c}{F1} \\\\ \\cline{2-9} \n \\multicolumn{1}{l}{} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} \\\\ \\midrule\n 5 & 0.8968$\\pm$0.01 & 0.9064$\\pm$0.01 & 0.4753$\\pm$0.06 & 0.7173$\\pm$0.02 & 0.1523$\\pm$0.01 & 0.2532$\\pm$0.01 & 0.2097$\\pm$0.02 & 0.3642$\\pm$0.01\\\\\n 10 & 0.8637$\\pm$0.01 & 0.9040$\\pm$0.01 & 0.2079$\\pm$0.05 & 0.6573$\\pm$0.03 & 0.1768$\\pm$0.03 & 0.3105$\\pm$0.02 & 0.1752$\\pm$0.03 & 0.3862$\\pm$0.02\\\\\n 20 & 0.7967$\\pm$0.02 & 0.9056$\\pm$0.01 & 0.1286$\\pm$0.05 & 0.6592$\\pm$0.04 
& 0.1769$\\pm$0.05 & 0.3504$\\pm$0.01 & 0.1052$\\pm$0.01 & 0.4375$\\pm$0.01\\\\\\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\n\\subsection{Examples of Using WrapperFL}\nThe WrapperFL is easy to use. To set up a federated client, the user only needs to configure very few parameters, e.g., the existing local model, the client id, client's and server's IP addresses and ports for connection, the type of translator, etc. The WrapperFL will automatically complete the registration, initialization, and building connections with clients. Once instantiating the WrapperFL class with a simple configuration, the user can begin training the federated model with one line of code. Hence, the WrapperFL model can be used the same way as the local model. \n\nThe following code block shows four examples of using WrapperFL, which remarkably demonstrate the ease of use of WrapperFL. Lines 6-12 and 25-30 exemplify Stacking Wrapper and Bagging Wrapper configurations. The user can register to a federated network by filling their identifier in \\texttt{client\\_id}, local address in \\texttt{client\\_addr}, server address in \\texttt{server\\_addr}, and other federated clients' ids in \\texttt{clients}. The fields of \\texttt{local\\_model} and \\texttt{train\\_dataset} are related to the local model and local training dataset. These are used to train the translator specified in the field \\texttt{translator}. \n\nApart from the configuration, the user can federalize an existing local model with only three lines of code (i.e., lines 14, 16, and 21), which provides superior convenience for developers compared to other FL toolkits. Notably, in the inference phase, the WrapperFL model performs identically to the original local model (shown in Example 3 and Example 4), significantly reducing the efforts to adjust the existing interfaces. 
\n\n{\\scriptsize\n\\begin{minted}[xleftmargin=20pt,linenos]{python}\nfrom wrapperfl import BaggingWrapper, StackingWrapper\n\n## Initialization\n\n# --- Example 1: Federalization with Stacking Wrapper ---\nconfigs = {\"local_model\": local_model, \n    \"train_dataset\":dataset, \n    \"translator\":\"LR\", # translator structure\n    'client_id':'0', \n    'clients':['1', '2', ...], \n    'client_addr':{'ip':'x.x.x.x', 'port':'x'}, \n    'server_addr':{'ip':'x.x.x.x', 'port':'x'}}\n# registration and initialization\nwrapper = StackingWrapper(configs) \n# train the translator\nwrapper.start(status='train', \n    train_config={'local_epochs':10, \n    'rounds':10, \n    ...}) \n# switch to the inference mode\nwrapper.start(status='infer', \n    infer_config={'threshold':0.5, ...}) \n\n# --- Example 2: Federalization with Bagging Wrapper ---\nconfigs = {\"local_model\": local_model, \n    \"train_dataset\":dataset,\n    'clients':['1', '2', ...], \n    'client_id':'0', \n    'client_addr':{'ip':'x.x.x.x', 'port':'x'}, \n    'server_addr':{'ip':'x.x.x.x','port':'x'}}\nwrapper = BaggingWrapper(configs) \nwrapper.start(status='train', \n    train_config={'local_epochs':10, ...}) \nwrapper.start(status='infer', \n    infer_config={'threshold':0.5, ...}) \n\n## Usage of WrapperFL model \n\n# --- Example 3: Sklearn-style local model\n# local model inference\ny = local_model.predict(X) \n# WrapperFL model inference\ny = wrapper.model.predict(X) \n\n# --- Example 4: PyTorch-style local model\n# local model inference\ny = local_model(X) \n# WrapperFL model inference\ny = wrapper.model(X) \n\\end{minted}\n}\n\n\n\n\n\\section{Experiment}\nTo evaluate the effectiveness of our proposed WrapperFL, we conduct comprehensive experiments on diverse tasks with different settings, including heterogeneous data distributions and heterogeneous models. \n\n\\subsection{Datasets}\nWe verify our proposed WrapperFL on three different forms of data: bank telemarketing prediction \\cite{moro2014bankdata} (tabular data), news classification~\\cite{tianchidata} (textual data), and image classification with CIFAR-10~\\cite{cifar10} (image data). These datasets are large-scale and thus allow us to manipulate their data distributions to form heterogeneous settings, including \\textbf{imbalanced} and \\textbf{non-IID} data distributions. Notably, although non-IID data subsumes imbalanced data in some literature~\\cite{zhu2021federated-noniid-survey}, in this paper we use \\textbf{imbalanced} to refer to the setting where class distributions across clients are (almost) identical, and \\textbf{non-IID} to the setting where class distributions are different. \n\n\n\n\nTo simulate different numbers of clients, we partition the tabular data and text data into 5, 10, and 20 clients, and we split the image data into 5 and 10 clients, since in the case of 20 clients the heterogeneity in data distribution across clients is not apparent. \n\nTo synthesize imbalanced datasets, we first assign the number of instances for each client following the Dirichlet Process~\\cite{Teh2010}, and then uniformly sample the instances for each client from the global data distribution. For non-IID datasets, we directly assign the number of samples of each class in each client following the Dirichlet Process. A special case is the bank telemarketing prediction data, which is already class-imbalanced. For this case, we only sample the number of positive samples and then pad the local datasets with negative samples to the same size. 
Figures~\\ref{fig:tab_dist}-\\ref{fig:image_dist} in Appendix~\\ref{apdx:figure} give an overview of the class-aware distributions of each client for the three datasets.\n\n\\subsection{Evaluation Metrics}\nWe compute the accuracy, precision, recall, and F1-score of each client's local and WrapperFL models for all tasks on a balanced global testing set. We report the means and standard deviations of the metrics under each setting. \n\n\\subsection{Experiments on Tabular Dataset}\nTabular datasets are common in real-world industries. We evaluate the effectiveness of WrapperFL on the bank telemarketing prediction data~\\cite{moro2014bankdata} with heterogeneous model structures. \n\nMore specifically, we set 40\\% of the clients to use Logistic Regression (LR) as the local models, and 60\\% of the clients to use a three-layer MLP with various hidden sizes (16, 18, and 24, each for 20\\% of the clients). Table~\\ref{tab:tab_imb} shows the performance comparison of the local models and our WrapperFL models under imbalanced data settings. In this case, WrapperFL adopts a three-layer MLP with a hidden size of 16 as the translator, which is trained with ten communication rounds using FedAvg~\\cite{mcmahan2017communication}. \n\nNotably, when the number of clients increases, the precision of the local models drops dramatically due to the highly imbalanced local data. In contrast, extended with WrapperFL, the participants achieve significantly better performance. For example, when there are ten clients, the mean F1 score of WrapperFL models is 2.92x better than that of local models. We can draw similar conclusions in the case of the non-IID data settings, which are shown in Table~\\ref{tab:tab_non}. In this case, the WrapperFL models show strong robustness compared to the local models. For example, the mean F1-score of WrapperFL is 3.16x higher than that of local models in the setting of 20 clients. 
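\n\nTo make the data-partitioning protocol described in the Datasets subsection concrete, the following sketch shows one possible way to generate such splits. It uses a Dirichlet distribution as a simple stand-in for the Dirichlet-Process-based assignment of \\cite{Teh2010}; the function names and the concentration parameter \\texttt{alpha} are illustrative only and are not part of the WrapperFL package. \n{\\scriptsize\n\\begin{minted}[xleftmargin=20pt]{python}\nimport numpy as np\n\ndef partition_imbalanced(labels, n_clients, alpha=0.5, seed=0):\n    # Draw per-client dataset sizes from a Dirichlet distribution, then\n    # fill each client with a uniform sample of the global data, so that\n    # class proportions stay (almost) identical across clients.\n    rng = np.random.default_rng(seed)\n    n = len(labels)\n    sizes = (rng.dirichlet(alpha * np.ones(n_clients)) * n).astype(int)\n    idx = rng.permutation(n)\n    return np.split(idx, np.cumsum(sizes)[:-1])  # one index array per client\n\ndef partition_non_iid(labels, n_clients, alpha=0.5, seed=0):\n    # Draw per-client class proportions from a Dirichlet distribution,\n    # so that class distributions differ across clients.\n    rng = np.random.default_rng(seed)\n    labels = np.asarray(labels)\n    client_idx = [[] for _ in range(n_clients)]\n    for c in np.unique(labels):\n        idx_c = rng.permutation(np.where(labels == c)[0])\n        props = rng.dirichlet(alpha * np.ones(n_clients))\n        cuts = (np.cumsum(props) * len(idx_c)).astype(int)[:-1]\n        for client, part in enumerate(np.split(idx_c, cuts)):\n            client_idx[client].extend(part.tolist())\n    return [np.array(ix) for ix in client_idx]\n\\end{minted}\n}\nA smaller \\texttt{alpha} yields more heterogeneous splits; the special treatment of the already imbalanced bank telemarketing data (sampling only the number of positive samples and padding with negative samples) is omitted for brevity.\n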
\n\n\n\n\\begin{table*}[]\n \\small\n \\centering\n \\caption{Results on news classification of local models and WrapperFL models under imbalanced data settings.}\n \\label{tab:text_imb}\n \\begin{tabular}{rrrrrrrrr}\n \\toprule\n \\multicolumn{1}{l}{\\multirow{2}{*}{\\#Clients}} & \\multicolumn{2}{c}{Accuracy} & \\multicolumn{2}{c}{Precision} & \\multicolumn{2}{c}{Recall} & \\multicolumn{2}{c}{F1} \\\\ \\cline{2-9} \n \\multicolumn{1}{l}{} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} \\\\ \\midrule\n 5 & 0.2129$\\pm$0.08 & 0.2836$\\pm$0.05 & 0.1647$\\pm$0.10 & 0.1743$\\pm$0.08 & 0.1968$\\pm$0.06 & 0.2085$\\pm$0.05 & 0.1507$\\pm$0.07 & 0.1655$\\pm$0.06\\\\\n 10 & 0.2003$\\pm$0.08 & 0.2311$\\pm$0.07 & 0.1438$\\pm$0.07 & 0.1305$\\pm$0.06 & 0.1726$\\pm$0.04 & 0.1753$\\pm$0.04 & 0.1273$\\pm$0.05 & 0.1264$\\pm$0.05\\\\\n 20 & 0.2140$\\pm$0.03 & 0.3977$\\pm$0.01 & 0.0720$\\pm$0.03 & 0.4032$\\pm$0.01 & 0.1160$\\pm$0.01 & 0.2911$\\pm$0.01 & 0.0703$\\pm$0.02 & 0.2514$\\pm$0.01\n \\\\\\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\n\\begin{table*}[]\n \\small\n \\centering\n \\caption{Results on news classification of local models and WrapperFL models under non-IID data settings.}\n \\label{tab:text_non}\n \\begin{tabular}{rrrrrrrrr}\n \\toprule\n \\multicolumn{1}{l}{\\multirow{2}{*}{\\#Clients}} & \\multicolumn{2}{c}{Accuracy} & \\multicolumn{2}{c}{Precision} & \\multicolumn{2}{c}{Recall} & \\multicolumn{2}{c}{F1} \\\\ \\cline{2-9} \n \\multicolumn{1}{l}{} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} & \\multicolumn{1}{c}{Local} & \\multicolumn{1}{c}{WrapperFL} \\\\ \\midrule\n 5 & 0.6259$\\pm$0.01 & 0.6977$\\pm$0.01 & 0.4355$\\pm$0.01 & 0.4892$\\pm$0.01 & 0.3833$\\pm$0.01 & 0.4035$\\pm$0.01 & 0.3580$\\pm$0.01 & 0.3962$\\pm$0.01\\\\\n 10 & 0.5533$\\pm$0.01 & 0.6561$\\pm$0.01 & 0.3056$\\pm$0.01 & 0.3643$\\pm$0.01 & 0.3119$\\pm$0.01 & 0.3404$\\pm$0.01 & 0.2815$\\pm$0.01 & 0.3221$\\pm$0.01\\\\\n 20 & 0.5152$\\pm$0.01 & 0.6288$\\pm$0.01 & 0.2157$\\pm$0.01 & 0.3457$\\pm$0.01 & 0.2597$\\pm$0.01 & 0.3069$\\pm$0.01 & 0.2128$\\pm$0.01 & 0.2912$\\pm$0.01\n \\\\\\bottomrule\n \\end{tabular}\n\\end{table*}\n\n\n\\subsection{Experiments on Text Dataset}\n\n\n\nText data is also commonly used in the industry~\\cite{devlin2018bert55}. We select the news classification dataset~\\cite{tianchidata} to represent the NLP tasks as text classification is one of the most important tasks in NLP. Unlike the tabular data, the news classification dataset has 14 evenly distributed classes. Moreover, the feature distribution in news data (i.e., the word distributions) is more consistent across different classes. Therefore, it requires the classifier to capture the deep semantics of a sentence. 
\n\n\\begin{table}[!htp]\n\\small\n \\caption{Results on image classification of local models and WrapperFL models under imbalanced data settings.}\n \\label{tab:img_imb}\n \\begin{tabular}{ccrr}\n \\toprule\n \\multicolumn{2}{l}{\\#Clients} & \\multicolumn{1}{c}{5} & \\multicolumn{1}{c}{10} \\\\ \\midrule\n \\multirow{2}{*}{Accuracy} & Local & 0.7444$\\pm$0.0049 & 0.6301$\\pm$0.0182 \\\\ \\cline{2-2}\n & WrapperFL & 0.7798$\\pm$0.0018 & 0.8254$\\pm$0.0004 \\\\\\midrule\n \\multirow{2}{*}{Precision} & Local & 0.7412$\\pm$0.0055 & 0.6340$\\pm$0.0153 \\\\ \\cline{2-2}\n & WrapperFL & 0.7823$\\pm$0.0016 & 0.8284$\\pm$0.004 \\\\ \\midrule\n \\multirow{2}{*}{Recall} & Local & 0.7443$\\pm$0.0049 & 0.6304$\\pm$0.0177 \\\\ \\cline{2-2}\n & WrapperFL & 0.7793$\\pm$0.0018 & 0.8258$\\pm$0.0004 \\\\ \\midrule\n \\multirow{2}{*}{F1} & Local & 0.7416$\\pm$0.0054 & 0.6233$\\pm$ 0.0218 \\\\ \\cline{2-2}\n & WrapperFL & 0.7781$\\pm$0.0019 & 0.8241$\\pm$0.0005\\\\ \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\n\n\n\nIn this experiment, we evaluate the compatibility of WrapperFL with different deep learning models. We randomly assign 20\\% of the clients with TextCNN~\\cite{jacovi2018understanding}, 40\\% of the clients with Transformer \\cite{vaswani2017attention} (with three stacking encoders), and 40\\% of the clients with Transformer (with two stacking encoders) as their local models. WrapperFL adopts a single-encoder Transformer as the translator, which is trained with ten communication rounds using FedAvg~\\cite{mcmahan2017communication}. \n\n\nTable \\ref{tab:text_imb} and Table \\ref{tab:text_non} display the performance comparisons between local models and WrapperFL models, from which we can confirm the high utility of WrapperFL on complex deep learning models. For example, in the setting of 20 clients, the WrapperFL models perform much better than local models no matter of accuracy, precision, recall, or F1 score. \n\n\\subsection{Experiments on Image Dataset}\n\\label{sec:exp:img}\nProcessing image data is also a huge requirement in the industry, such as face recognition. We adopt one of the most widely used image datasets, CIFAR-10~\\cite{cifar10}, to verify our WrapperFL on CNN models with different scales. We randomly assign 20\\% of clients with ResNet18~\\cite{he2016deep}, 40\\% of clients with VGG13~\\cite{simonyan2014very}, and 40\\% of clients with VGG16. WrapperFL adopts a much smaller CNN model, VGG11, as the translator, which is trained with ten communication rounds using FedAvg~\\cite{mcmahan2017communication}. \n\nTable~\\ref{tab:img_imb} and Table~\\ref{tab:img_non} display the performances of local models and WrapperFL models. Consistently, we can observe significant performance gains comparing the WrapperFL models to the local models. Moreover, the experimental results also demonstrate that using a much smaller homogeneous translator can efficiently and effectively distill knowledge from well-trained large-scale local models. 
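\n\nSince every experiment trains the translator with ten communication rounds of FedAvg~\\cite{mcmahan2017communication}, a minimal sketch of the aggregation step may help make the procedure concrete. The helper names below (\\texttt{local\\_update}, \\texttt{num\\_samples}) are hypothetical and only illustrate the weighted parameter averaging over the homogeneous translators; they are not part of the actual WrapperFL interface. \n{\\scriptsize\n\\begin{minted}[xleftmargin=20pt]{python}\nimport numpy as np\n\ndef fedavg(client_weights, client_sizes):\n    # Weighted average of homogeneous translator parameters;\n    # client_weights[k] is a list of numpy arrays holding the k-th\n    # client's translator parameters after local training.\n    total = float(sum(client_sizes))\n    avg = [np.zeros_like(w) for w in client_weights[0]]\n    for weights, size in zip(client_weights, client_sizes):\n        for i, w in enumerate(weights):\n            avg[i] += (size / total) * w\n    return avg\n\n# One communication round: every client trains its translator locally,\n# sends the parameters, and receives the aggregated parameters back.\n# new_global = fedavg([c.local_update(global_params) for c in clients],\n#                     [c.num_samples for c in clients])\n\\end{minted}\n}\n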
\n\n\n\n\\begin{table}[]\n\\small\n \\caption{Results on image classification of local models and WrapperFL models under non-IID data settings.}\n \\label{tab:img_non}\n \\begin{tabular}{ccrr}\n \\toprule\n \\multicolumn{2}{l}{\\#Clients} & \\multicolumn{1}{c}{5} & \\multicolumn{1}{c}{10} \\\\ \\midrule\n \\multirow{2}{*}{Accuracy} & Local & 0.6902$\\pm$0.0020 & 0.5747$\\pm$0.0031 \\\\ \\cline{2-2} \n & WrapperFL & 0.7405$\\pm$0.0026 & 0.7853$\\pm$0.0015 \\\\ \\hline\n \\multirow{2}{*}{Precision} & Local & 0.6617$\\pm$0.0066 & 0.5563$\\pm$0.0055 \\\\ \\cline{2-2} \n & WrapperFL & 0.7511$\\pm$0.0016 & 0.8017$\\pm$0.0005 \\\\ \\hline\n \\multirow{2}{*}{Recall} & Local & 0.6877$\\pm$0.0020 & 0.5737$\\pm$0.0027 \\\\ \\cline{2-2} \n & WrapperFL & 0.7404$\\pm$0.0025 & 0.7862$\\pm$0.0012 \\\\ \\hline\n \\multirow{2}{*}{F1} & Local & 0.6628$\\pm$0.0042 & 0.5377$\\pm$0.0048 \\\\ \\cline{2-2} \n & WrapperFL & 0.7249$\\pm$0.0053 & 0.7792$\\pm$0.0020 \\\\ \\bottomrule\n \\end{tabular}\n\\end{table}\n\n\n\n\\subsection{Quantitative Comparison of Training Cost}\nWhile it is hard to quantify the reduction in the manpower and resources that WrapperFL requires compared to existing FL platforms, we can empirically validate its feasibility. Most importantly, WrapperFL utilizes the existing local model, significantly reducing the time spent on training and tuning the local model, which is well known to be both time- and labor-consuming. Moreover, with a simple plug-and-play package, ML developers without background knowledge of federated learning can quickly adapt their services to the federated version, which eliminates the manpower barrier of requiring FL experts. \n\nTo further assess the training cost (including the time cost and the potential performance loss), we conduct a quantitative comparison between WrapperFL and FedAvg on the non-IID dataset mentioned in Section \\ref{sec:exp:img}. In Figure~\\ref{fig:time_cost}, we plot the accuracy of WrapperFL and FedAvg at successive training time intervals, from which we can observe that WrapperFL reaches over 60\\% accuracy much faster than FedAvg. Moreover, we can also observe that the performance of WrapperFL is comparable to FedAvg when converged, which indicates that the performance loss of WrapperFL is negligible. Notably, the dataset used in this experiment is much smaller than industrial-scale data, and the training time reported here ignores the time cost of deployment and communication. Therefore, in practice, the time savings of WrapperFL could be much more significant. \n\n\\begin{figure}[!htp]\n \\centering\n\t\\includegraphics[width=0.9\\linewidth]{time_cost.pdf}\n\t\\caption{The accuracy curves of WrapperFL and FedAvg on the image dataset.}\n\t\\label{fig:time_cost}\n\\end{figure}\n\n\n\n\n\\section{Conclusion}\nThis paper focuses on the problem of applying federated learning in industry, where businesses already have mature, existing models and no experts on federated learning algorithms and platforms. The problem is challenging in two aspects. First, the federated learning process should be completely decoupled from the existing system, and the federalization should be lightweight, low-intrusive, and require as little development workload as possible. Second, the federated learning toolkit should be model-agnostic, so that it is compatible with various model architectures and users can seamlessly change the underlying machine learning models. 
To tackle these challenges, we propose a simple yet practical federated learning plug-in inspired by ensemble learning, dubbed WrapperFL, allowing participants to build\/join a federated system with existing models at minimal costs. The experimental results of tabular, text, and image data demonstrate the efficacy of our proposed WrapperFL on both model heterogeneous and data heterogeneous settings. \n\n\n\\input{ijcai22.bbl}\n\\onecolumn\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\newtheorem{definicao}{Definition}[section]\n\\newtheorem{teorema}{Theorem}[section]\n\\newtheorem{lema}{Lemma}[section]\n\\newtheorem{corolario}{Corolary}[section]\n\\newtheorem{proposicao}{Proposition}[section]\n\\newtheorem{axioma}{Axiom}[section]\n\\newtheorem{observacao}{Remark}[section]\n\n\tThe gauge field copy question is one of those unexpected phenomena that creep up in mathematical physics when we go from linear objects to their nonlinear extensions. Linear gauge fields, say, the electromagnetic field, admit a single potential over a nice neighborhood modulo gauge tranformations. However when we go from the linear to the nonlinear domain, that nice relation between fields and potentials breaks down. (We emphasize the adjective: the relation between potentials and fields in linear gauge fields is a `nice' one because it reflects the very deep $\\partial^{2} = 0$ relation in homological algebra and in algebraic topology; for a simple application of that relation to mathematical physics see \\cite{Doria-78}.)\n\n\tIn the nonlinear, nonabelian case, some gauge fields admit two or more potentials which cannot be made equivalent (even locally) modulo gauge transformations. Such an ambiguity is known as the gauge field copy problem and was discovered in 1975 by T.T. Wu and C.N. Yang \\cite{Wu}. Gauge copies fall into two cases:\n\\begin{itemize}\n\\item{{\\em True copies}: Here the gauge field can be derived from at least two different potentials that aren't even locally related by a gauge transformation;}\n\\item{{\\em False copies}: In this case the field can be derived from potentials that are always locally related by a gauge transformation.}\n\\end{itemize}\nFor a review of the geometric phenomena behind the copy problem see \\cite{Costa-Amaral}.\n\n\tLet $P(M,G)$ be a principal fiber bundle, where $M$ is a finite-dimensional smooth real manifold and $G$ is a\nfinite-dimensional semi-simple Lie group. We denote by $(P,\\alpha)$ the principal fiber bundle $P(M,G)$ endowed with the connection-form $\\alpha$, and by $L$ the field that corresponds to the potential $A$ associated to $\\alpha$. We mean by that that $A$ is a family $\\{A_{\\lambda}\\}_{\\lambda\\in \\Lambda}$ of $l(G)$-valued one-forms, where $l(G)$ is the Lie algebra associated to $G$, and for each $\\lambda\\in\\Lambda$, $A_{\\lambda}$ is defined on an open subset $U_{\\lambda}$ of $M$ with $D_{A_{\\lambda}} = dA_{\\lambda}+\\frac{1}{2}[A_{\\lambda},A_{\\lambda}] = L$ on $U_{\\lambda}$ and with $M = \\bigcup_{\\lambda\\in\\Lambda}U_{\\lambda}$. We consider also that $A_{\\lambda}$ and $A_{\\lambda'}$ are gauge equivalent for $\\lambda\\neq\\lambda'$. We recall that an automorphism of a principal fiber bundle $P(M,G)$ with projection $\\pi$ is a diffeomorphism $f:P\\rightarrow P$ such that $f(pg) = f(p)g$ for all $g\\in G$, $p\\in P$. 
A gauge transformation is an automorphism $f:P\\rightarrow P$ such that $\\overline{f} = 1_{M}$, where $\\overline{f}:M\\rightarrow M$ is the diffeomorphism induced by $f$ given by $\\overline{f}(\\pi(p)) = \\pi(f(p))$. Given a principal fibre bundle $P(M,G)$ and a Lie subgroup $G'$ of $G$, we say that $P(M,G)$ is reducible to the bundle $P'(M,G')$ if and only if there is a monomorphism $f':G'\\rightarrow G$ and an imbedding $f'':P'\\rightarrow P$ such that $f''(u'a') = f''(u')f'(a')$, for all $u'\\in P'$ and $a'\\in G'$.\n\n\\begin{definicao}\n\\begin{enumerate}\n\\item{The field $L$ or the potential $A$ are said to be {\\bf reducible} if the corresponding bundle $(P,\\alpha)$ is reducible.}\n\\item{If $U\\subset M$ is a nonvoid open set then $L$ or $A$ are said to be {\\bf locally reducible over} $U$ whenever $(P,\\alpha)\\mid _{U}$ is reducible.}\n\\item{$L$ or $A$ are said to be {\\bf fully irreducible} if they are not locally reducible.}$\\Box$\n\\end{enumerate}\\label{def-redutibilidade}\n\\end{definicao}\n\n\tOur main results are essentially based on a theorem that gives a topological condition for the existence of false copies and on the Atiyah-Singer index theorem. But before we state the topological condition for the existence of false gauge field copies, we find interesting to recall the Ambrose-Singer theorem, since it will be used to derive such a result:\n\n\\begin{proposicao}\n(Ambrose-Singer) Let $P(M,G)$ be a principal fiber bundle, where $M$ is connected and paracompact. Let $\\Gamma$ be a connection in $P$, $L$ the curvature form, $\\Phi(u)$ the holonomy group with reference point $u\\in P$ and $P(u)$ the holonomy bundle through $u$ of $\\Gamma$. Then the Lie algebra of $\\Phi(u)$ is equal to the subspace of $l(G)$ spanned by all elements of the form $L_{v}(X,Y)$, where $v\\in P(u)$ and $X$ and $Y$ are arbitrary horizontal vectors at $v$.\\label{Ambrose-Singer}\n\\end{proposicao}\n\n\tThe proof of this theorem (also known as the holonomy theorem), can be found in \\cite{Kobayashi}. It is considered that we may assume $P(u) = P$, which means that $\\Phi(u) = G$.\n\n\tNow we establish the topological condition for the existence of false gauge field copies, based on a result due to Doria \\cite{Doria-81}:\n\n\\begin{proposicao}\nLet $P(M,G)$ be as above together with the fact that $G$ be semi-simple and $M$ is connected and paracompact. $L$ is falsely copied, that is, $L$ has different potentials that are locally related by a gauge transformation if and only if $L$ is reducible.\\label{Teo-Doria}\n\\end{proposicao}\n\n\t{\\em Proof}: If $L$ is reducible, then $P(M,G)$ is reducible (definition \\ref{def-redutibilidade}). This means that $P(M,G)$ can be reduced to a nontrivial $P'(M,G')$, where $G'$ is the Ambrose-Singer holonomy group (it corresponds to the group $\\Phi(u)$ in proposition \\ref{Ambrose-Singer}). 
If we assume\n\\[A_{\\mu} = B_{\\mu} + \\partial_{\\mu}h',\\] \nwhere $\\partial_{\\mu}=_{def}\\partial\/\\partial x^{\\mu}$, $x^{\\mu}$ is a coordinate of a coordinate system at $U\\subset M$ (such that the bundle is trivial over $U$), $B_{\\mu}$ takes values in $l(G)$ and $h'$ takes values on $l(G')$, then:\n\\[F_{\\mu\\nu}(A) = \\partial_{\\mu}(B_{\\nu}+\\partial_{\\nu}h') - \\partial_{\\nu}(B_{\\mu} + \\partial_{\\mu}h') + (B_{\\mu} + \\partial_{\\mu}h')(B_{\\nu} + \\partial_{\\nu}h') - (B_{\\nu} + \\partial_{\\nu}h')(B_{\\mu} + \\partial_{\\mu}h'),\\]\nwhere $F_{\\mu\\nu}(A)$ denotes the components of the curvature form $F$ associated to the connection form $A$.\n\n\tHence:\n\n\\[F_{\\mu\\nu}(A) = F_{\\mu\\nu}(B) + 0_{field}.\\]\n\n\tThe sufficient condition is proved as follows: if the holonomy group is semi-simple as indicated, and if $A_{\\mu} = B_{\\mu} + \\partial_{\\mu}h'$ as indicated, then there is a reducibility $G\\oplus G'\\rightarrow G'$.$\\Box$\n\n\tWe now suppose that $X$ is a compact smooth manifold and that $G$ is a compact Lie group acting smoothly on $M$. The Atiyah-Singer index theorem can be stated as \\cite{Shanahan}:\n\n\\begin{proposicao}\nLet $\\chi$ and $\\vartheta$ be complex vector bundles defined over $X$. If $D:C^{\\infty}(X;\\chi)\\rightarrow C^{\\infty}(X;\\vartheta)$ is a $G$-invariant elliptic partial differential operator on $X$, which sends cross-sections of $\\chi$ to cross-sections of $\\vartheta$, then {\\rm index}$_{G}D = t_{{\\rm ind}_{G}^{X}}(\\sigma(D))$, where $t_{{\\rm ind}}$ is the topological index defined on $K_{G}(TX)$ and $\\sigma(D)$ is the symbol of $D$.$\\Box$\\label{Atiyah-Singer-D}\n\\end{proposicao}\n\n\t(For the proof and notational features see \\cite{Shanahan}.) Another version of the index theorem \\cite{Atiyah-Index-I} asserts:\n\\begin{proposicao}\nThe analytical index $a_{{\\rm ind}_{G}}$ and the topological index $t_{{\\rm ind}_{G}^{X}}$ coincide as homomorphisms $K_{G}(TX)\\rightarrow R(G)$.$\\Box$\n\\end{proposicao}\n(Proof in the reference.)\n\n\\section{A necessary condition for false copies}\n\n\tGauge fields and gauge potentials can be seen and defined as cross-sections of vector bundles associated to the principal fiber bundle $P(M,G)$. More specifically, potential space (or connection space) coincides with the space of all $C^{k}$ cross-sections of the vector bundle $E$ of $l(G)$-valued 1-forms on $M$, where $l(G)$ is the group's Lie algebra, while field space (or curvature space) coincides with the space of all $C^{k}$ cross-sections of the vector bundle ${\\bf E}$ of $l(G)$-valued 2-forms on $M$.\n\n\tLet $F$ and ${\\bf F}$ be manifolds on which $G$ acts on the left and such that $E = P\\times_{G}F$ and ${\\bf E} = P\\times_{G}{\\bf F}$, where $P$ is the total space of $P(M,G)$. In other words, $E$ is the quotient space of $P\\times F$ by the group action. Similarly, ${\\bf E}$ is the quotient space of $P\\times {\\bf F}$ by the action of the group $G$.\n\n\tTo prove the following proposition we use the Atiyah-Singer index theorem. 
So, we are still assuming that $M$ and $G$ are compact.\n\n\tTherefore:\n\\begin{proposicao}\nIf a gauge field (a cross-section of ${\\bf E}$) is associated to copied potentials that are locally gauge-equivalent, then there is: a non-trivial sub-group of $G$, denoted by $G'$; a $G'$-manifold $P'$; two $G'$-vector spaces $F'$ and $\\bf F'$; and two elliptic partial\ndifferential operators,\n\\begin{equation}\n{\\cal D}_{G}:C^{\\infty}(P;P\\times F)\\rightarrow C^{\\infty}(P;P\\times\n{\\bf F})\n\\end{equation}\nand\n\\begin{equation}\n{\\cal D}_{G'}:C^{\\infty}(P';P'\\times F')\\rightarrow\nC^{\\infty}(P';P'\\times {\\bf F'})\n\\end{equation}\nrespectively $G$-invariant and $G'$-invariant, such that the\n${\\rm index}\\; {\\cal D}_{G'}$ can be defined as a nontrivial function of the ${\\rm index}\\; {\\cal D}_{G}$.\\label{main}\n\\end{proposicao}\n\n\t{\\em Proof}: If a gauge field is associated to copied potentials that are locally gauge-equivalent, then such a field is reducible (Proposition \\ref{Teo-Doria}). Therefore the bundle $P(M,G)$ is reducible (Definition \\ref{def-redutibilidade}). So, there is a non-trivial sub-group $G'$ of $G$ and a monomorphism $\\varphi:G'\\rightarrow G$ such that one can define a reduced principal fiber bundle $P'(M,G')$ and a reduction $f:P'(M,G')\\rightarrow P(M,G)$. Similarly we define $G'$-vector spaces $F'$ and $\\bf F'$ and maps ${\\bf f}:{\\bf F'}\\rightarrow {\\bf F}$ and ${\\rm f}: F'\\rightarrow F$ such that:\n\\begin{equation}\n{\\rm f}(g'\\xi') = \\varphi(g'){\\rm f}(\\xi')\n\\end{equation}\nand\n\\begin{equation}\n{\\bf f}(g'\\zeta') = \\varphi(g'){\\bf f}(\\zeta')\n\\end{equation}\nfor all $g'\\in G'$, $\\xi'\\in F'$ and $\\zeta'\\in {\\bf F'}$.\n\n\tNow consider $P\\times F$ as the total space of the trivial vector bundle $P\\times F\\rightarrow P$, with a canonical projection. That vector bundle is noted $\\chi$. Similarly the trivial vector bundles $P\\times {\\bf\nF}\\rightarrow P$, $P'\\times F'\\rightarrow P'$ and $P'\\times {\\bf F'}\\rightarrow P'$, are respectively noted $\\vartheta$, $\\chi'$ and $\\vartheta'$.\n\n\tTherefore, the diagram below commutes:\n\\[K_{G}(TP)\\stackrel{\\varphi^{*}}{\\longrightarrow}K_{G'}(TP')\\]\n\\[t_{{\\rm ind}_{G}^{P}}\\downarrow\\;\\;\\;\\;\\;\\;\\downarrow t_{{\\rm ind}_{G'}^{P'}}\\]\n\\[R(G))\\stackrel{\\varphi^{*}}{\\longrightarrow}R(G')\\]\n(Here $\\varphi^{*}$ is induced by $\\varphi$.)\n\n\t$\\sigma({\\cal D}_{G}) = \\vartheta-\\chi$ and $\\sigma({\\cal D}_{G'}) = \\vartheta'-\\chi'$. So, the homomorphisms $f$, $\\bf f$ and f and the diagram given above induce the relation:\n\\begin{equation}\n\\sigma({\\cal D}_{G'}) = \\varphi^{*}(\\sigma({\\cal D}_{G}))\n\\end{equation}\n\n\tThus, according to the diagram, $t_{{\\rm ind}_{G'}^{P'}}(\\sigma({\\cal D}_{G'})) = \\varphi^{*}(t_{{\\rm ind}_{G}^{P}}(\\sigma({\\cal D}_{G})))$. 
If we use the Atiyah-Singer index theorem (\\ref{Atiyah-Singer-D}), it can be noticed that\\\\\n\\begin{equation}\n{\\rm index}\\;{\\cal D}_{G'} = \\varphi^{*}({\\rm index}\\;{\\cal D}_{G}).\\Box\n\\end{equation}\n\n\\begin{observacao}\n{\\rm The condition that ${\\cal D}_{G}$ and ${\\cal D}_{G'}$ are elliptic partial differential operators is necessary in our proof of Proposition 3.1 in order to apply the Atiyah-Singer index theorem given by Proposition 1.3.}\n\\end{observacao}\n\n\tThe topological condition given in \\cite{Doria-81} for checking whether there are false gauge field copies does not impose that the manifold $M$ should be compact, or that the Lie group must be compact (a very common situation in gauge theories). But when we apply the Atiyah-Singer index theorem to obtain the analytical condition for false copies, we must assume that both $M$ and $G$ are compact.$\\Box$\n\n\\section{Conclusions}\n\n\tThere are several points of contact between classical physics and $K$-theory: a method to prove the index theorem based on the asymptotics of the heat equation \\cite{Atiyah-Calor} \\cite{Gilkey}; and the physical interpretation of nonvanishing characteristic classes in terms of magnetic monopoles, solitons and instantons \\cite{Mayer}. We present here a new $K$-theoretical result with consequences for physics: an analytical condition for the existence of `false' gauge field copies, obtained from a topological condition.\n\n\tOur result has some limitations: it refers only to false copies; it is imposed that $M$ and $G$ are compact; and it is imposed that $G$ is semi-simple. We believe that it is possible to extend our results while eliminating those restrictions. (One possibility would be to modify the geometry of an irreducible principal fiber bundle in such a way as to handle true copies as false copies (see \\cite{Costa-Amaral} on this). That is a possible way to define an analytical condition for generic copies through the use of the Atiyah-Singer index theorem. On the structure of $M$, we just recall that $K$-theory has a technique for dealing with locally compact spaces. Moreover, a possible way of generalizing our results to any Lie group (compact or non-compact) amounts (we hope) to the use of some kind of compactification \\cite{Dikranjan}.)\n\n\tAnother side-result of the present work has to do with the categorical approach that underlies $K$-theory. When we formalize gauge theory and the gauge copy problem with tools from category theory, we notice that our categorical formalization ends up being too complicated when compared with the traditional approach based on set theory \\cite{da Costa-92}.\n\n\tMoreover, we think it is possible to obtain the same result as Proposition \\ref{main} without $K$-theoretical concepts. Firstly, these results make no reference to $K$-theoretical elements; the index theorem only appears in the proof of those propositions. Secondly, there is an analytical (non-topological) proof of the index theorem by the use of Zeta functions $\\zeta(s) = \\sum\\lambda^{-s}$ ($\\lambda$ denotes eigenvalues of an operator) \\cite{Atiyah-Bott}. Therefore, we conjecture it may be possible to prove the analytical condition that we established for the existence of false gauge field copies in a similar way.\n\n\tStill another possibility has to do with proving our results with the help of the asymptotics of the heat equation \\cite{Gilkey}, since that technique was also used to prove the index theorem. 
But those are questions to be answered in future papers.\n\n\tAll those questions can be summed up in the following: the gauge field copy phenomenon is a remarkable feature of nonlinear gauge fields without a clear-cut physical interpretation. Copied fields belong to a nowhere dense, bifurcation-like domain in the space of all gauge fields in the usual topology \\cite{Doria-84}. Copied fields are actually involved in a (possible) symmetry-breaking mechanism that remains strictly within the bounds of classical field theory \\cite{Amaral}. Yet their contribution to path-integral computations remains unknown -- for instance, if Feynman integrals are understood as Kurzweil-Henstock (KH) integrals \\cite{Muldowney} \\cite{Peng-Yee}, the rather unusual topology of the space of all KH-integrable functions may lead to a new interpretation of the Feynman integrals over field and potential spaces in gauge theory, when given an adequate KH-structure.\n\n\tSo, when we try to establish a connection between Atiyah-Singer theory and gauge copies, we are trying to expand the perspectives from which we can observe and try to understand the field copy phenomenon.\n\n\\section{Acknowledgments}\n\n\tThe present work was partially written while A.S.S. was visiting Stanford University as a post-doctoral fellow. A.S.S. wishes to thank Professor Patrick Suppes for his hospitality. The authors also acknowledge support from CNPq, CAPES and Fapesp (Brazil). This paper was computer-formatted with the help of {\\em Project Griffo} at Rio's Federal University.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\bf{Introduction}}\nThe idea of paracompactness given by Dieudonn\\'{e} in the year 1944 came out as a generalization of the notion of compactness. It has many implications in the field of differential geometry and plays an important role in metrization theory. \nThe concept of the Alexandroff space \\cite{ASF} $($i.e., a $\\sigma$-space or simply a space$)$ was introduced by A. D. Alexandroff in the year 1940 as a generalization of a topological space in which unions of open sets are required to be open only for countable collections of open sets instead of arbitrary collections. \nAnother kind of generalization of a topological space is the idea of a bitopological space introduced by J.C. Kelly in \\cite{JCK}. Using these ideas Lahiri and Das \\cite{BN} introduced the idea of a bispace as a generalization of a $\\sigma$-space. More work on topological properties was carried out by many authors $( $\\cite{WJP}, \\cite{ILR}, \\cite{JW} etc.$)$ in the setting of a bitopological space and in the setting of a bispace $($\\cite{AKB2}, \\cite{AKB3}, \\cite{AKB4}, \\cite{RM}, \\cite{RM3} etc.$)$. Datta \\cite{MCD} studied the idea of paracompactness in a bitopological space and tried to obtain analogues of the topological results on paracompactness given by Michael \\cite{Michael}. In 1986 Raghavan and Reilly \\cite{TGR} formulated the idea of paracompactness in a bitopological space in a different way. Later, in 2008, M. K. Bose et al \\cite{MKBOSE} studied the same in a bitopological space as a generalization of pairwise compactness. 
Here we study the idea of pairwise paracompactness using the ideas given by Bose et al \\cite{MKBOSE} and investigate some of its properties in the setting of the more general structure of a bispace.\n\\section{\\bf{Preliminaries}}\\label{preli}\n\\begin{defi}\\cite{ASF} \nA set $X$ is called an Alexandroff space or $\\sigma$-space or simply a space if in it is chosen a system $\\mathcal{F}$ of subsets of $X$ satisfying the following axioms:\\\\\n$\\left(i\\right)$ The intersection of a countable number of sets in $\\mathcal{F}$ is a set in $\\mathcal{F}$.\\\\\n$\\left(ii\\right)$ The union of a finite number of sets from $\\mathcal{F}$ is a set in $\\mathcal{F}$.\\\\\n$\\left(iii\\right)$ The void set and $X$ are in $\\mathcal{F}$.\\end{defi}\nSets of $\\mathcal F$ are called closed sets. Their complementary sets are called open. It is clear that instead of closed sets in the definition of a space, one may put open sets subject to the conditions of countable summability, finite intersectability and the condition that $X$ and the void set should be open.\\\\ \nThe collection of such open sets will sometimes be denoted by $\\mathcal{P}$ and the space by $(X, \\mathcal{P})$.\nIt is noted that $\\mathcal{P}$ is not a topology in general, as can be seen by taking $X=\\mathbb{R}$, the set of real numbers, and $\\mathcal{P}$ as the collection of all $F_\\sigma$ sets in $\\mathbb{R}$.\\\\\n\\begin{defi}\\cite{ASF} \nTo every set $M$ we correlate its closure $\\overline M$ as the intersection of all closed sets containing $M$.\\end{defi} \nGenerally the closure of a set in a $\\sigma$-space is not a closed set. We denote the closure of a set $M$ in a space $(X, \\mathcal{P})$ by $\\mathcal{P}$-cl$(M)$ or cl$(M)$ or simply $\\overline M$ when there is no confusion about $\\mathcal{P}$.\nThe ideas of limit points, derived sets, the interior of a set, etc. in a space are similar to those in a topological space and have been thoroughly discussed in \\cite{PD}.\n\\begin{defi} \\cite{AKB}\nLet $(X, \\mathcal{P})$ be a space. A family of open sets $B$ is said to form a base $($open$)$ for $\\mathcal{P}$ if and only if every open set can be expressed as a countable union of members of $B$.\\end{defi}\n\\begin{thm} \\cite{AKB}\n A collection of subsets $B$ of a set $X$ forms an open base of a suitable space structure $\\mathcal{P}$ of $X$ if and only if \\\\\n\\noindent 1) the null set $\\phi \\in B$;\\\\\n\\noindent 2) $X$ is the countable union of some sets belonging to $B$;\\\\\n\\noindent 3) the intersection of any two sets belonging to $B$ is expressible as a countable union of some sets belonging to $B$.\\\\\n\\end{thm}\n\\begin{defi} \\cite{BN}\nLet $X$ be a non-empty set. If $\\mathcal{P}$ and $\\mathcal{Q}$ are two collections of subsets of $X$ such that $(X,\\mathcal{P})$ and $(X,\\mathcal{Q})$ are two spaces, then $X$ is called a bispace. \n\\end{defi}\n\\begin{defi} \\cite{BN} \nA bispace $(X,\\mathcal{P},\\mathcal{Q})$ is called pairwise $T_{1}$ if for any two distinct points $x$, $y$ of $X$, there exist $U\\in \\mathcal{P}$ and $V\\in\\mathcal{Q}$ such that $x\\in U$, $y\\notin U$ and $y\\in V$, $x\\notin V$.\n\\end{defi}\n\\begin{defi} \\cite{BN}\nA bispace $(X,\\mathcal{P},\\mathcal{Q})$ is called pairwise Hausdorff if for any two distinct points $x$, $y$ of $X$, there exist $U\\in\\mathcal{P}$ and $V\\in\\mathcal{Q}$ such that $x\\in U$, $y\\in V$, $U\\cap V=\\phi$. 
\n\\end{defi}\n\\begin{defi} \\cite{BN} \nIn a bispace $(X,\\mathcal{P},\\mathcal{Q})$, $\\mathcal{P}$ is said to be regular with respect to $\\mathcal{Q}$ if for any $x\\in X$ and a $\\mathcal{P}$-closed set $F$ not containing $x$, there exist $U\\in \\mathcal{P}$, $V\\in\\mathcal{Q}$ such that $x\\in U$, $F\\subset V$, $U\\cap V=\\phi$. $(X,\\mathcal{P},\\mathcal{Q})$ is said to be pairwise regular if $\\mathcal{P}$ and $\\mathcal{Q}$ are regular with respect to each other. \n\\end{defi}\n\\begin{defi} \\cite{BN} \nA bispace $(X,\\mathcal{P},\\mathcal{Q})$ is said to be pairwise normal if for any $\\mathcal{P}$-closed set $F_{1}$ and $\\mathcal{Q}$-closed set $F_{2}$ satisfying $F_{1}\\cap F_{2}=\\phi$, there exist $G_{1}\\in \\mathcal{P}$, $G_{2}\\in \\mathcal{Q}$ such that $F_{1}\\subset G_{2}$, $F_{2}\\subset G_{1}$, $G_{1}\\cap G_{2}=\\phi$.\n\\end{defi}\n\\section{\\bf{Pairwise paracompactness}}\n\\noindent We called a space $($ or a set $)$ is bicompact \\cite{BN} if every open cover of it has a finite subcover. Also similarly as \\cite{BN} a cover B of $(X,\\mathcal{P},\\mathcal{Q})$ is said to be pairwise open if $B \\subset \\mathcal{P} \\cup \\mathcal{Q}$ and B contains at least one nonempty member from each of $\\mathcal{P}$ and $\\mathcal{Q}$. Bourbaki and many authors defined the term paracompactness in a topological space including the requirement that the space is Hausdorff. Also in a bitopological space some authors follow this idea. But in our discussion we shall follow the convention as adopted in Munkresh\\cite{JRM} to define the following terminologies as in the case of a topological space.\n\\begin{defi} (cf.\\cite{JRM})\nIn a space $X$ a collection of subsets $\\mathcal{A}$ is said to be locally finite in $X$ if every point has a neighborhood that intersects only a finitely many elements of $\\mathcal{A}$.\n\\end{defi} \nSimilarly a collection of subsets $\\mathcal{B}$ in a space $X$ is said to be countably locally finite in $X$ if $\\mathcal{B}$ can be expressed as a countable union of locally finite collection. \n\\begin{defi} (cf.\\cite{JRM})\nLet $\\mathcal{A}$ and $\\mathcal{B}$ be two covers of a space $X$, $\\mathcal{B}$ is said to be a refinement of $\\mathcal{A}$ if for $B \\in \\mathcal{B}$ there exists a $A \\in \\mathcal{A}$ containing $B$.\n\\end{defi} \nWe call $\\mathcal{B}$ an open refinement of $\\mathcal{A}$ if the elements of $\\mathcal{B}$ are open and similarly if the elements of $\\mathcal{B}$ are closed $\\mathcal{B}$ is said to be a closed refinement.\n\\begin{defi} (cf.\\cite{JRM})\nA space $X$ is said to be paracompact if every open covering $\\mathcal{A}$ of $X$ has a locally finite open refinement $\\mathcal{B}$ that covers $X$.\n\\end{defi}\nAs in the case of a topological space \\cite{MCD, MKBOSE} we define the following terminologies. Let $\\mathcal{A}$ and $\\mathcal{B}$ be two pairwise open covers of a bispace $(X,\\mathcal{P},\\mathcal{Q})$. Then $\\mathcal{B}$ is said to be a parallel refinement \\cite{MCD} of $\\mathcal{A}$ if for any $\\mathcal{P}$-open set$(\\text{respectively } \\mathcal{Q} \\text{-open set})$ $B$ in $\\mathcal{B}$ there exists a $\\mathcal{P}$-open set$(\\text{respectively } \\mathcal{Q} \\text{-open set})$ $A$ in $\\mathcal{A}$ containing $B$. Let $\\mathcal{U}$ be a pairwise open cover in a bispace $(X, \\mathcal{P}_{1}, \\mathcal{P}_{2})$. 
If $x$ belongs to $X$ and $M$ be a subset of $X$, then by ``$M$ is $\\mathcal{P}_{\\mathcal{U} x}$-open\" we mean $M$ is $\\mathcal{P}_{1}$-open$(\\text{respectively } \\mathcal{P}_{2} \\text{-open set})$ if $x$ belongs to a $\\mathcal{P}_{1}$-open set$(\\text{respectively } \\mathcal{P}_{2} \\text{-open set})$ in $\\mathcal{U}$.\n\\begin{defi} (cf. \\cite{MKBOSE})\nLet $\\mathcal{A}$ and $\\mathcal{B}$ be two pairwise open covers of a bispace $(X, \\mathcal{P}_{1}, \\mathcal{P}_{2})$. Then $\\mathcal{B}$ is said to be a locally finite refinement of $\\mathcal{A}$ if for each $x$ belonging to $X$, there exists a $\\mathcal{P}_{\\mathcal{A} x}$-open open neighborhood of $x$ intersecting only a finite number of sets of $\\mathcal{B}$.\n\\end{defi}\n\\begin{defi} (cf. \\cite{MKBOSE})\nA bispace $(X, \\mathcal{P}_{1}, \\mathcal{P}_{2})$ is said to be pairwise paracompact if every pairwise open cover of $X$ has a locally finite parallel refinement.\n\\end{defi}\n\\noindent To study the notion of paracompactness in a bispace the idea of pairwise regular and strongly pairwise regular spaces plays significant roll as discussed below.\\\\\n\\indent As in the case of a bitopological space a bispace $(X, \\mathcal{P}_{1}, \\mathcal{P}_{2})$ is said to be strongly pairwise regular\\cite{MKBOSE} if $(X, \\mathcal{P}_{1}, \\mathcal{P}_{2})$ is pairwise regular and both the spaces $(X, \\mathcal{P}_{1})$ and $(X, \\mathcal{P}_{2})$ are regular.\\\\\n\\indent Now we present two examples, the first one is of a strongly regular bispace and the second one is of a pairwise regular bispace without being a strongly pairwise regular bispace.\\\\ \n\\textbf{Example 3.1.}\nLet $X=\\mathbb{R}$ and $(x,y)$ be an open interval in $X$. We consider the collection $\\tau_{1}$ with sets $A$ in $\\mathbb{R}$ such that either $(x,y) \\subset \\mathbb{R} \\setminus A$ or $A \\cap (x,y)$ can be expressed as some union of open subintervals of $(x,y)$ and $\\tau_{2}$ be the collection of all countable subsets in $(x,y)$. Also if $\\tau$ be the collection of all countable union of members of $\\tau_{1} \\cup \\tau_{2}$ then clearly $(X,\\tau)$ is a $\\sigma$-space but not a topological space. Also consider the bispace $(X, \\tau, \\sigma)$, where $\\sigma$ is the usual topology on $X$. \\\\\n\\indent We first show that $(X,\\tau)$ is regular. Let $p \\in X$ and $P$ be any $\\tau$-closed set not containing $p$. Then $A =\\{p \\}$ is a $\\tau$-open set containing $p$. Also $A =\\{p \\}$ is closed in $(X,\\tau)$ because if $p \\notin (x, y)$ then $A^{c} \\cap (x,y)= (x,y)$ and if $p \\in (x, y)$ then $A^{c} \\cap (x,y)= (x,p) \\cup (p,y)$ and hence $A^{c}$ is a $\\tau$-open set containing $P$. \\\\\n\n\\indent Now we show that the bispace $(X, \\tau , \\sigma)$ is pairwise regular. Let $p\\in X$ and $M$ be a $\\tau$-closed set not containing $p$. Then $A =\\{p \\}$ is a $\\tau$-open set containing $p$ and also as every singleton set is closed in $(X,\\sigma)$, $A^{c}$ is a $\\sigma$-open set containing $M$.\\\\ \n \\indent Now let $p \\in X$ and $P$ be a $\\sigma$-closed set not containing $p$. Now consider the case when $P \\cap (x,y)=\\phi$ then $P$ is a $\\tau$-open set containing $P$ and $P^{c}$ is a $\\sigma$-open set containing $p$.\\\\\n\\indent Now we consider the case when $P \\cap (x, y) \\neq \\phi$. 
Since $p \\notin P$, $P^{c}$ is a $\\sigma$-open set containing $p$ and hence there exists an open interval $I$ containing $p$ be such that $p \\in I \\subset P^{c}$ and $p \\in \\overline{I} \\subset P^{c}$, where $\\overline{I}$ is the closer of $I$ with respect to $\\sigma$. If $I$ intersects $(x,y)$ then let $I_{1} = (x,y) \\setminus \\overline{I}$. Clearly $I_{1}$ is non empty because $ P \\cap (x,y) \\neq \\phi$. Also $\\overline{I} \\subset P^{c}$ and hence $(x,y) \\setminus P^{c} \\subset (x,y) \\setminus \\overline{I}$ and its follows that $P \\cap (x, y) \\subset I_{1}$. So clearly $P \\cup I_{1}$ is a $\\tau$-open set containing $P$ and $I$ is a $\\sigma$-open set containing $p$ and which are disjoint. Again if $I$ does not intersect $(x,y)$ then $P \\cup (x,y)$ is a $\\tau$-open set containing $P$ and $I$ itself a $\\sigma$-open set containing $p$ and which are disjoint. Therefore the bispace $(X, \\tau , \\sigma)$ is strongly pairwise regular.\\\\ \n\\noindent \\textbf{Example 3.2.}\nLet $X= \\mathbb{R}$ and $(X, \\tau_{1}, \\tau_{2})$ be a bispace where $(X, \\tau_{1})$ is cocountable topological space and $\\tau_{2}= \\{ X, \\phi \\} \\cup \\{ \\text{countable subsets of real numbers} \\}$. Clearly $\\tau_{2}$ is not a topology and hence $(X, \\tau_{1}, \\tau_{2})$ is not a bitopological space. We show that $(X, \\tau_{1}, \\tau_{2})$ is a pairwise regular bispace but not a strongly pairwise regular bispace. Let $p \\in X$ and $A$ be a $\\tau_{1}$-closed set not containing $p$. Then clearly $A$ itself a $\\tau_{2}$-open set containing $A$ and $A^{c}$ is a $\\tau_{1}$-open set containing $p$ and clearly they are disjoint.\\\\\n\\indent Similarly if $B$ is a $\\tau_{2}$-closed set such that $p \\notin B$, then $B$ being a complement of a countable set is $\\tau_{1}$-open set containing $B$. Also $B^{c}$ being countable is $\\tau_{2}$-open set containing $p$.\\\\\n\\indent Now let $p \\in X$ and $P$ be a closed set in $(X, \\tau_{2})$ such that $p \\notin P$. Then $P$ must be a complement of a countable set in $\\mathbb{R}$ and hence it must be a uncountable set. So clearly the only open set containing $P$ is $\\mathbb{R}$ itself. Therefore $(X, \\tau_{2})$ is not regular and hence $(X, \\tau_{1}, \\tau_{2})$ can not be strongly pairwise regular.\n\\begin{Note}\nIn a bitopological space, pairwise Hausdorffness and pairwise paracompactness together imply pairwise normality but similar result holds in a bispace if an additional condition C$(1)$ holds. \n\\end{Note}\n\\begin{thm}\nLet $(X,\\mathcal{P},\\mathcal{Q})$ be a bispace, which is pairwise Hausdorff and pairwise paracompact and satisfies the condition C$(1)$ as stated below then it is pairwise normal.\\end{thm}\n\\noindent C$(1):$ If $A\\subset X$ is expressible as an arbitrary union of $\\mathcal{P}$-open sets and $A \\subset B$, $B$ is an arbitrary intersection of $\\mathcal{Q}$-closed sets, then there exists a $\\mathcal{P}$-open set $K$, such that $A \\subset K \\subset B$, the role of $\\mathcal{P}$ and $\\mathcal{Q}$ can be interchangeable.\n\\begin{proof}\nWe first show that $X$ is pairwise regular. So let us suppose $F$ be a $\\mathcal{P}$-closed set not containing $x \\in X$. Since $X$ is pairwise Hausdorff for $\\xi \\in F$, there exists a $U_{\\xi}\\in \\mathcal{P}$ and $V_{\\xi}\\in \\mathcal{Q}$, such that $x \\in U_{\\xi}$ and $\\xi \\in V_{\\xi}$ and $U_{\\xi} \\cap V_{\\xi} = \\phi$. Then the collection $\\{V_{\\xi} : \\xi \\in F\\} \\cup (X \\setminus F)$ forms a pairwise open cover of $X$. 
Therefore it has a locally finite parallel refinement $\\mathcal{W}$. Let $H= \\cup \\{W \\in \\mathcal{W} : W\\cap F\\neq \\phi \\}$. Now $x\\in X \\setminus F$ and $X \\setminus F$ is $\\mathcal{P}$-open set and hence there exists a $\\mathcal{P}$-open neighborhood $D$ of $x$ intersecting only a finite number of members $W_{1}, W_{2} \\dots W_{n}$ of $\\mathcal{W}$. Now if $W_{i} \\cap F = \\phi$ for all $n=1, 2 \\dots n$, then $H \\cap D = \\phi$. Therefore by C$(1)$ we must have a $\\mathcal{Q}$-open set $K$ such that $F \\subset H \\subset K \\subset D^{c}$. Hence we have a $\\mathcal{Q}$-open set $K$ containing $F$ and $\\mathcal{P}$-open set $D$ containing $x$ with $D \\cap K= \\phi$. If there exists a finite number of elements $W_{p_{1}}, W_{p_{2}} \\dots W_{p_{k}}$ from the collection $\\{W_{1}, W_{2} \\dots W_{n} \\}$ such that $W_{p_{i}} \\cap F \\neq \\phi$, then we consider $V_{\\xi _{p_{i}}}$ such that $W_{p_{i}} \\subset V_{\\xi _{p_{i}}}$, $\\xi _{p_{i}} \\in F$ and $i=1, 2 \\dots k$, since $\\mathcal{W}$ is a locally finite parallel refinement of $\\{V_{\\xi} : \\xi \\in F\\} \\cup (X \\setminus F)$. Now if $U_{\\xi_{p_{i}}}$ are the corresponding member of $V_{\\xi_{p_{i}}}$, then $x \\in D \\cap (\\bigcap^{n}_{i=1}U_{\\xi_{p_{i}}})= G(say) \\in \\mathcal{P}$. Since $\\mathcal{W}$ is a cover of $X$ it covers also $D$ and since $D$ intersects only finite number of members $W_{1}, W_{2} \\dots W_{n}$, these $n$ sets covers $D$. Now since the members $W_{p_{1}}, W_{p_{2}} \\dots W_{p_{k}}$ be such that $W_{p_{i}} \\cap F \\neq \\phi$, we have $D \\cap F \\subset \\bigcup_{i=1}^{k}W_{p_{i}}$. Now let $W_{p_{i}} \\subset V_{\\xi_{p_{i}}}$ for some $\\xi_{p_{i}} \\in F$ and consider $U_{\\xi_{p_{i}}}$ corresponding to $V_{\\xi_{p_{i}}}$ be such that $U_{\\xi_{p_{i}}} \\cap V_{\\xi_{p_{i}}}= \\phi$. Now we claim that $G \\cap F = \\phi$. If not let $y \\in G \\cap F = [D \\cap (\\bigcap^{n}_{i=1}U_{\\xi_{p_{i}}})] \\cap F= [D \\cap F]\\cap (\\bigcap^{n}_{i=1}U_{\\xi_{p_{i}}})$. Then $y \\in D \\cap F$ and hence there exists $W_{p_{i}}$ for some $i=1,2, \\dots k$ such that $y \\in W_{p_{i}} \\subset V_{\\xi_{p_{i}}}$. Also $y \\in (\\bigcap^{n}_{i=1}U_{\\xi_{p_{i}}}) \\subset U_{\\xi_{p_{i}}}$ and hence $y \\in U_{\\xi_{p_{i}}} \\cap V_{\\xi_{p_{i}}}$, which is a contradiction. So $G \\cap F = \\phi$. Now we have a $\\mathcal{P}$-open neighborhood $G$ of $x$ intersecting only a finite number of members $W_{r_{1}}, W_{r_{2}} \\dots W_{r_{k}}$ of $\\mathcal{W}$ where $W_{r_{i}} \\cap F= \\phi$. So by similar argument there exists a $\\mathcal{Q}$-open set $K$ such that $F \\subset H \\subset K \\subset G^{c}$. Thus we have a $\\mathcal{Q}$-open set $K$ containing $F$ and a $\\mathcal{P}$-open set $G$ containing $x$ such that $G \\cap K=\\phi$. \\\\\n\\indent Next let $A$ be a $\\mathcal{Q}$-closed set and $B$ be a $\\mathcal{P}$-closed set and $A\\cap B =\\phi$. Then for every $x \\in B$ and $\\mathcal{Q}$-closed set $A$ there exists $\\mathcal{P}$-open set $U_{x}$ containing $A$ and $\\mathcal{Q}$-open set $V_{x}$ containing $x$ with $U_{x} \\cap V_{x}= \\phi$. Now the collection $\\mathcal{U}= (X \\setminus B) \\cup \\{V_{x} : x \\in B \\}$ forms a pairwise open cover of $X$. Hence there exists a locally finite parallel refinement $\\mathcal{M}$ of $\\mathcal{U}$. Clearly $B \\subset Q$ where $Q= \\cup \\{ M \\in \\mathcal{M} : M\\cap B \\neq \\phi \\}$. 
Now for $x \\in X \\setminus B$, a $\\mathcal{P}$-open set there exists a $\\mathcal{P}$-open neighborhood of $x$ intersecting only a finite number of elements of $\\mathcal{M}$. Since $A \\subset X \\setminus B$, so for $x \\in A$ there exists a $\\mathcal{P}$-open neighborhood $D_{x}$ of $x$ intersecting only a finite number of elements $M_{x_{1}}, M_{x_{2}} \\dots M_{x_{n}}$ of $\\mathcal{M}$ with $M_{x_{i}} \\cap B\\neq \\phi$ for some $i=1,2,\\dots n$. Suppose if $M_{x_{i}} \\subset V_{x_{i}}$, $i=1, 2 \\dots n$ and let $P_{x}= D_{x} \\cap (\\bigcap^{n}_{i=1}U_{x_{i}})$ where $U_{x_{i}} \\cap V_{x_{i}}= \\phi$. If $M_{x_{i}} \\cap B= \\phi$ for all $i=1,2,\\dots n$, then we consider $P_{x}= D_{x}$. Now if $P= \\bigcup \\{P_{x} : x\\in A\\}$ then $A\\subset P$ and $P \\subset Q^{c}$.\\\\\n\\indent\tNow by the given condition C$(1)$ there exists a $\\mathcal{P}$-open set $R$ be such that $A \\subset P \\subset R \\subset Q^{c}$. Again by the same argument there exists a $\\mathcal{Q}$-open set $S$ be such that $B \\subset Q \\subset S \\subset R^{c}$. Hence there exists a $\\mathcal{P}$-open set $R$ containing $A$ and $\\mathcal{Q}$-open set $S$ containing $B$ with $R \\cap S= \\phi$.\n\\end{proof}\n\\begin{thm}\nIf the bispace $(X, \\mathcal{P}_{1}, \\mathcal{P}_{2})$ is strongly pairwise regular and satisfies the condition C$(2)$ given below, then the following statements are equivalent:\\\\\n$(i)$ X is pairwise paracompact.\\\\\n$(ii)$ Each pairwise open cover $\\mathcal{C}$ of $X$ has a countably locally finite parallel refinement.\\\\\n$(iii)$ Each pairwise open cover $\\mathcal{C}$ of $X$ has a locally finite refinement.\\\\\n$(iv)$ Each pairwise open cover $\\mathcal{C}$ of $X$ has a locally finite refinement $\\mathcal{B}$ such that if $B \\subset C$ where $B \\in \\mathcal{B}$ and $C \\in \\mathcal{C}$, then $\\mathcal{P}_{1}$-cl$(B) \\cup \\mathcal{P}_{2}$-cl$(B) \\subset C$.\\\\\n\\end{thm}\n\\noindent C$(2) :$ If $M \\subset X$ and $\\mathcal{B}$ is a subfamily of $\\mathcal{P}_{1} \\cup \\mathcal{P}_{2}$ such that $\\mathcal{P}_{i}$-$cl(B) \\cap M=\\phi$, for all $B \\in \\mathcal{B}$, then there exists a $\\mathcal{P}_{i}$- open set $S$ such that $M \\subset S \\subset[\\bigcup_{B\\in \\mathcal{B}}\\mathcal{P}_{i}$-$cl(B)]^{c}$.\n\\begin{proof}\n$(i) \\Rightarrow (ii)$\\\\\nLet $\\mathcal{C}$ be a pairwise open cover of $X$. Let $\\mathcal{U}$ be a locally finite parallel refinement of $\\mathcal{C}$. Then the collection $\\mathcal{V}= \\bigcup _{n=1} ^{\\infty} \\mathcal{V}_{n}$, where $\\mathcal{V}_{n} = \\mathcal{U}$ for all $n\\in \\mathbb{N}$, becomes the countably locally finite parallel refinement of $\\mathcal{C}$.\\\\\n$(ii) \\Rightarrow (iii)$\\\\\nWe consider a pairwise open cover $\\mathcal{C}$ of $X$. Let $\\mathcal{V}$ be a parallel refinement of $\\mathcal{C}$, such that $\\mathcal{V}= \\bigcup _{n=1} ^{\\infty} \\mathcal{V}_{n}$, where for each $n$ and for each $x$ there exists a $\\mathcal{P}_{\\mathcal{C}x}$-open neighborhood of $x$ intersecting only a finite number of members of $\\mathcal{V}_{n}$. For each $n \\in \\mathbb{N}$, let us agree to write $\\mathcal{V}_{n}$ as $\\mathcal{V}_{n}= \\{\\mathcal{V}_{n \\alpha} : \\alpha \\in \\wedge_{n} \\}$ and we consider $M_{n} = \\bigcup _{\\alpha \\in \\wedge_{n}} \\mathcal{V}_{n \\alpha}$, $n \\in \\mathbb{N}$. Clearly the collection $\\{ M_{n} \\}_{n \\in \\mathbb{N}}$ is a cover of $X$. 
Let $N_{n}= M_{n} - \\bigcup _{k 0$ to the diagonal of $\\mathbf{K}$ increases numerical stability and has the effect of damping the magnitude of the coefficients and thereby increasing the smoothness of $\\hat{f}(\\mathbf{x})$, with the downside that the known reference values $y_i$ are only approximately reproduced. This, however, also decreases the chance of overfitting and can lead to better generalization, i.e.\\ increased accuracy when predicting unknown values. \n\nMatrix factorization methods like Cholesky decomposition\\cite{golub2012matrix} are typically used to efficiently solve the linear problem in Eq.~\\ref{eq:krr_coefficient_relation_regularized} in closed form. However, this type of approach scales as $\\mathcal{O}(N^3)$ with the number of reference data and may become problematic for extremely large data sets. Iterative, e.g.\\ gradient-based, solvers can reduce the complexity to $\\mathcal{O}(N^2)$.\\cite{raykar2007fast} Once the coefficients have been determined, the value $y_*$ for an arbitrary input $\\mathbf{x}_*$ can be estimated according to\nEq.~\\ref{eq:kernel_regression} with $\\mathcal{O}(N)$ complexity (a sum over all $N$ reference data points is required). \n\nAlternatively, a variety of approximation techniques exploit that kernel matrices usually have a small \\emph{numerical} rank, i.e.\\ a rapidly decaying eigenvalue spectrum. This enables approximate factorizations $\\mathbf{R}\\mathbf{R}^T \\approx \\mathbf{K}$, where $\\mathbf{R}$ is either a rectangular matrix $\\in \\mathbb{R}^{N \\times M}$ with $M < N$ or sparse. As a result Eq.~\\ref{eq:krr_coefficient_relation_regularized} becomes easier so solve, albeit the result will not be exact\\cite{williams2001using, quinonero2005unifying, snelson2006sparse, rahimi2008random, rudi2017falkon}. \n\nA straightforward approach to approximate a linear system is to pick a representative or random subset of $M$ points $\\tilde{\\mathbf{x}}$ from the dataset (in principle, even arbitrary $\\tilde{\\mathbf{x}}$ could be chosen) and construct a rectangular kernel matrix $\\mathbf{K}\n_{MN}\\in\\mathbb{R}\n^{M\\times N}$ with entries $K_{MN,ij} = K(\\tilde{\\mathbf{x}}_i, \\mathbf{x}_j)$. Then, the corresponding coefficients can be obtained via the Moore-Penrose pseudoinverse\\cite{moore1920reciprocal,penrose1955generalized}\n\\begin{equation}\n\\tilde{\\boldsymbol{\\alpha}} = (1+\\lambda)^{-1}\\left(\\mathbf{K}_{MN}\\mathbf{K}_{NM}\\right)^{-1}\\mathbf{K}_{MN}\\mathbf{y}\\,,\n\\label{eq:krr_coefficient_relation_approximated}\n\\end{equation}\nwhere $\\mathbf{K}_{NM}=\\mathbf{K}_{MN}^{\\top}$. Solving Eq.~\\ref{eq:krr_coefficient_relation_approximated} scales as $\\mathcal{O}(NM^2)$ and is much less computationally demanding than inverting the original matrix in Eq.~\\ref{eq:krr_coefficient_relation_regularized}. Once the $M$ coefficients $\\tilde{\\boldsymbol{\\alpha}}$ are obtained, the model can be evaluated with $\\hat{f}(\\mathbf{x})=\\sum_M \\tilde{\\alpha}_iK(\\mathbf{x},\\tilde{\\mathbf{x}}_i)$, i.e.\\ an additional benefit is that evaluation now scales as $\\mathcal{O}(M)$ instead of $\\mathcal{O}(N)$ (see Eq.~\\ref{eq:kernel_regression}).\n\nHowever, the approximation above gives rise to an over-determined system with fewer parameters than training points and therefore reduced model capacity. Strictly speaking, the involved matrix does not satisfy the properties of a kernel matrix anymore, as it is neither symmetric nor positive semi-definite. 
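\n\nThe closed-form solution and the subset approximation discussed above can be summarized in a few lines of NumPy. The following sketch is only meant as an illustration: it assumes a Gaussian (RBF) kernel for concreteness, and the function and variable names are not taken from any particular library.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import cho_factor, cho_solve\n\ndef rbf_kernel(A, B, gamma=1.0):\n    # K(x, x') = exp(-gamma * ||x - x'||^2), used here for concreteness\n    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)\n    return np.exp(-gamma * d2)\n\ndef krr_fit(X, y, lam=1e-6, gamma=1.0):\n    # exact coefficients: solve (K + lam*I) alpha = y via Cholesky, O(N^3)\n    K = rbf_kernel(X, X, gamma)\n    return cho_solve(cho_factor(K + lam * np.eye(len(X))), y)\n\ndef krr_fit_subset(X, y, X_m, lam=1e-6, gamma=1.0):\n    # subset approximation with M inducing points:\n    # alpha = (1+lam)^(-1) (K_MN K_NM)^(-1) K_MN y, O(N M^2)\n    K_mn = rbf_kernel(X_m, X, gamma)\n    return np.linalg.solve(K_mn @ K_mn.T, K_mn @ y) / (1.0 + lam)\n\ndef krr_predict(X_new, X_ref, alpha, gamma=1.0):\n    # f(x) = sum_i alpha_i K(x, x_i); X_ref is X (exact) or X_m (subset)\n    return rbf_kernel(X_new, X_ref, gamma) @ alpha\n\\end{verbatim}\nNote that \\texttt{krr\\_fit\\_subset} works directly with the rectangular matrix $\\mathbf{K}_{MN}$, which, as discussed above, sacrifices the symmetry and positive semi-definiteness of a true kernel matrix.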
To obtain a kernel matrix that still maintains these properties, the Nystr\\\"om~\\cite{williams2001using} approximation\n\\begin{equation}\n\\mathbf{K} \\approx \\tilde{\\mathbf{K}}=\\mathbf{K}_{N M} \\mathbf{K}_{M M}^{-1} \\mathbf{K}_{N M}^{\\top}\n\\label{eq:nystroem_approx}\n\\end{equation}\ncan be used instead. Here, the sub-matrix $\\mathbf{K}_{M M}$ is a true kernel matrix between all inducing points $\\tilde{\\mathbf{x}}_i$. Using the Woodbury matrix identity\\cite{cutajar2016preconditioning}, the regularized inverse is given by\n\\begin{equation}\n\\begin{aligned}\n&(\\tilde{\\mathbf{K}}+\\lambda \\mathbf{I}_N)^{-1} = \\lambda^{-1}[\\mathbf{I}_N-\\mathbf{K}_{N M} \\\\ \n& \\: \\left(\\lambda \\mathbf{K}_{M M}+\\mathbf{K}_{N M}^{\\top} \\mathbf{K}_{N M}\\right)^{-1} \\mathbf{K}_{N M}^{\\top}]\n\\end{aligned}\n\\label{eq:nystroem_regularized_inverse}\n\\end{equation}\nand $\\tilde{\\boldsymbol{\\alpha}} = (\\tilde{\\mathbf{K}}+\\lambda \\mathbf{I})^{-1} \\mathbf{y}$. The computational complexity of solving the Nystr\\\"om approximation is $\\mathcal{O}(M^3 + NM^2)$.\n\nIt should be mentioned that kernel regression methods are known under different names in the literature of different communities. Due to their relation to GPs, some authors prefer the name Gaussian process regression (GPR). Others favor the term kernel ridge regression (KRR), since determining the coefficients with Eq.~\\ref{eq:krr_coefficient_relation_regularized} corresponds to solving a least-squares objective with $L^2$-regularization in the kernel feature space $\\phi$ and is similar to ordinary ridge regression\\cite{tikhonov1977solutions}. Sometimes, the method is also referred to as reproducing kernel Hilbert space (RKHS) interpolation, since Eq.~\\ref{eq:kernel_regression} ``interpolates'' between known reference values (when coefficients are determined with $\\lambda=0$, all known reference values are reproduced exactly). All these methods are formally equivalent and essentially differ only in the philosophical manner the relevant equations are derived. The most important concepts discussed in this section are summarized visually in Fig~\\ref{fig:kernel_basics}.\n\n\\subsubsection{Artificial neural networks}\n\\label{subsubsec:artificial_neural_networks}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{neuralnets.pdf}\n \\caption{Schematic representation of the mathematical concepts underlying artificial (feed-forward) neural networks.\n \\textbf{A}: A single artificial neuron can have an arbitrary number of inputs and outputs. Here, a neuron that is connected to two inputs $i_1$ and $i_2$ with ``synaptic weights'' $w_1$ and $w_2$ is depicted. The bias term $b$ can be thought of as the weight of an additional input with a value of 1. Artificial neurons compute the weighted sum of their inputs and pass this value through an activation function $\\sigma$ to other neurons in the neural network (here, the neuron has three outputs with connection weights $w'_1$, $w'_2$, and $w'_3$). \\textbf{B}: A possible activation function $\\sigma(x)$. The bias term $b$ effectively shifts the activation function along the $x$-axis. Many non-linear functions are valid choices, but the most popular are sigmoid transformations such as $\\tanh(x)$ or (smooth) ramp functions, e.g.\\ $\\mathrm{max}(0,x)$ or $\\ln(1+e^x)$. 
\\textbf{C}: Artificial neural network with a single hidden layer of three neurons (gray) that maps two inputs $x_1$ and $x_2$ (blue) to two outputs $y_1$ and $y_2$ (yellow), see Eq.~\\ref{eq:single_layer}. For regression tasks, the output neurons typically use no activation function. Computing the weighted sums for the neurons of each layer can be efficiently implemented as a matrix vector product. Some entries of the weight matrices ($\\mathbf{W}$ and $\\mathbf{W'}$) and bias vectors ($\\mathbf{b}$ and $\\mathbf{b'}$) are highlighted in color with the corresponding connection in the diagram. \\textbf{D}: Schematic depiction of a \\emph{deep} neural network with $L$ hidden layers (Eq.~\\ref{eq:deep_feedforward_nn}). Compared to using a single hidden layer with many neurons, it is usually more parameter-efficient to connect multiple hidden layers with fewer neurons sequentially.}\n \\label{fig:nn_basics}\n\\end{figure*}\nOriginally, artificial neural networks (NNs) were, as suggested by their name, intended to model the intricate networks formed by biological neurons\\cite{mcculloch1943logical}. Since then, they have become a standard ML algorithm\\cite{mcculloch1943logical,kohonen1988introduction,abdi1994neural,bishop1995neural,clark1999neural,ripley2007pattern,haykin2009neural,lecun2012efficient,theodoridis2020machine} only remotely related to their original biological inspiration. In the simplest case, the fundamental building blocks of NNs are \ndense (or ``fully-connected'') layers -- linear transformations from input vectors\n$\\mathbf{x}\\in \\mathbb{R}^{n_{\\rm in}}$ to output vectors\n$\\mathbf{y}\\in \\mathbb{R}^{n_{\\rm out}}$ according to\n\\begin{equation}\n\\mathbf{y} = \\mathbf{W}\\mathbf{x} + \\mathbf{b}\\,,\n\\label{eq:dense_layer}\n\\end{equation}\nwhere both weights $\\mathbf{W}\\in \\mathbb{R}^{n_{\\rm out} \\times n_{\\rm in}}$ and biases $\\mathbf{b}\\in \\mathbb{R}^{n_{\\rm out}}$ are parameters, and $n_{\\rm in}$ and $n_{\\rm out}$ denote the\nnumber of dimensions of $\\mathbf{x}$ and $\\mathbf{y}$, respectively. Evidently, a single dense layer can only express linear functions. Non-linear relations between inputs and outputs can only be modeled when at least two dense layers are stacked and combined with a non-linear \\emph{activation function} $\\sigma$:\n\\begin{equation}\n\\label{eq:single_layer}\n\\begin{aligned}\n\\mathbf{h} &= \\sigma\\left(\\mathbf{W}\\mathbf{x} + \\mathbf{b}\\right)\\,,\\\\\n\\mathbf{y} &= \\mathbf{W'}\\mathbf{h} + \\mathbf{b'}\\,.\n\\end{aligned}\n\\end{equation}\nProvided that the number of dimensions of the ``hidden layer'' $\\mathbf{h}$ is large enough, this arrangement can approximate any mapping between inputs $\\mathbf{x}$ and outputs $\\mathbf{y}$ to arbitrary precision, i.e.\\ it is a general function approximator\\cite{gybenko1989approximation,hornik1991approximation}.\n\nIn theory, \\emph{shallow} NNs as shown above are sufficient to approximate any functional relationship. 
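As a concrete illustration of Eq.~\\ref{eq:single_layer}, the forward pass of such a single-hidden-layer network can be written in a few lines. The following NumPy sketch uses randomly initialized (untrained) parameters and a $\\tanh$ activation; the layer sizes mirror panel \\textbf{C} of Fig.~\\ref{fig:nn_basics} and all names are illustrative:\n\\begin{verbatim}\nimport numpy as np\n\ndef shallow_nn(x, W, b, W2, b2):\n    # h = sigma(W x + b),  y = W' h + b'   (cf. Eq. single_layer)\n    h = np.tanh(W @ x + b)\n    return W2 @ h + b2\n\nrng = np.random.default_rng(0)\nn_in, n_hidden, n_out = 2, 3, 2\nW  = rng.normal(size=(n_hidden, n_in))    # hidden-layer weights\nb  = np.zeros(n_hidden)                   # hidden-layer biases\nW2 = rng.normal(size=(n_out, n_hidden))   # output-layer weights\nb2 = np.zeros(n_out)                      # output-layer biases\n\nx = np.array([0.3, -1.2])          # inputs (x1, x2)\ny = shallow_nn(x, W, b, W2, b2)    # outputs (y1, y2)\n\\end{verbatim}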
In practice, however, \\emph{deep} NNs with multiple hidden layers are often superior and were shown to be exponentially more parameter-efficient.\\cite{eldan2016power} To construct a deep NN, $L$ hidden layers are combined sequentially\n\\begin{equation}\n\\label{eq:deep_feedforward_nn}\n\\begin{aligned}\n\\mathbf{h}_1 &= \\sigma\\left(\\mathbf{W}_1\\mathbf{x} + \\mathbf{b}_1\\right)\\,,\\\\\n\\mathbf{h}_2 &= \\sigma\\left(\\mathbf{W}_2\\mathbf{h}_1 + \\mathbf{b}_2\\right)\\,,\\\\\n&\\vdots\\\\\n\\mathbf{h}_L &= \\sigma\\left(\\mathbf{W}_L\\mathbf{h}_{L-1} + \\mathbf{b}_L\\right)\\,,\\\\\n\\mathbf{y} &= \\mathbf{W}_{L+1}\\mathbf{h}_L + \\mathbf{b}_{L+1}\\,,\n\\end{aligned}\n\\end{equation}\nmapping the input $\\mathbf{x}$ to several intermediate feature representations $\\mathbf{h}_l$, until the output $\\mathbf{y}$ is obtained by a linear regression on the features $\\mathbf{h}_L$ in the final layer. For PES construction, the NN typically maps a representation of chemical structure $\\mathbf{x}$ to a one-dimensional output representing the energy. In contrast to the coefficients $\\boldsymbol{\\alpha}$ in kernel methods (see Eq.~\\ref{eq:krr_coefficient_relation_regularized}), the parameters $\\{\\mathbf{W}_l, \\mathbf{b}_l\\}_{l=1}^{L+1}$ of an NN cannot be fitted in closed form. Instead, they are initialized randomly and optimized (usually using a variant of stochastic gradient descent) to minimize a loss function that measures the discrepancy between the output of the NN and the reference data.\\cite{montavon2012neural} A common choice is the mean squared error (MSE), which is also used in kernel methods. During training, the loss and its gradient are estimated from randomly drawn batches of training data, making each step independent of the number of training data $N$. On the other hand, finding the coefficients for kernel methods scales as $\\mathcal{O}(N^3)$ due to the need to invert the $N\\times N$ kernel matrix. Evaluating an NN according to Eq.~\\ref{eq:deep_feedforward_nn} for a single input $\\mathbf{x}$ scales linearly with respect to the number of model parameters. The same is true for kernel methods, but here the number of model parameters is tied to the number of reference data $N$ used for training the model (see Eq.~\\ref{eq:kernel_regression}), which means that evaluating kernel methods scales as $\\mathcal{O}(N)$. As the evaluation cost of NNs is independent of $N$ and only depends on the chosen architecture, they are typically the method of choice for learning large datasets. A schematic overview of the mathematical concepts behind NNs is given in Fig.~\\ref{fig:nn_basics}.\n\n\\subsubsection{Model selection: How to choose hyperparameters}\n\\label{subsubsec:model_selection_how_to_choose_hyperparameters}\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{cross_validation.png}\n\t\\caption{Overview of the model selection process.}\n\t\\label{fig:cross_validation} \n\\end{figure*}\n\nIn addition to the parameters that are determined when learning an ML model for a given dataset, e.g.\\ the weights $\\mathbf{W}$ and biases $\\mathbf{b}$ in NNs or the regression coefficients $\\boldsymbol{\\alpha}$ in kernel methods, many models contain hyperparameters that need to be chosen before training. They allow a given model to be tuned to prior beliefs about the dataset\/underlying physics and thus play a significant role in how a model generalizes to different data patterns.
Two types of hyperparameters can be distinguished: The first kind influences the composition of the model itself, such as the type of kernel or the NN architecture, whereas the second kind affects the training procedure and thus the final parameters of the trained model. Examples for hyperparameters are the width (number of neurons per layer) and depth (number of hidden layers) of an NN, the kernel width $\\gamma$ (see Eq.~\\ref{eq:gaussian_kernel}), or the strength of regularization terms (e.g.\\ $\\lambda$ in Eq.~\\ref{eq:krr_coefficient_relation_regularized}). \n\nThe range of valid values is strongly dependent on the hyperparameter in question. For example, certain hyperparameters might need to be selected from the positive real numbers (e.g.\\ $\\gamma$ and $\\lambda$, see above), while others are restricted to positive integers or have interdependencies (such as depth and width of an NN). This is why hyperparameters are often optimized with primitive exhaustive search schemes like grid search or random search in combination with educated guesses for suitable search ranges. Common gradient-based optimization methods can typically not be applied effectively. Fortunately, model performance is fairly robust to small changes for many hyperparameters and good default values can be determined which work across many different datasets.\n\nBefore any hyperparameters may be optimized, a so-called test set must be split off from the available reference data and kept strictly separate. The remainder of the data is further divided into a training and a validation set. This is done because the performance of ML models is not judged by how well they predict the data they were trained on, as it is often possible to achieve arbitrarily small errors in this setting. Instead, the generalization error, i.e.\\ how well the model is able to predict unseen data, is taken as indicator for the quality of a model. For this reason, for every trial combination of hyperparameters, a model is trained on the training data and its performance measured on the validation set to estimate the generalization error. Finally, the best performing model is selected. To get better statistics for estimates of the generalization error, instead of splitting the remaining data (reference data excluding test set) into just two parts, it is also possible to divide it into $k$ parts (or folds). Then, a total of $k$ models is trained, each using $k-1$ folds as the training set and the last fold as validation set. This method is known as $k$-fold cross validation\\cite{hastie2009elements,hansen2013assessment}.\n\nAs the validation data influences model selection (even though it is not used directly in the training process), the validation error may give too optimistic estimates and is no reliable way to judge the true generalization error of the final model. A more realistic value can be obtained by evaluating the model on the held-out test set, which has neither direct nor indirect influence on model selection. To not invalidate this estimate, it is crucial not to further tweak any parameters or hyperparameters in response to test set performance. More details on how to construct ML models (including the selection of hyperparameters and the importance of keeping an independent test set) can be found in Section~\\ref{sec:best_practices_and_pitfalls}. 
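As a minimal illustration of this protocol, the following sketch performs a grid search with $5$-fold cross validation for a kernel ridge regression model. It uses the scikit-learn library purely for convenience (this choice, the parameter ranges, and the synthetic data are assumptions made for illustration; any library or a hand-written loop over folds works equally well):\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.kernel_ridge import KernelRidge\nfrom sklearn.model_selection import GridSearchCV, train_test_split\n\nrng = np.random.default_rng(0)\nX = rng.uniform(-3, 3, size=(300, 1))\ny = np.sin(X[:, 0]) + 0.05 * rng.normal(size=300)\n\n# Hold out a test set that is never used during model selection\nX_rest, X_test, y_rest, y_test = train_test_split(\n    X, y, test_size=0.2, random_state=0)\n\n# Estimate the generalization error of each hyperparameter\n# combination by 5-fold cross validation on the remaining data\nparam_grid = {'alpha': [1e-8, 1e-5, 1e-2],   # regularization\n              'gamma': [0.1, 1.0, 10.0]}     # kernel width\nsearch = GridSearchCV(KernelRidge(kernel='rbf'), param_grid,\n                      cv=5, scoring='neg_mean_squared_error')\nsearch.fit(X_rest, y_rest)\n\n# Only the final, selected model is evaluated on the test set\ntest_mse = np.mean((search.predict(X_test) - y_test) ** 2)\nprint(search.best_params_, test_mse)\n\\end{verbatim}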
The model selection process is summarized in Fig.~\\ref{fig:cross_validation}.\n\n\\subsection{Combining machine learning and chemistry}\n\\label{subsec:combining_ml_and_chemistry}\nThe need for ML methods often arises from the lack of theory to describe a desired mapping between input and output. A classical example for this is image classification: It is not clear how to distinguish between pictures of different objects, as it is unfeasible to formulate millions of rules by hand to solve this task. Instead, the best results are currently achieved by learning statistical image characteristics from hundreds of thousands of examples that were extracted from a large dataset representing a particular object class. From that, the classifier learns to estimate the distribution inherent to the data in terms of feature extractors with learned parameters like convolution filters that reflect different scales of the image statistics.\\cite{lee2017deep,theodoridis2020machine,bishop1995neural} This working principle represents the best approach known to date to tackle this particular challenge in the field of computer vision.\n\nOn the other hand, the benchmark for solving molecular problems is set by rigorous physical theory that provides essentially exact descriptions of the relationships of interest. While the introduction of approximations to exact theories is common practice and essential to reduce their complexity to a workable level, those simplifications are always physical or mathematical in nature. This way, the generality of the theory is only minimally compromised, albeit with the inevitable consequence of a reduction in predictive power. In contrast, statistical methods can be essentially exact, but only in a potentially very narrow regime of applicability. Thus, a main role of ML algorithms in the chemical sciences has been to shortcut some of the computational complexity of exact methods by means of empirical inference, as opposed to providing some mapping between input and output at all (as is the case for image classification). Notably, recent developments could show that machine learning can provide novel insight beyond providing efficient shortcuts of complex physical computations.\\cite{schutt2017quantum,carleo2017solving,brockherde2017bypassing,chmiela2017,schutt2019unifying,noe2019boltzmann,sauceda2019molecular,sauceda2020} \n\nForce field construction poses unique challenges that are absent from traditional ML application domains, as much more stringent demands on accuracy are placed on ML approaches that attempt to offer practical alternatives to established methods. Additionally, considerable computational cost is associated with the generation of high-level \\textit{ab initio} training data, with the consequence that practically obtainable datasets with sufficiently high quality are typically not very large. This is in stark contrast with the abundance of data in traditional ML application domains, such as computer vision, natural language processing etc. 
The challenge in chemistry, however, is to retain the generality, generalization ability and versatility of ML methods, while making them accurate, data-efficient, transferable, and scalable.\n\n\\subsubsection{Physical constraints}\n\\label{subsubsec:physical_constraints}\nTo increase data efficiency and accuracy, ML-FFs can (and should) exploit the invariances of physical systems (see Section~\\ref{subsubsec:invariances_of_physical_systems}), which provide additional information in ways that are not directly available for other ML problems. Those invariances can be used \nto reduce the function space from which the model is selected, in this manner effectively reducing the degrees of freedom for learning,\\cite{chmiela2018,anselmi2016invariance} i.e. making the learning problem easier and thus also solvable with a fraction of data. As ML algorithms are universal approximators with virtually no inherent flexibility restrictions, it is important that physically meaningful solutions are obtained. In the following, important physical constraints of such solutions and possible ways of their realization are discussed in detail. Furthermore, existing kernel-based methods and neural network architectures tailored for the construction of FFs and how they implement these physical constraints in practice are described.\n\n\\paragraph{Energy conservation} A necessary requirement for ML-FFs is that, in the absence of external forces, the total energy of a chemical system is conserved (see Section~\\ref{subsubsec:invariances_of_physical_systems}). When the potential energy is predicted by any differentiable method and forces derived from its gradient, they will be conservative by construction. However, when forces are predicted directly, this is generally not true. This makes deriving energies from force samples slightly more complicated, but this \napproach also carries some advantages. The main challenge to overcome is that not every vector field is necessarily a valid gradient field. Therefore, the learning problem cannot simply be cast in terms of a standard multiple output regression task, where the output variables are modeled without enforcing explicit correlations. \n\\begin{figure*}\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{commutative_diagramm.pdf}\n\t\\caption{Differentiation of an energy estimator (blue) versus direct force reconstruction (red). The law of energy conservation is trivially obeyed in the first case, but requires explicit \\emph{a priori} constraints in the latter scenario. The challenge in estimating forces directly lies in the complexity arising from their high $3N$-dimensionality (three force components for each of the $N$ atoms) in contrast to predicting a single scalar for the energy.}\n\t\\label{fig:e_vs_f_learning}\n\\end{figure*}\nA big advantage of predicting forces directly is that they are true quantum-mechanical observables within the BO approximation by virtue of the Hellmann-Feynman theorem\\cite{hellman1937einfuhrung,feynman1939forces}, i.e.\\ they can be calculated \\emph{analytically} and therefore at a relatively low additional cost when generating \\textit{ab initio} reference data. As a rough guideline, the computational overhead for analytic forces scales with a factor of only around $\\sim$1--7 on top of the energy calculation.\\cite{chmiela2019}. In contrast, at least $3N + 1$ energy evaluations would be necessary for a numerical approximation of the forces by using finite differences. 
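To make this cost argument concrete, the sketch below obtains conservative forces from a differentiable toy energy model with a single reverse-mode gradient evaluation and compares them to a finite-difference estimate that requires on the order of $3N$ additional energy evaluations. JAX is used for automatic differentiation and a harmonic pair potential stands in for a trained energy model; both are assumptions made purely for illustration:\n\\begin{verbatim}\nimport jax\nimport jax.numpy as jnp\n\ndef toy_energy(R):\n    # Stand-in for a differentiable (ML) energy model:\n    # harmonic terms over all atom pairs, arbitrary units.\n    i, j = jnp.triu_indices(R.shape[0], k=1)\n    d = jnp.linalg.norm(R[i] - R[j], axis=-1)\n    return jnp.sum((d - 1.5) ** 2)\n\nR = jnp.array([[0.0, 0.0, 0.0],    # Cartesian coordinates\n               [1.1, 0.0, 0.0],    # of N = 3 atoms\n               [0.0, 1.2, 0.3]])\n\n# Conservative forces from one gradient evaluation: F = -dE/dR\nforces = -jax.grad(toy_energy)(R)\n\n# Central finite differences need 2 energy evaluations per\n# coordinate, i.e. O(3N) evaluations in total\neps = 1e-4\nnum_forces = jnp.zeros_like(R)\nfor a in range(R.shape[0]):\n    for c in range(3):\n        dR = jnp.zeros_like(R).at[a, c].set(eps)\n        df = toy_energy(R + dR) - toy_energy(R - dR)\n        num_forces = num_forces.at[a, c].set(-df / (2 * eps))\n\\end{verbatim}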
For example, at the PBE0\/DFT level of theory\\cite{adamo1999toward}, calculating energy and analytical forces for an ethanol molecule takes only $\\sim$1.5 times as long as calculating just the energy (the exact value is implementation-dependent), whereas for numerical gradients, a factor of at least $\\sim$10 would be expected.\n\nAs forces provide additional information about how the energy changes when an atom is moved, they offer an efficient way to sample the PES, which is why it is desirable to formulate ML models that can make \\emph{direct} use of them in the training process. Another benefit of a direct reconstruction of the forces is that it avoids the amplification of estimation errors due to the derivative operator that would otherwise be applied to the PES reconstruction (see Fig.~\\ref{fig:e_vs_f_learning}).\\cite{chmiela2017,sauceda2019molecular,snyder2012finding}\n\n\\paragraph{Roto-translational invariance}\nA crucial requirement for ML-FFs is the rotational and translational invariance of the potential energy, i.e.\\ $E\\left(\\mathbf{R}\\right) = E\\left(\\mathcal{R}\\mathbf{R} +\\mathcal{T}\\right)$, where $\\mathcal{R}$ and $\\mathcal{T}$ are rigid rotations and translations and $\\mathbf{R}$ are the Cartesian coordinates of the atoms. As long as the representation $\\mathbf{x}(\\mathbf{R})$ of chemical structure chosen as input for the ML model is itself roto-translationally invariant, ML-FFs inherit these desired properties, and even the gradients will automatically behave in the correct equivariant way due to the outer derivative $\\partial \\mathbf{x}(\\mathcal{R}\\mathbf{R}+\\mathcal{T}) \/ \\partial \\mathbf{R} = \\mathcal{R}\\partial\\mathbf{x}(\\mathbf{R})\/\\partial\\mathbf{R}$. Pairwise distances are one example of appropriate features for constructing a representation $\\mathbf{x}$ with the desired properties. For a system with $N$ atoms, there are $\\binom{N}{2}$ different pairwise distances, which results in reasonably sized feature sets for systems with a few dozen atoms. Apart from very few pathological cases, this representation is complete, in the sense that any possible configuration of the system can be described exactly and uniquely\\cite{bartok2013representing}. However, while pairwise distances serve as an efficient parametrization of some geometry distortions like bond stretching, they are relatively inefficient in describing others, e.g.\\ rotations of functional groups. In the latter case, many distances are affected even for slight angular changes, which can pose a challenge when trying to learn the geometry-energy mapping. Complex transition paths or reaction coordinates are often better described in terms of bond and torsion angles in addition to pairwise distances. The problem is that the number of these features grows rather quickly, with $\\binom{N}{3}$ and $\\binom{N}{4}$, respectively. At that rate, the size of the feature set quickly becomes a bottleneck, resulting in models that are slow to train and evaluate. While an expert choice of relevant angles would circumvent this issue, it reduces some of the ``data-driven'' flexibility that ML models are typically appreciated for. Note that models without roto-translational invariance are practically unusable, as they will start to generate spurious linear and\/or angular momentum during dynamics simulations.\n\n\\paragraph{Indistinguishability of identical atoms}\nIn the BO approximation, the potential energy of a chemical system only depends on the charges and positions of the nuclei.
As a consequence, the PES is symmetric under permutation of atoms with the same nuclear charge. However, symmetric regions are not necessarily sampled in an unbiased way during MD simulations (see Fig.~\\ref{fig:FigBiasedSampl}). Consequently, ML-FFs that are not constrained to treat all symmetries equivalently may (due to the uneven sampling) predict different results when permuting atoms.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{Fig_BiasedSampl.pdf}\n\t\\caption{Regions of the PESs for ethanol, keto-malondialdehyde and aspirin visited during a 200~ps \\textit{ab initio} MD simulation at 500~K using the DFT\/PBE+TS level of theory~\\cite{PBE1996,tkatchenko2009accurate}. The black dashed lines indicate the symmetries of the PES. Note that regions related by symmetry are not necessarily visited equally often.}\n\t\\label{fig:FigBiasedSampl}\n\\end{figure}\nWhile it is in principle possible to arrive at a ML-FF that is symmetric with respect to permutations of same-species atoms indirectly via data augmentation\\cite{montavon2012learning,montavon2013machine} or by simply using datasets that naturally include all relevant symmetric configurations in an unbiased way, there are obvious scaling issues with this approach. It is much more efficient to impose the right constraints onto the functional form of the ML-FF, such that all relevant symmetric variants of a certain atomic configuration appear equivalent automatically. Such symmetric functions can be constructed in various ways, each of which has advantages and disadvantages.\n\nAssignment-based approaches do not symmetrize the ML-FF \\textit{per se}, but instead aim to normalize its input, such that all symmetric variants of a configuration are mapped to the same internal representation. In its most basic realization, this assignment is done heuristically, i.e.\\ by using inexact, but computationally cheap criteria. Examples for this approach are the Coulomb matrix\\cite{rupp2012fast} or the Bag-of-Bonds\\cite{hansen2015machine} descriptors, that use simple sorting schemes for that purpose. Histograms\\cite{huo2017unified, christensen2020fchl} and some density-based\\cite{bartok2013representing,hirn2017wavelet,eickenberg2017solid} approaches follow that same principle, although not explicitly. All of those schemes have in common that they compare the features in aggregate as opposed to individually. A disadvantage is that dissimilar features are likely to be compared to each other or treated as the same, which limits the accuracy of the prediction. Such weak assignments are better suited for datasets with diverse conformations rather than gathered from MD trajectories that contain many similar geometries. In the latter case, the assignment of features might change as the geometry evolves, which would lead to discontinuities in the prediction and would effectively be treated by the ML model as noise (see $\\epsilon$ in Eq.~\\ref{regressionwithnoise}).\n\nAn alternative path is to recover the true correspondence of molecular features using a graph matching approach\\cite{kriege2016valid,Vert2008}. Each input $\\mathbf{x}$ is matched to a canonical permutation of atoms $\\tilde{\\mathbf{x}} = \\mathbf{P}\\mathbf{x}$ before generating the prediction. This procedure effectively compresses the PES to one of its symmetric subdomains (see dashed black lines in Fig.(\\ref{fig:FigBiasedSampl})), but in an exact way. 
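At the heuristic end of this spectrum, the sorting idea mentioned above is easy to illustrate. The following NumPy sketch builds a Coulomb-matrix-style descriptor\\cite{rupp2012fast} and sorts its rows and columns by row norm, so that relabeling the atoms of the input leaves the descriptor unchanged (the geometry and the sorting criterion are chosen purely for illustration):\n\\begin{verbatim}\nimport numpy as np\n\ndef sorted_coulomb_matrix(Z, R):\n    # Coulomb-matrix-style descriptor, made invariant to atom\n    # relabeling by sorting rows/columns by their norm\n    N = len(Z)\n    M = np.zeros((N, N))\n    for i in range(N):\n        for j in range(N):\n            if i == j:\n                M[i, j] = 0.5 * Z[i] ** 2.4\n            else:\n                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])\n    order = np.argsort(-np.linalg.norm(M, axis=1))\n    return M[order][:, order]\n\nZ = np.array([8, 1, 1])                  # water: O, H, H\nR = np.array([[0.00, 0.00, 0.00],\n              [0.96, 0.00, 0.00],\n              [-0.24, 0.93, 0.00]])\nperm = [2, 0, 1]                         # relabel the atoms\nsame = np.allclose(sorted_coulomb_matrix(Z, R),\n                   sorted_coulomb_matrix(Z[perm], R[perm]))  # True\n\\end{verbatim}\nAs discussed above, such weak assignments can change abruptly when two row norms cross during a simulation, which is one source of the discontinuities mentioned earlier.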
Note that graph matching is in all generality an NP-complete problem which can only be solved approximately. In practice, however, several algorithms exist to ensure at least consistency in the matching process if exactness can not be guaranteed\\cite{pachauri2013solving}. A downside of this strategy is that any input must pass through a matching process, which is relatively costly, despite being approximate. Another issue is that the boundaries of the symmetric subdomains of the PES will necessarily lie in the extrapolation regime of the reconstruction in which prediction accuracy tends to degrade. As the molecule undergoes symmetry transformations, these boundaries are frequently crossed, to the detriment of prediction performance.\n\nArguably the most universal way of imposing symmetry, especially if the functional form of the model is already given, is via invariant integration over the relevant symmetry group $f_\\text{sym}(\\mathbf{x}) = \\int_{\\pi \\in \\mathcal{S}} f(\\mathbf{P}_\\pi \\mathbf{x})$. Typically, $\\mathcal{S}$ would be the permutation group and $\\mathbf{P}_\\pi$ the corresponding permutation matrix that transforms each vector of atom positions $\\mathbf{x}$. Some approaches\n\\cite{bartok2013representing, bartok2015g, de2016comparing} avoid this implicit ordering of atoms in $\\mathbf{x}$ by adopting a three-dimensional density representation of the molecular geometry defined by the atom positions, albeit at the cost of losing rotational invariance, which then must be recovered by integration. \nInvariant integration gives rise to functional forms that are truly symmetric and do not require any pre- or post-processing of the in- and output data. A significant disadvantage is however, that the cardinality of even basic symmetry groups is exceedingly high, which affects both training and prediction times. \n\nThis combinatorial challenge can be solved by limiting the invariant integral to the physical point group and fluxional symmetries that actually occur in the training dataset. Such a sub-group of meaningful symmetries can be automatically recovered and is often rather small\\cite{chmiela2019}. For example, each of the molecules benzene, toluene and azobenzene have only 12 physically relevant symmetries, whereas their full symmetric groups have orders $6!6!$, $7!8!$ and $12!10!2!$ symmetries respectively.\n\n\\subsubsection{(Symmetric) Gradient Domain Machine Learning ((s)GDML)}\n\\label{subsubsec:symmetric_gradient_domain_machine_learning}\nGradient domain machine learning (GDML) is a kernel-based method introduced as a data efficient way to obtain accurate reconstructions of flexible molecular force fields from small reference datasets of high-level \\textit{ab initio} calculations\\cite{chmiela2017}. Contrary to most other ML-FFs, instead of predicting the energy and obtaining forces by derivation with respect to nuclear coordinates, GDML predicts the forces directly. As mentioned in Section~\\ref{subsubsec:physical_constraints}, forces obtained in this way may violate energy conservation. 
To ensure conservative forces, the key idea is to use a kernel $\\mathbf{K}\\left(\\mathbf{x},\\mathbf{x}^{\\prime}\\right) = \\nabla_{\\mathbf{x}}K_{E}\\left(\\mathbf{x},\\mathbf{x}^{\\prime}\\right)\\nabla_{\\mathbf{x}^{\\prime}}^{\\top}$ that models the forces $\\mathbf{F}$ as a transformation of an unknown potential energy surface $E$ such that\n\\begin{equation}\n\\begin{aligned}\n\\boldsymbol{\\mathbf{F}}&=-\\nabla E\\\\&\\sim\\mathcal{G}\\mathcal{P}\\left[-\\nabla\\mu_{E}(\\mathbf{x}),\\nabla_{\\mathbf{x}}K_{E}\\left(\\mathbf{x},\\mathbf{x}^{\\prime}\\right)\\nabla_{\\mathbf{x}^{\\prime}}^{\\top}\\right]\\,.\n\\end{aligned}\n\\label{eq:gdml_key_idea}\n\\end{equation}\nHere, $\\mu_{E}: \\mathbb{R}^{d} \\rightarrow \\mathbb{R}$ and $K_{E}: \\mathbb{R}^{d} \\times \\mathbb{R}^{d} \\rightarrow \\mathbb{R}$ are the prior mean and covariance functions of the latent energy-based Gaussian process $\\mathcal{G}\\mathcal{P}$, respectively. The descriptor of chemical structure $\\mathbf{x}\\in \\mathbb{R}^{d}$ consists of the inverse of all $d$ pair-wise distances, which guarantees roto-translational invariance of the energy. Training on forces is motivated by the fact that they are available analytically from electronic structure calculations, with only moderate computational overhead atop energy evaluations. The big advantage is that for a training set of size $M$, only $M$ reference energies are available, whereas there are three force components for each of the $N$ atoms, i.e.\\ a total of $3NM$ force values. This means that a kernel-based model trained on forces contains more coefficients (see Eq.~\\ref{eq:kernel_regression}) and is thus also more flexible than an energy-based variant. Additionally, the amplification of noise due to the derivative operator is avoided.\n\nA limitation of the GDML method is that the structural descriptor $\\mathbf{x}$ is not permutationally invariant, because the values of its entries (inverse pairwise distances) change when atoms are re-ordered. An extension of the original approach, sGDML\\cite{chmiela2018,chmiela2019}, additionally incorporates all relevant rigid space group symmetries, as well as dynamic non-rigid symmetries of the system at hand into the kernel, to further improve its efficiency and ensure permutational invariance. Usually, the identification of symmetries requires chemical and physical intuition about the system at hand, which is impractical in an ML setting. Here, however, a data-driven multi-partite matching approach is employed to automatically recover permutations of atoms that appear within the training set\\cite{chmiela2019}. A matching process finds permutation matrices $\\mathbf{P}$ that realize the assignment between adjacency matrices $(\\mathbf{A})_{ij} = \\|\\vec{r}_i - \\vec{r}_j\\|$ of molecular graph pairs $G$ and $H$ in different energy states\n\\begin{equation}\n\\operatorname*{arg\\,min}_{\\tau} \\mathcal{L}(\\tau) = \\|\\mathbf{P}(\\tau)\\mathbf{A}_G\\mathbf{P}(\\tau)\\tran - \\mathbf{A}_H\\|^2\n\\label{eq:matching_objective}\n\\end{equation}\nand thus between symmetric transformations\\cite{Umeyama1988}. The resulting approximate local pairwise assignments are subsequently globally synchronized using transitivity as the consistency criterion\\cite{pachauri2013solving} to eliminate impossible assignments.
By limiting this search to the training set, combinatorially feasible but physically irrelevant permutations $\\tau$ are automatically excluded (ones that are inaccessible without crossing impassable energy barriers). Such hard symmetry constraints (derived from the training set) greatly reduce the intrinsic complexity of the learning problem without biasing the estimator, since no additional approximations are introduced.\n\n\\subsubsection{Gaussian approximation potentials (GAPs)}\n\\label{subsubsec:gaussian_approximation_potentials}\nGaussian approximation potentials (GAPs)\\cite{bartok2010gaussian} were originally developed for materials such as bulk crystals, but were later also applied to molecules\\cite{bartok2017machine}.\nThey scale linearly with the number of atoms of a system and can accommodate periodic boundary conditions.\nSimilar to high-dimensional neural network potentials\\cite{behler2007generalized} (see Section~\\ref{subsubsec:high_dimensional_neural_network_potentials}), GAPs decompose each system into atom-centered environments $i$ such that its energy can be written as the sum of atomic contributions\n\\begin{equation}\nE=\\sum_{i=1}^N E_i(\\{\\mathbf{r}_{ij}\\}_{j \\in [1, N]})\\,,\n\\label{eq:gap_local_contributions}\n\\end{equation}\nwith $\\mathbf{r}_{ij} = \\mathbf{r}_{j} - \\mathbf{r}_{i}$ and $\\mathbf{r}_{i}$ being the position of atom $i$. A smooth cutoff function is applied to the pairwise distances $\\lVert \\mathbf{r}_{ij} \\rVert$ to ensure that the contributions $E_i$ are local and no discontinuities are introduced when atoms enter or leave the cutoff radius. Even though such a decomposition is inherently non-unique and no labels for atom-wise energies are available in the reference data, the atomic contributions can still be approximated by a Gaussian process: The sum over atomic environments can be moved into the kernel function, yielding a kernel for systems $\\mathbf{x}$ and $\\mathbf{x}'$ with $N$ and $N'$ atoms, respectively:\n\\begin{equation}\nK(\\mathbf{x}, \\mathbf{x}') = \\sum_{i=1}^N \\sum_{j=1}^{N'} K_\\text{local}(x_i, x'_j)\\,.\n\\label{eq:gap_kernel}\n\\end{equation}\nThus, reference energies for the whole system are sufficient for the model to learn a suitable energy decomposition into atomic environments.\n\nSeveral descriptors and kernels for GAPs have been developed based on a local ``atomic density'' $\\rho(\\mathbf{r}) = \\sum_j \\delta(\\mathbf{r} - \\mathbf{r_j})$.\nInitially, \\citet{bartok2010gaussian} proposed to employ local atomic coordinates projected onto a 4D hypersphere. \nSince this projection can represent the volume of a 3D sphere, the introduction of an additional radial basis can be avoided.\nTo achieve rotational invariance, the bispectrum of 4D spherical harmonics of these coordinates was used as a descriptor.
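Independently of which local descriptor and kernel are chosen, the double sum of Eq.~\\ref{eq:gap_kernel} is straightforward to implement. In the following NumPy sketch, random fixed-size vectors stand in for actual atomic-environment descriptors (e.g.\\ bispectrum or SOAP features) and a Gaussian local kernel is assumed; both choices are placeholders for illustration:\n\\begin{verbatim}\nimport numpy as np\n\ndef local_kernel(a, b, gamma=0.5):\n    # kernel between two atomic-environment descriptors\n    return np.exp(-gamma * np.sum((a - b) ** 2))\n\ndef structure_kernel(envs_A, envs_B, gamma=0.5):\n    # Eq. (gap_kernel): K(x, x') = sum_i sum_j k(x_i, x'_j),\n    # defined even if the two systems differ in size\n    return sum(local_kernel(a, b, gamma)\n               for a in envs_A for b in envs_B)\n\nrng = np.random.default_rng(0)\nenvs_A = rng.normal(size=(5, 8))   # 5 atoms, 8-dim. environments\nenvs_B = rng.normal(size=(7, 8))   # 7 atoms, same descriptor size\nK_AB = structure_kernel(envs_A, envs_B)\n\\end{verbatim}\nBecause the sum runs over whole structures, such a kernel can be trained on total energies alone, as described above.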
Alternatively, the SOAP (\\textit{smooth overlap of atomic positions}) kernel\\cite{bartok2013representing} is defined as the integral over rotations $\\mathcal{R}$ of atomic densities\n\\begin{equation}\nK(\\rho, \\rho') = \\int d\\mathcal{R} \\left| \\int \\rho(\\mathbf{r}) \\rho'(\\mathcal{R}\\mathbf{r}) \\mathrm{d}\\mathbf{r} \\right|^n\\,.\n\\label{eq:soap_kernel}\n\\end{equation}\nGiven smoothed local densities $\\rho(\\mathbf{r})=\\sum_j\\exp(-\\gamma\\|\\mathbf{r}-\\mathbf{r_j}\\|^2)$, it has been shown that the SOAP kernel is equivalent to the linear kernel over the SO(3) power spectrum and bispectrum for $n=2$ and $n=3$, respectively\\cite{bartok2013representing}.\nBoth approaches are invariant to permutation of neighboring atoms as well as the rotation of the local environment.\nFurther representations include best matches of the atomic densities over rotations\\cite{de2016comparing} and kernels for symmetry-adapted prediction of tensorial properties\\cite{grisafi2018symmetry,csanyi2020machine}.\n\n\\subsubsection{Neural Network Potentials}\n\\label{subsubsec:high_dimensional_neural_network_potentials}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.99\\textwidth]{nn_architectures.png}\n \\caption{Overview of descriptor-based (top) and end-to-end (bottom) NNPs. Both types of architecture take as input a set of $N$ nuclear charges $Z_i$ and Cartesian coordinates $\\mathbf{r}_i$ and output atomic energy contributions $E_i$, which are summed to the total energy prediction $E$ (here $N=9$, an ethanol molecule is used as example). In the descriptor-based variant, pairwise distances $r_{ij}$ and angles $\\alpha_{ijk}$ between triplets of atoms are calculated from the Cartesian coordinates and used to compute hand-crafted two-body ($G^2$) and three-body ($G^3$) ACSFs (see Eqs.~\\ref{eq:two_body_symmetry_function}~and~\\ref{eq:three_body_symmetry_function}). For each atom $i$, the values of $M$ different $G^2$ and $K$ different $G^3$ ACSFs are collected in a vector $\\mathbf{x}_i$, which serves as a fingerprint of the atomic environment and is used as input to an NN predicting $E_i$. Information about the nuclear charges is encoded by having separate NNs and sets of ACSFs for all (combinations of) elements. In end-to-end NNPs, $Z_i$ is used to initialize the vector representation $\\mathbf{x}_i^0$ of each atom to an element-dependent (learnable) embedding (atoms with the same $Z_i$ start from the same representation). Geometric information is encoded by iteratively passing these descriptors (along with pairwise distances $r_{ij}$ expanded in radial basis functions $\\mathbf{g}(r_{ij})$) in $T$ steps through NNs representing interaction functions $\\mathcal{F}^t$ and atom-wise refinements $\\mathcal{G}^t$ (see Eq.~\\ref{eq:general_message_passing}). The final descriptors $\\mathbf{x}^T_i$ are used as input for an additional NN predicting the atomic energy contributions (typically, a single NN is shared among all elements).}\n \\label{fig:nnp_overview}\n\\end{figure*}\n\nThe first Neural Network Potentials (NNPs) used a set of internal coordinates, e.g.\\ distances and angles, as structural representation to model the PES\\cite{blank1995neural,brown1996combining,tafeit1996neural,no1997description,prudente1998fitting}. While being roto-translationally invariant, internal coordinates impose an arbitrary order on the atoms and are thus not reflecting the equivalence of permuted inputs. 
As a result, the NNP might assign different energies to symmetrically equivalent structures.\nBeyond that, the number of atoms influences the dimensionality of the input $\\mathbf{x}$, limiting the applicability of the PES to chemical systems of the same size.\nDecomposing the energy prediction in the spirit of a many-body expansion circumvents these issues\\cite{manzhos2006random,manzhos2007using,malshe2009development}; however, this approach scales poorly with system size and the number of chemical species, because each term in the many-body expansion has to be modeled by a separate NN.\n\n\\citet{behler2007generalized} were the first to propose so-called high-dimensional neural network potentials (HDNNPs), where the total energy of a chemical system is expressed as a sum over atomic contributions $E=\\sum_{i}E_i$, predicted by the same NN (or one for each element). The underlying assumption is that the energetic contribution $E_i$ of each atom depends mainly on its local chemical environment. As all atoms of the same type are treated identically and summation is commutative, the output does not change when the input is permuted.\nDue to the decomposition into atomic contributions, systems with varying numbers of atoms can be predicted by the same NNP. In principle, this framework also enables transferability between system sizes, e.g.\\ a model can be trained on small systems, but applied to predict energies and forces for larger systems. \nHowever, this requires sufficient sampling of the local environments to remove spurious correlations caused by the training data distribution, as well as corrections for long-range effects.\n\nThe introduction of HDNNPs inspired many NN architectures that can be broadly categorized into two types. \\emph{Descriptor-based} NNPs\\cite{behler2011atom,khorshidi2016amp,artrith2017efficient,unke2018reactive} rely on fixed rules to encode the environment of an atom in a vector $\\mathbf{x}$, which is then used as input for an ordinary feed-forward NN (see Eq.~\\ref{eq:deep_feedforward_nn}). These architectures include many variants of the original Behler-Parrinello network, such as ANI\\cite{smith2017ani} and TensorMol\\cite{yao2018tensormol}. \nOn the other hand, \\textit{end-to-end} NNPs\\cite{duvenaud2015convolutional,kearnes2016molecular,schutt2017quantum,gilmer2017neural} take nuclear charges and Cartesian coordinates as input and learn a suitable representation from the data. \n\nMany end-to-end NNPs have been inspired by the graph neural network model by \\citet{scarselli2008graph} and were later collectively cast as message-passing neural networks (MPNNs)\\cite{gilmer2017neural}. A prominent example is the Deep Tensor Neural Network (DTNN)\\cite{schutt2017quantum}. Since its introduction, this approach has been refined to create new architectures, such as SchNet\\cite{schutt2017schnet, schutt2018quantum}, HIP-NN\\cite{lubbers2018hierarchical}, or PhysNet\\cite{unke2019}.\nEnd-to-end NNPs that do not directly fall into the category of MPNNs include covariant compositional networks, which are able to employ features of higher angular momentum\\cite{hy2018predicting,anderson2019cormorant,weiler20183d}, as well as models using a pseudo-density as input\\cite{eickenberg2017solid}.\n\nBecause no fixed rule is used to construct descriptors, end-to-end NNPs are able to automatically adapt the environment representations $\\mathbf{x}$ to the reference data (in contrast to the descriptor-based variant).
However, as long as $\\mathbf{x}$ is invariant with respect to translation, rotation, and permutation of symmetry equivalent atoms, both types of NNPs adhere to all physical constraints outlined in Section~\\ref{subsubsec:physical_constraints}. NNPs are commonly used to predict energies, while conservative forces are obtained by derivation. Despite being energy-based, it is still possible to incorporate information from \\textit{ab initio} forces by including them in the loss term optimized during training. \nAt this point, it should be noted that the requirement for differentiable models excludes the use of certain activation functions, for example the popular ReLU activation\\cite{nair2010rectified}, when constructing ML-FFs based on neural networks. To avoid discontinuities in the forces, activation functions used for NNPs must always be smooth.\n\n\\paragraph{Descriptor-based NNPs}\nThe first descriptor-based NNP introduced by \\citet{behler2007generalized} uses atom-centered symmetry functions (ACSFs)\\cite{behler2011atom} consisting of two-body terms\n\\begin{equation}\nG^2_i = \\sum_{j \\neq i}^{} e^{-\\eta(r_{ij}-r_s)^2}f_{\\rm cut}(r_{ij})\n\\label{eq:two_body_symmetry_function}\n\\end{equation}\nand three-body terms\n\\begin{equation}\n\\begin{split}\nG^3_i = 2^{1-\\zeta}\\sum_{j,k \\neq i}^{} \\left(1+\\lambda\\cos\\theta_{ijk}\\right)^\\zeta e^{-\\eta\\left(r_{ij}^2+r_{ik}^2+r_{jk}^2\\right)} \\\\ \n\\times f_{\\rm cut}(r_{ij})f_{\\rm cut}(r_{ik})f_{\\rm cut}(r_{jk})\n\\end{split}\n\\label{eq:three_body_symmetry_function}\n\\end{equation}\n to encode information about the chemical environment of each atom $i$. Here, $r_{ij}$ is the distance between atoms $i$ and $j$, $\\theta_{ijk}$ the angle spanned by atoms $i$, $j$ and $k$ centered around $i$, and the summations run over all atoms within a cutoff distance $r_{\\rm cut}$. As the atom order is irrelevant for the values of $G^2_i$ and $G^3_i$ and only internal coordinates are used to calculate them, all physical invariants are satisfied. A cutoff function such as\n\\begin{equation}\nf_{\\rm cut}(r) = \\begin{cases} \n\\dfrac{1}{2}\\left[\\cos\\left(\\dfrac{\\pi r}{r_{\\rm cut}}\\right)+1\\right]\\,, & r\\leq r_{\\rm cut} \\\\\n0\\,, & r > r_{\\rm cut}\n\\end{cases}\n\\label{eq:behler_parrinello_cutoff}\n\\end{equation}\nensures that $G^2_i$ and $G^3_i$ vary smoothly when atoms enter or leave the cutoff sphere and the parameters $\\eta$, $r_s$, $\\zeta$, and $\\lambda(=\\pm1)$ determine to which distances, or combinations of angles and distances, the ACSFs are most sensitive. When sufficiently many $G^2_i$ and $G^3_i$ with different parameters are combined and stored in a vector $\\mathbf{x}_i$, they form a ``fingerprint'' of the local environment of atom $i$. This environment descriptor is then used as input for a neural network for predicting the energy contributions $E_i$ of atoms~$i$ and the total energy $E=\\sum_{i}E_i$ is obtained by summation.\n\nSince the ACSFs only use geometric information, they work best for systems containing only atoms of one element, for example crystalline silicon\\cite{behler2007generalized}. To describe multi-component systems, typically, the symmetry functions are duplicated for each combination of elements and separate NNs are used to predict the energy contributions for atoms of the same type\\cite{behler2015constructing}. 
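For a single-element system, the construction of such a fingerprint reduces to a few lines. The following NumPy sketch evaluates the two-body functions of Eq.~\\ref{eq:two_body_symmetry_function} together with the cutoff of Eq.~\\ref{eq:behler_parrinello_cutoff} (the geometry and the parameter values are illustrative, and the three-body terms are omitted for brevity):\n\\begin{verbatim}\nimport numpy as np\n\ndef f_cut(r, r_cut=6.0):\n    # cosine cutoff, Eq. (behler_parrinello_cutoff)\n    return np.where(r <= r_cut,\n                    0.5 * (np.cos(np.pi * r / r_cut) + 1.0), 0.0)\n\ndef g2_descriptors(R, etas, r_shifts, r_cut=6.0):\n    # two-body ACSFs, Eq. (two_body_symmetry_function);\n    # returns one fingerprint vector per atom\n    N = len(R)\n    d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)\n    G = np.zeros((N, len(etas)))\n    for i in range(N):\n        for k, (eta, rs) in enumerate(zip(etas, r_shifts)):\n            for j in range(N):\n                if j != i and d[i, j] <= r_cut:\n                    G[i, k] += (np.exp(-eta * (d[i, j] - rs) ** 2)\n                                * f_cut(d[i, j], r_cut))\n    return G\n\nR = np.array([[0.00, 0.00, 0.00],   # toy geometry (3 atoms)\n              [0.96, 0.00, 0.00],\n              [-0.24, 0.93, 0.00]])\netas = [0.5, 1.0, 2.0]              # Gaussian widths\nr_shifts = [0.0, 1.0, 2.0]          # radial shifts r_s\nx = g2_descriptors(R, etas, r_shifts)\n\\end{verbatim}\nReordering the atoms merely permutes the rows of the resulting descriptor array while leaving each atomic fingerprint unchanged, which is what makes the summed energy prediction permutationally invariant.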
Since the combinatorial explosion can lead to a large number of ACSFs for systems containing many different elements, an alternative is to modify the ACSFs with element-dependent weighting functions\\cite{gastegger2018wacsf}. Most descriptor-based NNPs, such as ANI\\cite{smith2017ani} and TensorMol\\cite{yao2018tensormol}, use variations of Eqs.~\\ref{eq:two_body_symmetry_function}~and~\\ref{eq:three_body_symmetry_function} (sometimes allowing parameters of ACSFs to be optimized during training) to construct the environment descriptors $\\mathbf{x}_i$. Different ways to encode the structural information are possible, for example using three-dimensional Zernike functions\\cite{khorshidi2016amp}, or the coefficients of a spherical harmonics expansion\\cite{unke2018reactive}, but the general principle remains the same. Also, while most descriptor-based NNPs use separate parametrizations for different elements, it is also possible to use a single NN to predict all atomic energy contributions\\cite{unke2018reactive}. The common feature for all variations of this approach is that the functional form of the environment descriptor is predetermined and manually designed.\n\n\\paragraph{End-to-end NNs}\nA potential drawback of the previously introduced ACSFs is their restriction to collective symmetry functions, i.e.\\ the choice of descriptor may limit the expressive power of the neural network and designing a good descriptor often requires expert knowledge. Additionally, a growing number of input dimensions can quickly become computationally expensive, both for calculating the descriptors and for evaluating the NN. This is especially the case when modeling multi-component systems, where commonly orthogonality is assumed between different elements (which increases the number of input dimensions) or the descriptors are simply weighted by an element-dependent factor (which may limit the structural resolution of the descriptor).\n\nIn contrast, end-to-end NNPs directly take atomic types and positions as inputs to learn suitable representations from the reference data. Similar to descriptor-based NNPs, many end-to-end NNPs obtain the total energy $E$ as a sum of atomic contributions $E_i$. However, those are predicted from learned features $\\mathbf{x}_i$ encoding information about the local chemical environment of each atom $i$. This allows them to adapt the features based on the size and distribution of the training set as well as the chemical property of interest during the training process. The idea is to learn a mapping to a high-dimensional feature space, so that structurally (and energetically) similar atomic environments lie close together and dissimilar ones far apart.\n\nWithin the deep tensor neural network framework\\cite{schutt2017quantum}, this is achieved by iteratively refining the atomic features $\\mathbf{x}_i$ based on neighboring atoms. The features are initialized to $\\mathbf{x}_i^0 = \\mathbf{e}_{Z_i}$, where $\\mathbf{e}_{Z}$ are learnable element-dependent representations that are updated for $T \\in [3, 6]$ steps. 
This procedure is inspired by diffusion graph kernels\\cite{kondor2002diffusion} as well as the graph neural network model by \\citet{scarselli2008graph}.\nMany end-to-end networks have adapted this approach which can be written in general as\n\\begin{equation}\n\\mathbf{x}_{i}^{t+1} = \\mathcal{G}^t\\left(\\mathbf{x}_{i}^{t} + \\sum_{j \\neq i} \\mathcal{F}^{t}\\left(\\mathbf{x}_{i}^{t}, \\mathbf{x}_{j}^{t},\\mathbf{g}(r_{ij})\\right)f_{\\rm cut}(r_{ij})\\right)\\,,\n\\label{eq:general_message_passing}\n\\end{equation}\nwhere the summation runs over all atoms within a distance $r_{\\rm cut}$ and a cutoff function $f_{\\rm cut}$ ensures smooth behavior when atoms cross the cutoff.\nHere, the ``atom-wise'' function $\\mathcal{G}^{t}$ is used to refine the atomic features after they have been updated with information from neighboring atoms through the interaction-function $\\mathcal{F}^{t}$. Usually, the interatomic distance $r_{ij}$ is not used directly as input to $\\mathcal{F}^{t}$, but expanded in a set of uniformly spaced radial basis functions\\cite{schutt2017quantum,schutt2017schnet,unke2019} to form a vectorial input $\\mathbf{g}(r_{ij})$.\nBoth $\\mathcal{F}^{t}$ and $\\mathcal{G}^{t}$ functions are NNs with the specific implementations varying between different end-to-end NNP architectures. As only pair-wise distances are used and the order of atoms is irrelevant due to the commutative property of summation, the features $\\mathbf{x}_i$ obtained by Eq.~\\ref{eq:general_message_passing} are automatically roto-translationally and permutationally invariant (and thus also the energy predictions).\n\n\\citet{gilmer2017neural} have cast graph networks of this structure as message-passing neural networks and proposed a variant that uses a set2set decoder instead of a sum over energy contributions to achieve permutational invariance of the energy.\nSchNet\\cite{schutt2017schnet} takes an alternative view of the problem: the atoms of the molecule sparsely populate the continuous space while their interactions are modeled with convolutions. The convolution filters need to be continuous (to have smooth predictions) but are evaluated at finite points, i.e.\\ the positions of neighboring atoms. To ensure rotational invariance, only radial filters are used, leading again to an interaction function that is a special case of Eq.~\\ref{eq:general_message_passing}.\n\nWhile the previously introduced approaches aim to learn as much as possible from the reference data guaranteeing only basic quantum-chemical regularities, several models have been proposed to better exploit chemical domain knowledge.\nThe hierarchical interacting particle neural network (HIP-NN)\\cite{lubbers2018hierarchical} obtains the prediction as a sum over atom-wise contributions $E_i^t$ that are predicted after every update step $t$. A regularizer penalizes larger energy contributions in deeper layers, i.e.\\ enforcing a declining, hierarchical prediction of the energy.\nPhysNet\\cite{unke2019} modified the energy function to include explicit terms for electrostatic and dispersion interactions,\n\\begin{equation}\nE = \\sum_{i=1}^N E_i + k_e \\sum_{i=1}^N \\sum_{j>i}^N \\tilde{q}_i \\tilde{q}_j \\chi(r_{ij}) + E_\\text{D3}\\,,\n\\end{equation}\nwhere $E_\\text{D3}$ is Grimme's D3 dispersion correction\\cite{grimme2010consistent}, $k_e$ is Coulomb's constant and $\\tilde{q}_i$ are corrected partial charges predicted by the network that are guaranteed to sum to the total charge of the molecule. 
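Schematically, such a physically motivated energy function can be assembled as in the short sketch below; the damping function $\\chi$, the charges and the atomic energies are placeholders standing in for the network outputs, the dispersion term is omitted, and the actual PhysNet implementation differs in detail:\n\\begin{verbatim}\nimport numpy as np\n\ndef total_energy(E_atomic, q, R, k_e=1.0, r_damp=2.0):\n    # sum of atomic energies plus an explicit, damped Coulomb term\n    i, j = np.triu_indices(len(q), k=1)\n    r = np.linalg.norm(R[i] - R[j], axis=-1)\n    chi = 1.0 / np.sqrt(r ** 2 + r_damp ** 2)  # illustrative damping\n    return np.sum(E_atomic) + k_e * np.sum(q[i] * q[j] * chi)\n\nE_atomic = np.array([-75.1, -0.4, -0.4])  # hypothetical E_i\nq = np.array([-0.8, 0.4, 0.4])            # partial charges (sum = 0)\nR = np.array([[0.00, 0.00, 0.00],\n              [0.96, 0.00, 0.00],\n              [-0.24, 0.93, 0.00]])\nE = total_energy(E_atomic, q, R)\n\\end{verbatim}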
In an ablation study on a dataset of S$_\\text{N}$2 reactions, \\citet{unke2019} showed that the inclusion of long-range terms improves prediction accuracy for energies and forces while models without these terms show qualitatively wrong asymptotic behavior.\n\n\\section{Best practices and Pitfalls}\n\\label{sec:best_practices_and_pitfalls}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{best_practices.png}\n \\caption{Overview of the most important steps when constructing and using ML-FFs.}\n \\label{fig:best_practices}\n\\end{figure*}\n\nA number of careful modeling steps are necessary to construct an ML-FF for a particular problem of interest (Fig.~\\ref{fig:best_practices}). Even before starting this process, some forethought is appropriate due to certain limitations of \\textit{ab initio} methods themselves. This section gives an overview about all steps necessary to construct an ML-FF from scratch and highlights possible ``pitfalls'', i.e.\\ issues that may occur along the way, in particular when the recommended practices are not followed. First, some preliminary considerations, which should be taken before starting with the construction of an ML-FF, are discussed (\\ref{subsec:preliminary_considerations}). Next, basic principles for choosing an appropriate ML method for a specific task are given (\\ref{subsec:choosing_an_appropriate_ml_method}). Then, the importance of high quality reference data, different strategies to collect it (\\ref{subsec:data_collection}), and how the data has to be prepared (\\ref{subsec:data_preparation}), are outlined. This is followed by an overview of how to train an ML model on the collected data (\\ref{subsec:training_the_model}) and guidelines for using the trained ML-FF in a production setting, e.g.\\ for running MD simulations (\\ref{subsec:using_the_model_in_production}). Finally, popular software packages for constructing ML-FFs are briefly described and code examples are given (\\ref{subsec:example_code_and_software_packages}).\n\n\\subsection{Preliminary considerations}\n\\label{subsec:preliminary_considerations}\nBefore running any \\textit{ab initio} calculations to collect data for training learning models, it is advisable to think about the limitations of the chosen level of theory itself. The issues discussed here are problem-specific and often not unique to ML-FFs, but PES reconstruction in general. As such, a comprehensive list is not possible, but a few examples are given below.\n\n\\paragraph{Practicability}\nOn the spectrum of quantum chemistry methods, ML-FFs fit into the niche between highly efficient conventional FFs\\cite{monticelli2013force} and accurate, but computationally expensive \\textit{ab initio} methods.\\cite{friesner2005ab} Efficiency-wise, they are still inferior to classical FFs, because their functional forms are considerably more complex and thus more expensive to evaluate. Even the fastest ML-FFs are still one to three orders of magnitude slower\\cite{chmiela2019,brickel2019reactive,sauceda2020molecular}. On the other end, ML-FFs are lower bounded by the accuracy of the reference data used for training, which means that the underlying \\textit{ab initio} method will always be at least equally accurate. In practical terms, this means that in order to be useful, ML-FFs need to offer time savings over directly running \\textit{ab initio} calculations and an improved accuracy compared to conventional FFs. 
For this purpose, the full procedure of data generation, training and inference must be taken into account, as opposed to considering only the inference speed, which will almost certainly be much higher than that of \\textit{ab initio} methods. While this consideration sounds trivial at first, it is still advisable to think about whether constructing an ML-FF really is economical. For example, if the goal is to run just a single short MD trajectory, the question is how much data is necessary for the model to reach the required accuracy. Some models may require several thousands of training points to produce accurate enough predictions, even for fairly small molecules. Then, when factoring in the overall time required for going through the process of creating the ML model, testing it, and running the MD simulation, it might be more efficient to simply run an \\textit{ab initio} MD simulation in the first place.\n\n\\paragraph{Multireference effects}\nMany \\textit{ab initio} methods use a single Slater determinant to express the wave function of a system. The problem with this approach is that different determinants may be dominant in different regions of the PES, leading to a poor description of the wave function if the wrong determinant is chosen. Especially when many calculations are performed for various strongly distorted geometries, for example when a reaction is studied and bonds need to be broken, it may happen that the solution ``jumps'' discontinuously from one electronic state to another, leading to inconsistent reference data. When an ML model is trained on such a data set, it will try to find a compromise between the inconsistencies and its performance will typically be unsatisfactory. It is therefore advisable to check for possible multireference effects prior to generating data and, if necessary, switch to a multireference method (for a comprehensive review on multireference methods, see Ref.~\\citenum{szalay2012multiconfiguration}).\n\n\\paragraph{Strong delocalization}\nThe models discussed in this review all assume that energy contributions are local to some degree. This assumption is either introduced explicitly by a cutoff radius, or it enters the model through the use of a specific structural descriptor. For example, by using inverse distances to encode chemical structures for kernel methods (as is done e.g.\\ in GDML, see Section~\\ref{subsubsec:symmetric_gradient_domain_machine_learning}), relative changes between close atoms are weighted more strongly when comparing two conformations. While assuming locality is valid in many practical applications, there are also cases where this assumption breaks down. Examples are extensive conjugated $\\pi$-systems, where a rotation around certain bonds might break the favorable interaction between electrons, leading to a ``non-local'' energy contribution. If such effects exist, an appropriate model should be chosen; for example, the cutoff radius may need to be larger than usual or a different structural descriptor must be picked.\n\n\\subsection{Choosing an appropriate ML method}\n\\label{subsec:choosing_an_appropriate_ml_method}\nSeveral different variants of ML-FFs have been discussed in Section~\\ref{subsec:combining_ml_and_chemistry} and many more are described in the literature. Although all these methods can be applied to construct ML-FFs for any chemical system, some methods might be more promising than others depending on the situation at hand.
For researchers who want to apply ML methods to a specific problem for the first time, the abundance of different models to choose from may be overwhelming and it might be difficult to make an appropriate choice. \n\nIn the following, possible applications of ML-FFs are broadly categorized based on simple questions about the task at hand. For each case, advantages and disadvantages of individual models are discussed to provide help and guidance to the reader for identifying an appropriate model for their use case.\n\n\\paragraph{How much reference data can be used for training?} \\textit{\n\tWhen in doubt which method to use, a rule of thumb could be to prefer kernel methods when there are fewer than $\\sim10^3$--$10^4$ training points and NN-based approaches otherwise (but this may also be a matter of preference).\n}\n\nDepending on the desired accuracy, the amount of \\emph{ab initio} reference data which can be collected within a reasonable time frame may be vastly different. For example, if reference calculations are performed at the DFT level of theory, it is often feasible to collect several thousand data points, even for relatively large molecules. On the other hand, if CCSD(T) accuracy and a large basis set are required, already a few hundred reference calculations for small molecules can require a considerable amount of computing time. Although it is of course always desirable to perform as few reference calculations as possible, for some tasks, collecting a large data set is unavoidable. For example, if a model should be able to predict a variety of different molecules containing many different elements, the relevant chemical space must be sampled sufficiently.\n\nIn general, kernel-based models tend to achieve good prediction accuracies even with few training points, whereas NNs often need more data to reach their full potential (although there may be exceptions for both model variants, see also Fig.~\\ref{fig:md17_scatter}). Further, the optimal model parameters for kernel models can be determined analytically (see Eq.~\\ref{eq:krr_coefficient_relation_regularized}), which, for small datasets, is typically faster than training an NN via (a variant of) stochastic gradient descent. However, when the data set size $N$ is very large, solving Eq.~\\ref{eq:krr_coefficient_relation_regularized} can become prohibitively expensive, as it scales as $\\mathcal{O}(N^3)$, and training an NN may become more efficient. Further, evaluating kernel models scales with $\\mathcal{O}(N)$ (see Eq.~\\ref{eq:kernel_regression}), whereas the cost of evaluating NN-based methods has constant complexity (as long as the number of parameters does not have to be increased for larger datasets). For this reason, NNs tend to be more suitable for large datasets. Note that there are approximations which improve the scaling of kernel methods (so they can be applied even to very large datasets), however at the cost of accuracy (see Eqs.~\\ref{eq:krr_coefficient_relation_approximated}-\\ref{eq:nystroem_regularized_inverse}).
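As an illustration of these scaling arguments, the following minimal NumPy sketch (not taken from any of the packages discussed later; the Gaussian kernel and the generic descriptor matrix are placeholder assumptions) contrasts the cubic cost of solving for the kernel coefficients with the linear cost of evaluating the trained model:\n\\begin{verbatim}\nimport numpy as np\n\ndef gaussian_kernel(X1, X2, sigma=1.0):\n    # pairwise squared distances between\n    # descriptor vectors\n    d2 = ((X1[:, None, :]\n           - X2[None, :, :])**2).sum(-1)\n    return np.exp(-0.5*d2\/sigma**2)\n\ndef train_krr(X, y, sigma=1.0, lam=1e-8):\n    # O(N^3): solve (K + lam*I) alpha = y\n    K = gaussian_kernel(X, X, sigma)\n    return np.linalg.solve(\n        K + lam*np.eye(len(y)), y)\n\ndef predict(X_train, alpha, X_new,\n            sigma=1.0):\n    # O(N) per query: weighted sum over\n    # all N training points\n    k = gaussian_kernel(X_new, X_train, sigma)\n    return k @ alpha\n\\end{verbatim}\n\n\\paragraph{Should the model be able to predict a single type of chemical system or multiple different ones?}\n\\textit{\n\tTo be applicable to multiple systems, a model must decompose its prediction into atomic contributions. 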
For global models, either a fixed size descriptor must be used or several separate models need to be trained.\n}\n\n\nSome ML-FFs only need to be able to predict systems with a fixed composition and number of atoms, for example to study the dynamics of a single molecule, whereas other applications require the ability to predict different systems with varying size, e.g.\\ when clusters consisting of a different number and kind of molecules are studied with the same model.\n\nWhile all ML-FFs can be applied in the first case, the latter requires either that the length of chemical descriptors is independent of the number of atoms, or that model predictions are decomposed into local contributions based on fixed-size fingerprints of atomic environments (which naturally makes them extensive). Most NNPs (see Section~\\ref{subsubsec:artificial_neural_networks}) and many kernel methods, e.g.\\ GAPs (see Section~\\ref{subsubsec:gaussian_approximation_potentials}) or FCHL\\cite{faber2018alchemical,christensen2020fchl}, use such an approach and can be applied to differently sized chemical systems without issues. An exception are e.g.\\ (s)GDML models (see Section~\\ref{subsubsec:symmetric_gradient_domain_machine_learning}), which encode chemical structures as vectors of inverse distances between atomic pairs. Consequently, the length of the descriptor changes with the number of atoms and the model can only be applied to a single type of system. In some special cases, it may be possible to choose a maximum descriptor length and pad descriptors of smaller molecules with zeros, but this may introduce other problems and\/or reduce the accuracy. \n\n\n\\paragraph{Will the model be applied to single or multi-component systems?}\n\\textit{\n\tIf only a handful of elements is relevant, all models are equally suitable. When a large number of elements needs to be considered, the model should be able to encode and use information about atom types efficiently.\n}\n\nAs long as an ML-FF is only applied to single-component systems (consisting of a single element), for example elemental carbon or silicon, all relevant information is contained in the relative arrangement of atoms and nuclear charges need not be encoded explicitly. However, as soon as there are multiple atom types (as is common for most applications of ML-FFs), the model must have some way to distinguish between them. A notable exception are some global models such as (s)GDML. Here, although only inverse pairwise distances are used as a descriptor, information about atom types is implicitly contained, because specific entries always correspond to the same combination of atom types. \n\nMany local descriptors of atomic environments only use geometric information in the form of distances and angles between pairs and triplets of atoms (see Eqs.~\\ref{eq:two_body_symmetry_function}~and~\\ref{eq:three_body_symmetry_function}). To include information about atomic types, descriptor entries are duplicated for every possible combination of elements (the same approach is used for kernel machines based e.g.\\ on SOAP\\cite{bartok2013representing} or FCHL\\cite{faber2018alchemical,christensen2020fchl}). Many descriptor-based NNPs further use separate NNs to predict atomic contributions of different elements (see Fig.~\\ref{fig:nnp_overview}). 
A disadvantage of these approaches is that the number of terms in the descriptor increases combinatorially with the number of elements covered by the model (in particular if three-body or even four-body terms are used), which impacts the computational cost of training and evaluating the model (in particular larger amounts of training data become necessary). As long as only a few elements need to be considered, this is not an issue, but if a model for a significant fraction of the periodic table is required, a more efficient representation is desirable. Most end-to-end NNPs employ so-called element embeddings (see Fig.~\\ref{fig:nnp_overview}), which do not become more complex when the number of elements is increased. This has the additional benefit of potentially increasing the data efficiency of the model by utilizing alchemical information. Another alternative is to introduce element-dependent weighting functions (instead of duplicating terms in the descriptor)\\cite{gastegger2018wacsf}. \n\n\\paragraph{Will the model be applied to small or large systems?} \\textit{Models for very large target systems should be able to exploit chemical locality, so that reference calculations for fragments can be used as training data. Additionally, this allows trivial parallelization of predictions over multiple machines.}\n\nOften, ML-FFs are used to study small or medium-sized molecules. In such cases, all models are equally applicable. For very large systems containing many atoms however, some methods have particularly advantageous properties. For example, it might be infeasible to run \\textit{ab initio} calculations for the full target system. In this case, being able to fragment the system into smaller parts, for which reference calculations are affordable, would be very useful. \n\nTo be trainable on such fragments, ML-FFs must introduce an explicit assumption about chemical locality by introducing a cutoff radius. Every method that decomposes predictions into a sum of local atomic contributions can thus be trained in this way. Global ML-FFs on the other hand need reference data for the full system (see above). Another advantage of local models is that their predictions are \\emph{embarrassingly parallel}: The contributions of individual atoms can be calculated on separate machines (storing a copy of the model), each requiring only information about neighboring atoms within the cutoff radius. Apart from possible efficiency benefits, this may even become necessary if the computations to handle all atoms do not fit into the memory of a single machine (for example when the system of interest consists of millions of atoms\\cite{jia2020pushing}). Note that while not all ML methods to construct FFs are embarrassingly parallel, most models contain mostly linear operations, which are amenable to other parallelization methods, e.g.\\ by utilizing GPUs.\n\nAt this point, a subtle difference between cutoffs used in NNPs of the message-passing type (see Section~\\ref{subsubsec:high_dimensional_neural_network_potentials}) and descriptor-based NNPs (as well as kernel machines based on local atomic environments) should be pointed out. In message-passing schemes, information between all atoms within the cutoff radius is exchanged over $T$ iterations, thus the \\emph{effective} cutoff radius increases by a factor of $T$. 
This means that in order to distribute the computation over multiple machines, it is either necessary to communicate updates to other machines after each iteration, or a sufficiently large subdomain needs to be stored on all machines. \n\n\n\\paragraph{Are long-range interactions expected to be important for the system of interest?} \\textit{If strong long-range contributions to the energy are present, it is advisable to either use a global model, or augment a local ML method by explicitly including physical interaction terms.}\n\nAs described earlier, many ML-FFs introduce cutoffs to exploit chemical locality. An obvious downside of this approach is that all interactions beyond the cutoff cannot be represented. For uncharged molecules without strong dipole moments, relevant interactions are usually sufficiently short-ranged that this is not problematic. However, when strong long-ranged (e.g.\\ charge-dipole) interactions are important, cutoffs may introduce significant errors. Models such as (s)GDML, which consider the global chemical structure, do not suffer from this issue. \n\nWhile it is possible to simply increase the cutoff distance until more long-ranged contributions can be neglected, this decreases the computational efficiency and data efficiency of models. A better alternative could be to include the relevant physical interaction terms explicitly in the model. For example, TensorMol\\cite{yao2018tensormol} and PhysNet\\cite{unke2019} include such correction terms by default, but other models can be augmented in a similar fashion. Although not strictly necessary, even global models may profit from such terms by an increased data efficiency.\n\n\n\\subsection{Data collection}\n\\label{subsec:data_collection}\nA fundamental component of any ML model is the reference data. While its architecture and other technical details are responsible for the potential accuracy of a model, the choice of reference data and its quality defines the reliability and range of applicability of the final model. Any deficiencies that are present in the data will inevitably also lead to artifacts in models trained on it, a principle often colloquially stated as ``garbage in, garbage out''\\cite{sanders2017garbage}. As such, the reference data is one of the most important components of an ML-FF. The generation of datasets in computational chemistry and physics are challenges on their own. First of all, each reference point is a result of computationally expensive and often nontrivial calculations (see Section~\\ref{subsec:chemistry_foundations}), which limits the amount of data that can be collected. Furthermore, the dimensionality of the configurational space of molecules, solids, or liquids is so vast that -- except for trivial cases -- it is not apparent how to identify the representative geometries in the ocean of possibilities. The optimal choice of reference data might even need to be adapted to the individual properties of the respective ML model that consumes it and\/or its intended application. In the following, several strategies for sampling the PES and generating reference datasets are outlined (multiple of these approaches can be combined). Afterwards, problems that may occur due to insufficient sampling are highlighted and general remarks about the importance of a consistent reference dataset are given.\n\n\\paragraph{AIMD sampling}\nA good starting point to assemble the reference dataset is by sampling the PES using \\textit{ab initio} molecular dynamics (AIMD) simulations. 
Albeit expensive in terms of the amount of necessary reference calculations, this technique constitutes a straightforward way to explore configurational space. Here, the temperature of the simulation determines which regions of the PES and what energy ranges (according to the Boltzmann distribution) are explored (see Fig.~\\ref{fig:FigEthSampl}). For example, if the aim is to construct an ML-FF for calculating the vibrational spectrum of ethanol at 300~K, generating the database at 500~K is a safe option since the subspace of configurations relevant at 300~K is contained in the resulting database (see Fig.~\\ref{fig:FigEthSampl}A). Sampling at higher temperatures ensures that the model does not enter the extrapolation regime during production runs, which is practically guaranteed to happen when a lower temperature is used for sampling. In general, the resulting dataset will be biased towards lower energy regions of the PES, where the system spends most of the simulation time. For this reason, pure AIMD sampling is only advisable when the intended application of the final ML model involves MD simulations for equilibrium or close to equilibrium properties, where rare events do not play a major role. Examples of this are the study of vibrational spectra, minima population, or thermodynamic properties.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{Fig_Sampling_Ethanol.pdf}\n\t\\caption{\\textbf{A}: Two-dimensional projection of the sampled regions of the PES of ethanol at 100~K, 300~K and 500~K from running AIMD simulations with FHI-aims\\cite{FHIaims2009} at the DFT\/PBE+TS level of theory\\cite{PBE1996,tkatchenko2009accurate}. The length of the simulation was 500 ps. \\textbf{B}: Distribution of sampled potential energies for the three different temperatures.}\n\t\\label{fig:FigEthSampl}\n\\end{figure}\n\n\\paragraph{Sampling by proxy}\nConstructing reliable reference datasets from AIMD simulations can be computationally expensive. While system size plays a major role, other phenomena, such as the presence of intramolecular interactions and fluxional groups, can also influence how quickly the PES is explored. Because of this, long simulation times may be required to visit all relevant regions. For example, generating $2\\times10^5$ conformations from AIMD using a relatively affordable level of theory (e.g.\\ PBE+TS with a small basis set) can take between a few days to several weeks (depending on the size of the molecule). With higher levels of theory, the required computation time may increase to months, or, when highly accurate methods such as CCSD(T) are required, even become prohibitively long (several years).\n\nTo resolve this issue, a possible strategy is to sample the PES at a lower level of theory to generate a long trajectory that covers many regions on the PES. The collected dataset is then subsampled to generate a small, but representative set of geometries, which serve as input for performing single-point calculations at a higher level of theory (see Fig.~\\ref{fig:FigSubSampl}).\nThis strategy works best when the PES has a similar topology at both levels of theory, so it can be expected that configurations generated at the lower level are representative of configurations that would be visited in an AIMD simulation at the higher level (see the two-dimensional projections of the PES in Fig.~\\ref{fig:FigSubSampl}). 
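A minimal sketch of such a subsampling step is shown below, assuming the low-level trajectory is available in a format readable by ASE; the file names and the stride of 100 snapshots are arbitrary placeholders, and the stride should be chosen such that the selected geometries still cover all regions visited by the low-level dynamics:\n\\begin{verbatim}\nfrom ase.io import read, write\n\n# low-level (e.g. DFT) AIMD trajectory\n# (placeholder file name)\nframes = read(\"aimd_low_level.xyz\",\n              index=\":\")\n\n# keep every 100th snapshot to reduce the\n# correlation between adjacent time steps\nsubset = frames[::100]\n\n# write one input geometry per high-level\n# (e.g. CCSD(T)) single-point calculation\nfor i, atoms in enumerate(subset):\n    write(\"single_point_%04d.xyz\" % i, atoms)\n\\end{verbatim}\nInstead of a fixed stride, snapshots could also be drawn at random; the important point is that the resulting subset remains representative of the sampled configurational space.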
When the two PESs are topologically very different, e.g.\\ when a semi-empirical method or even a conventional FF is used to generate the initial trajectory, it may happen that the relevant regions of the PES at the higher level of theory are not covered sufficiently. Then, when an ML-FF is trained on the collected dataset and used for running an MD simulation, the trajectory may enter the extrapolation regime and the model might give unphysical predictions. Thus, extra care should be taken when two very different levels of theory are used for sampling by proxy.\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.0\\columnwidth]{Fig_SubSampling.pdf}\n\t\\caption{Procedure followed to generate a database at the CCSD(T) level of theory for keto-malondialdehyde using sampling by proxy. An AIMD simulation at 500~K computed at the DFT\/PBE+TS level of theory is used to sample the molecular PES. Afterwards, the trajectory is sub-sampled (black dots) to generate a subset of representative geometries, for which single-point calculations at the CCSD(T) level of theory are performed (red dots). This highly accurate reference data is then used to train an ML-FF.}\n\t\\label{fig:FigSubSampl}\n\\end{figure}\n\n\\paragraph{Adaptive sampling}\nAnother method to minimize the amount of expensive \\textit{ab initio} calculations is called adaptive sampling or \\textit{on-the-fly} ML\\cite{csanyi2004learn}. Here, a preliminary ML-FF is trained on only a small initial set of reference data and then used to run an MD simulation. During the dynamics, additional conformations are collected whenever the model predictions become unreliable according to an uncertainty criterion. Then, new reference calculations are performed for the collected structures and the training of the ML model is continued or started from scratch on the augmented dataset. The process is repeated until no further unreliable regions can be discovered during MD simulations.\n\nWhen following this strategy, the quality of the uncertainty estimate is crucial for an efficient sampling of the PES: If the estimate is overconfident, deviations from the reference PES might be missed. If the estimate is overly cautious, many redundant \\textit{ab initio} calculations have to be performed. There exist several ways to estimate the uncertainty of an ML-FF. For example, Bayesian methods learn a probability distribution over models, which enables straightforward uncertainty estimates (see the predictive variance of a Gaussian process, Eq.~\\ref{eq:gaussian_process_variance}). For models where an explicit uncertainty estimate is not available, e.g.\\ neural networks, a viable alternative is \\textit{query-by-committee}\\cite{seung1992query,behler2015constructing}. Here, an ensemble of models is trained, for example on different subsets of the reference data and each starting from a different parameter initialization. Then, the discrepancy between their predictions can be used as an uncertainty estimate (see the sketch below). Query-by-committee has been successfully employed to sample PESs using neural networks for water dimers\\cite{morawietz2012neural}, organic molecules\\cite{gastegger2017machine,unke2019} as well as across chemical compound space\\cite{smith2018less}. Other approaches, for example using dropout\\cite{srivastava2014dropout} as a Bayesian approximation\\cite{gal2016dropout}, are also possible.
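The following minimal sketch illustrates query-by-committee adaptive sampling; the committee \\texttt{models}, the \\texttt{md\\_step} and \\texttt{ab\\_initio} helpers, and the uncertainty \\texttt{threshold} are hypothetical placeholders rather than part of any particular package:\n\\begin{verbatim}\nimport numpy as np\n\ndef adaptive_sampling(models, md_step,\n                      ab_initio, geometry,\n                      n_steps=10000,\n                      threshold=0.05):\n    # models: committee of ML-FFs trained on\n    # the current reference data\n    new_data = []\n    for step in range(n_steps):\n        # propagate MD with one committee member\n        geometry = md_step(models[0], geometry)\n        # committee disagreement serves as\n        # the uncertainty estimate\n        energies = [m.predict_energy(geometry)\n                    for m in models]\n        if np.std(energies) > threshold:\n            # unreliable prediction: compute a\n            # new ab initio reference point\n            new_data.append(\n                (geometry, ab_initio(geometry)))\n    return new_data\n\\end{verbatim}\n\nCollecting data ``on-the-fly'' is even possible without uncertainty estimates. 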
Instead, additional reference calculations are performed at fixed intervals during the MD simulation\\cite{csanyi2004learn,li2015molecular}. This relies on the assumption that the probability of reaching the extrapolation regime of an ML model rises with increasing length of the MD trajectory.\nWhile performing \\textit{ab initio} calculations in regular intervals will discover all deviations of the model eventually, this variant of on-the-fly ML does not exploit any information about the already collected reference set and may thus lead to many redundant data points. More detailed reviews on uncertainty estimation and active sampling of PESs can be found in Refs.~\\citenum{gastegger2020molecular}\n~and~\\citenum{shapeev2020active}.\n\n\\paragraph{Normal mode sampling}\nIt is also possible to sample the PES without running any kind of MD simulation. In normal mode sampling\\cite{smith2017ani}, the idea is to start from a minimum on the PES and generate distorted structures by randomly displacing atoms along the normal modes. They are the eigenvectors of the mass-weighted Hessian matrix obtained at the minimum position, i.e.\\ a harmonic approximation of the molecular vibrations. From the associated force constants (related to the eigenvalues), the increase in potential energy for displacements along individual normal modes can be estimated. Since they are orthogonal to each other, it is straightforward to combine multiple random displacements along different normal modes such that the resulting structures are sampled from a Boltzmann distribution at a certain temperature. In other words, structures generated like this are drawn from the same distribution as if an ``approximated PES'' was sampled with a (sufficiently long) MD simulation. This approximated PES is equivalent to a Taylor expansion of the original PES around the minimum position, truncated after the quadratic term (the contribution of the linear term vanishes at extrema). \n\nStructures generated from random normal mode sampling are not correlated, in contrast to those obtained from adjacent time steps in MD simulations, which makes this approach an efficient way to explore the PES. However, the disadvantage is that only regions close to minima can be sampled. Additionally, the harmonic approximation is only valid for small distortions, i.e.\\ the larger the temperature, the more the sampled distribution diverges from the Boltzmann distribution on the true PES. Because of these limitations, it is best to combine normal mode sampling with other sampling methods, for example to generate an initial reference dataset, which is later expanded by adaptive sampling.\n\n\\paragraph{Problems due to insufficient sampling} Because their extrapolation capabilities are limited, ML methods only give reliable predictions in regions where training data is present.\\cite{sugiyama2007covariate} When generating reference data, it is therefore important that all regions of the PES that may be relevant for a later study are sampled sufficiently. For example, when studying a reaction, the data should not only cover configurations corresponding to educt and product structures, but also the region around the transition state and along the transition pathway. When the reaction coordinate defining the transition process is already known, a straightforward way to generate the reference data would be running metadynamics\\cite{barducci2011metadynamics}. 
However, even when an ML model can reproduce the entire reference dataset with the required accuracy, it is still possible to run into issues when the model is used to study the reaction. If the rare transition process was not sampled sufficiently, it is not guaranteed that MD simulations with the ML-FF reproduce it correctly. Since the reference data may be restricted to a specific subset of molecular configurations along the transition pathway, the model can enter the extrapolation regime somewhere between the boundary states, and the transition pathways generated by an MD simulation might then be unreasonable. Another potential issue is that, after passing the transition state region, a large amount of potential energy is typically converted into internal motions such as bond vibrations. As a result, the effective temperature defined by the kinetic energy exceeds the ambient conditions by orders of magnitude. Even when a thermostat is used in the simulation, the thermal energy rises so rapidly that the thermostat may not be able to compensate for the increase in temperature immediately. As a consequence, the trajectory visits high-energy configurations, which may not be included in the reference data, and the model again has to extrapolate.\n\nWhen ML-FFs enter the extrapolation regime, i.e.\\ when they are used to predict values outside the sampled regions of the PES, unphysical effects may be observed. Consider for example the dissociation of the O\\nobreakdash--H bond in the hydroxyl group of ethanol (Fig.~\\ref{fig:etoh_dissociation_curve}). \nHere, different models were trained on data gathered from an MD simulation of ethanol at 500~K and used to predict how the energy changes when the O\\nobreakdash--H distance of the hydroxyl group is shortened or elongated to extreme values well outside the range sampled during the dynamics. In this example, while the sGDML model is able to accurately extrapolate to much shorter distances than are present in the training data, it still fails to predict the bond dissociation. The NNP models (PhysNet and SchNet) exhibit qualitatively wrong short-range behavior and spurious minima on the PES, which may trap trajectories during MD simulations. Because of these limited extrapolation capabilities, it is advisable to sample larger regions of the PES than are expected to be visited during MD simulations, so that there is a ``buffer'' and models never enter the unreliable extrapolation regime during production runs. For example, when an ML-FF is to be used for a study at a temperature of 300~K, the PES should be sampled at around 500~K or higher.\n\\begin{figure}[t]\n\t\\includegraphics[width=\\columnwidth]{ethanol_OH_dissociation_curve.png}\n\t\\caption{One-dimensional cut through the PES of ethanol along the O--H bond distance for different ML-FFs (solid blue, yellow and orange lines) compared to \\textit{ab initio} reference data (dashed black line). Close to the region sampled by the training data (range highlighted in gray), all model predictions are virtually identical to the reference method (see zoomed view). When extrapolating far from the sampled region, the different models have increasingly large prediction errors and behave unphysically.}\n\t\\label{fig:etoh_dissociation_curve}\n\\end{figure}\n\n\n\\paragraph{Importance of data consistency} Although it may appear trivial, it is crucial that all data used for training a model is internally consistent: A single level of theory (method and basis set) should be used to calculate the reference data. 
When multiple quantum chemical codes (or even different versions of the same code) are used for data generation, it should be checked that their output is numerically identical when given the same input geometry (if they do not then this will effectively manifest itself like noisy outputs, severely deteriorating the precision of the ML model). Further, many \\textit{ab initio} codes automatically re-orient the input geometry such that the principal moments of inertia are aligned with the $x$-, $y$- and $z$\\nobreakdash-axes, so extra care should be taken when forces or other orientation-dependent quantities (i.e.\\ electric moments) are extracted to verify they are consistent with the input geometry. When some calculation settings need to be adapted for a subset of the data, e.g.\\ for cases with difficult convergence, it is important to check that values computed with the modified settings are consistent with the rest of the data. Additionally, for training some ML models, it may be essential that atoms are ordered in a particular way throughout the data set. For example, the permutational symmetry of (s)GDML models is limited to the transformations recovered from the training set, whereas the NN models discussed in this review are fully agnostic with respect to atom indexing.\n\n\\subsection{Data preparation}\n\\label{subsec:data_preparation}\nAfter the reference data is collected, it has to be prepared for the training procedure. This includes splitting the data into different subsets, which are reserved for separate purposes. Some models may also require that the data is preprocessed in some way before the training can start. In the following, important aspects of these preparation steps are highlighted.\n\n\\paragraph{Splitting the data}\nPrior to training any ML model, it is necessary to split the reference data into disjoint subsets for training\/validation and testing (see Section~\\ref{subsubsec:model_selection_how_to_choose_hyperparameters}). While the training\/validation set is used for fitting the model, the test set is only ever used after a model is trained to estimate its generalization error, i.e.\\ to judge how well the model performs on unseen data.\\cite{hansen2013assessment,hastie2009elements} It is very important to keep the two splits separate, as it is easily possible to achieve training errors that are several orders of magnitude lower than the true generalization error when the model is not properly regularized. Many models also feature hyperparameters, such as kernel widths, regularization terms or learning rates, that must be tuned by comparing several trained model variants on a third dataset used purely for validation (split from the training\/validation set). Note that information from the validation set will still enter the model indirectly, i.e.\\ it also participates in the training process. This is why a strict separation of the training\/validation set from the test set is crucial. Undetected duplicates in the dataset can complicate splitting, as the contamination of the test set with training data (``data leakage'') might go unnoticed. In this case, the model is effectively trained on part of the test set and estimates of the generalization error might be too optimistic and unreliable. Such a scenario can occur even when no obvious mistakes were made, e.g.\\ when the structures for a dataset are sampled by running a long MD simulation where snapshots are written very frequently. 
Structures collected from adjacent time steps may be highly correlated in this case and when splitting the data randomly into training and test sets, a large portion of both sets will be almost identical. In such a case, instead of using a random split, a better approach would be to use a time-split of the dataset,\\cite{lemm2011introduction} e.g.\\ using the first 80\\% of the MD trajectory as the training\/validation set and reserving the last 20\\% for testing. \n\n\\paragraph{Data preprocessing}\nPrior to training a model, the raw data is often processed in some way to improve the numerical stability of the ML algorithm. For example, a common practice is normalization, where inputs (or prediction targets) are scaled and shifted to lie in the range $-1\\dots1$ or to have a mean of zero and unit variance. The constants required for such transformations must never be extracted from the complete dataset. Instead, only the training set may be used to obtain this information.\\cite{muller2001introduction,lemm2011introduction,hansen2013assessment} Otherwise, estimates of the generalization error on the test set may be overconfident (this is another form of data leakage). While normalization may be less common for the purpose of constructing ML-FFs, any ``data-dependent`` transformation must be done carefully. For example, it may be desirable to subtract the mean energy of structures from the energy labels in order to obtain numbers with smaller absolute values (for numerical reasons). This mean energy should be calculated only from the structures in the training set.\n\nIf a model is trained using a hybrid loss that incorporates multiple interdependent properties, such as energy and forces, it is important to consider the effects of the normalization procedure on the functional relationship of those values. For example, multiplying the energy labels by a factor requires that the forces are treated in the same way, because the factor carries over to the derivative. Also, while subtracting the mean value from energy labels is valid, it is not correct to add any constant to the force labels, because that would translate into a linear term in the energy domain (the energy is related to the forces through integration). Consequently, the consistency between both label types would be broken and an energy conserving model would be incapable of learning. Even when doing simple unit transformations, care should be taken not to introduce any inconsistencies. For example, when energy labels are converted from $E_{\\rm h}$ to kcal~mol$^{-1}$ and atom coordinates from $a_0$ to \\AA, force labels have to also be converted to kcal~mol$^{-1}$~\\AA$^{-1}$ so that all data is consistent. Depending on which code was used to obtain the reference data, it is even possible that units for some labels \\emph{must} be converted, because they may be given in different unit systems in the raw data (\\textit{ab initio} codes often report energy and forces in atomic units, whereas for coordinates, angstroms are popular).\n\n\\subsection{Training the model}\n\\label{subsec:training_the_model}\nAfter the data has been collected and prepared, the next step is training the ML-FF. During the training process, the parameters of the model are tuned to minimize a loss function, which measures the discrepancy between the training data and the model predictions. In some cases, the optimal solution can be found analytically. 
When this is not possible, the parameters are typically optimized by gradient descent or a similar algorithm (see Ref.~\\citenum{ruder2016overview} for an overview). The hyperparameters of a model (e.g.\\ the number of layers or their width in the case of NNs) can also be selected in this step, albeit by checking the model performance on the validation set after training (instead of optimizing them directly). This section details the training process and highlights important points to consider, e.g.\\ the choice of loss function or how to prevent overfitting of the model to the training data.\n\n\\paragraph{Choosing the loss function}\nFor regression tasks, a standard choice for the loss function is the mean squared error (MSE) given by $\\mathcal{L}=\\frac{1}{N}\\sum_{i=1}^{N} (y_i-\\hat{y}_i)^2$, because it punishes outliers disproportionately. Here, the index $i$ runs over all $N$ samples of the training data, $y_i$ is the reference value for data point $i$ and $\\hat{y}_i$ is the corresponding model prediction. When the MSE is used as loss function, it is implicitly assumed that any noise present in the reference data is distributed normally, which without additional information, is a sensible guess for most data. Further, the MSE loss allows finding the optimal parameters analytically (due to convexity) for linear ML algorithms, such as kernel ridge regression (see Eq.~\\ref{eq:krr_coefficient_relation_regularized} in Section~\\ref{subsubsec:kernel_based_methods}). However, the MSE is not necessarily the best choice for all cases. For example, to make the model less sensitive to outliers, a common alternative is to use a mean absolute error (MAE) loss given by $\\mathcal{L}=\\frac{1}{N}\\sum_{i=1}^{N} \\lvert y_i-\\hat{y}_i \\rvert$. Other functional forms, such as Huber loss\\cite{huber1992robust} or even an adaptive loss\\cite{barron2019general}, are also possible, provided they are a meaningful measure of model performance.\n\nAfter deciding on the general form of the loss function, the question remains which labels $y$ to use as a reference. \nWhile the potential energy is an obvious choice, in classical MD, the PES is explored via integration of Newton's second law of motion, which exclusively involves atomic forces. Since an important objective of ML-FFs is to reproduce the dynamical behavior of molecules in MD simulations as well as possible, it could even be argued that accurate force predictions should take priority over energy predictions in MD applications. However, since energy labels are usually available as a byproduct of force calculations, it seems reasonable to include both label types in the hope that this will help improve the overall prediction performance for both quantities. This gives rise to models based on hybrid loss functions that simultaneously penalize force $\\mathbf{F}$ and energy $E$ training errors. Assuming an MSE loss, it generally takes the form\n\\begin{equation}\n\\mathcal{L} = \\frac{1}{N}\\sum_{i=1}^{N} \\underbrace{\\lVert \\hat{\\mathbf{F}}_i-\\mathbf{F}_{i}\\rVert^{2}}_{\\mathcal{L}_{\\mathbf{F}_{i}}}+\\eta\\underbrace{(\\hat{E}_i-E_{i})^{2}}_{\\mathcal{L}_{E_{i}}}\\,,\n\\label{eq:multi_objective_loss}\n\\end{equation}\nwhere the hyperparameter $\\eta$ determines the relative weighting between both loss terms to account for differences in units, information content, and noise level of the label types. 
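To make Eq.~\\ref{eq:multi_objective_loss} concrete, the following PyTorch-style sketch shows how such a hybrid loss could be evaluated for a batch of structures, with the forces obtained as the negative gradient of the predicted energy with respect to the atomic positions; the \\texttt{model} callable and the tensor shapes are assumptions for illustration only:\n\\begin{verbatim}\nimport torch\n\ndef hybrid_loss(model, Z, R, E_ref, F_ref,\n                eta=0.01):\n    # Z: (batch, atoms) atomic numbers\n    # R: (batch, atoms, 3) positions\n    R = R.clone().requires_grad_(True)\n    E_pred = model(Z, R)  # (batch,) energies\n    # forces as negative gradient of the\n    # energy w.r.t. atomic positions\n    F_pred = -torch.autograd.grad(\n        E_pred.sum(), R, create_graph=True)[0]\n    loss_F = ((F_pred - F_ref)**2\n              ).sum(dim=(1, 2)).mean()\n    loss_E = ((E_pred - E_ref)**2).mean()\n    return loss_F + eta*loss_E\n\\end{verbatim}\nBackpropagating this value then adjusts the model parameters with respect to both objectives simultaneously, subject to the trade-off discussed in the following.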
A bilateral reduction of both loss terms is only possible if the objectives are non-competing, i.e.\\ when the optimal parameter set is equally effective across both tasks. For this to be true, it must hold that \n\\begin{equation}\n\\begin{aligned}\n\\lVert\\hat{\\mathbf{F}}_i - \\mathbf{F}_i\\rVert^2 =& \\eta(\\hat{E}_i - E_i)^2\\,,\\\\\n\\Rightarrow \\lVert-\\nabla_{\\mathbf{R}}(\\hat{E}_i - E_i)\\rVert^2 =& \\eta(\\hat{E}_i - E_i)^2\n\\end{aligned}\n\\label{eq:loss_relation}\n\\end{equation}\n at every training point $i$ (here, the relation $\\mathbf{F}=-\\nabla_{\\mathbf{R}}E$ was substituted). Otherwise, the objectives $\\mathcal{L}_{\\mathbf{F}_{i}}$ and $\\mathcal{L}_{E_{i}}$ are necessarily minimized by a different set of model parameters. Eq.~\\ref{eq:loss_relation} is only true in general when $\\mathcal{L}_{E_{i}}=\\mathcal{L}_{\\mathbf{F}_{i}}=0$ for all~$i$, which is not fulfilled in practice, because both labels may contain noise and they can usually not be fitted perfectly. A model trained using a hybrid loss (Eq.~\\ref{eq:multi_objective_loss}) will thus have to compromise between fulfilling both objectives on the training data, as opposed to joining energy and force labels for a performance gain on both. For this reason, the use of hybrid loss functions (or how to weight different contributions) warrants careful consideration depending on the intended application of the final model. Some models, e.g.\\ (s)GDML (see \\ref{subsubsec:symmetric_gradient_domain_machine_learning}), do not even include energy constraints in their loss function at all and are trained on forces only. The energy can still be recovered via integration, but it does not participate in the training procedure except for determining the integration constant. In the end, the ultimate measure of a model's quality should not be how well it minimizes a particular loss function, but instead how well it is able to reproduce the experimental observables of interest. Also, it is important to keep in mind that the loss function measured on the training data is only a proxy for the true objective of any model, which is to generalize to unseen data. Compromising between the energy and force labels of the training data can even improve prediction accuracy for \\emph{both} label types on unseen data. For a more thorough discussion on the role of gradient reference data and how it can improve prediction performance, see Ref.~\\cite{chmiela2019towards,christensen2020on,meyer2020machine}.\n\n\\paragraph{Tuning hyperparameters}\nHyperparameters, such as kernel widths or the depth and width of a neural network, are typically optimized independently of the parameters that determine the model fit to the data: A hyperparameter configuration is chosen, the model is trained, and its performance is measured on the validation set. This process is repeated for as many trials as are affordable or until the desired accuracy is reached. Here it is crucial that no test data is used to measure model performance when tuning hyperparameters, so the ability to estimate the generalization error on the test set is not compromised. Choosing good values for the hyperparameter regimes requires some experience and intuition of the problem at hand. Fortunately, many models are quite robust and good default hyperparameters exist, which do not require any further tuning to arrive at good results. 
In other cases, hyperparameter tuning can be automated (for example via grid or random search\\cite{lecun2012efficient,muller2001introduction,bergstra2012random,hansen2013assessment,chmiela2019}) and does not need to be performed manually. See also Section~\\ref{subsubsec:model_selection_how_to_choose_hyperparameters} for a more detailed discussion on tuning hyperparameters.\n\n\\paragraph{Regularization}\nBecause ML models contain many parameters (sometimes even more than the number of data points used for training), it is possible or even likely that they ``overfit'' to the training data. An overfitted model achieves low prediction errors on the training set, but performs significantly worse on unseen data (Fig.~\\ref{fig:overfitting}A). The aim of regularization methods is to prevent this unwanted effect by limiting or decreasing the complexity of a model. \n\\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{regularization.pdf}\n\t\\caption{\\textbf{A}: One-dimensional cut through a PES predicted by different ML models. The overfitted model (red line) reproduces the training data (black dots) faithfully, but oscillates wildly in between reference points, leading to ``holes'' (spurious minima) on the PES. During an MD simulation, trajectories may become trapped in these regions and produce unphysical structures (inset). The properly regularized model (green line) may not reproduce all training points exactly, but fits the true PES (gray line) well, even in regions where no training data is present. However, too much regularization may lead to underfitting (blue line), i.e.\\ the model becomes unable to reproduce the training data at all. \\textbf{B}: Typical progress of the loss measured on the training set (blue) and on the validation set (orange) during the training of a neural network. While the training loss decreases throughout the training process, the validation loss saturates and eventually increases again, which indicates that the model starts to overfit. To prevent overfitting, the training can be stopped early once the minimum of the validation loss is reached (dotted vertical line).}\n\t\\label{fig:overfitting} \n\\end{figure}\n\nWhen the loss function is minimized iteratively by gradient descent or similar algorithms, as is common practice for training NNs, one of the simplest methods to prevent overfitting is \\emph{early stopping} (see, e.g., Ref.~\\citenum{prechelt1998early}): In the beginning of the training process, prediction errors typically decrease on both training and validation data. At some point, however, because the validation set is not used to directly optimize parameters, the performance on the training data will continue to improve, whereas the loss measured on the validation set will stagnate at a constant value or even begin to increase again. This indicates that the model starts overfitting. Early stopping simply halts the training process as soon as the validation error stops improving (instead of waiting for convergence of the training error), see Fig.~\\ref{fig:overfitting}B. Early stopping also limits the size of the neural network weights and thus implicitly limits the complexity of the underlying function class. Similar to tuning hyperparameters, only the validation set, but never the test set, must be used for determining the stopping point.
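A minimal sketch of this procedure is shown below (all names, as well as the patience of 50 epochs, are hypothetical placeholders):\n\\begin{verbatim}\nbest_val = float(\"inf\")\nbest_params = None\npatience = 0\n\nfor epoch in range(max_epochs):\n    # one pass of (stochastic) gradient descent\n    train_epoch(model, training_set)\n    val = validation_loss(model, validation_set)\n    if val < best_val:\n        # validation error still improves:\n        # remember the current parameters\n        best_val = val\n        best_params = model.get_parameters()\n        patience = 0\n    else:\n        patience += 1\n        if patience > 50:\n            # no improvement for 50 epochs\n            break\n\n# restore the best parameters found\nmodel.set_parameters(best_params)\n\\end{verbatim}\n\nAnother method of regularization is the introduction of penalty terms to the loss function. 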
Since overfitted models often are characterized by high variance in the prediction (see Fig.~\\ref{fig:overfitting}A), the idea is to penalize large model parameters. For example, $L^2$ regularization (adding the squared magnitude of parameters to the loss) shrinks the $L^2$-norm of the parameter vectors towards zero and prevents very large parameter values. On the other hand, $L^1$ regularization (adding the absolute values of parameters to the loss) shrinks their $L^1$-norm, i.e.\\ it favors sparse parameter combinations. Typically, the regularization term is weighted by an additional hyperparameter $\\lambda$ that determines its strength (like all hyperparameters, $\\lambda$ has to be tuned on the validation set). Note that solving Eq.~\\ref{eq:krr_coefficient_relation_regularized} to determine the parameters of a kernel method will result in an $L^2$-regularized model trained on the MSE loss function.\n\n\\subsection{Using ML-FFs in production}\n\\label{subsec:using_the_model_in_production}\nThe main motivation for training an ML-FF is to use it for some production task, such as running an MD simulation. Before doing so however, it is advisable to verify that it fulfills the accuracy requirements for its intended application. At this point in time, the test set becomes important: Since it was neither used directly nor indirectly during the training process, the data in the test set allows to estimate the performance of a model on truly unseen data, i.e.\\ how well it generalizes. For this, it is common practice to compute summary errors on the test set, for example the mean absolute error (MAE) or root mean squared error (RMSE), as a measure of the global accuracy of a model. In general, such a way of quantifying accuracy gives an overview of the ML model's performance on the given dataset and provides a simple way to benchmark. \n\nHowever, summary errors are biased towards the densely sampled regions of the PES, whereas much larger errors can be expected for less populated regions. Therefore, while summary errors measured on the test set are typically a good indicator for the quality of a model, they are not necessarily the best way to judge how well an ML-FF performs at its primal objective, namely capturing the relevant quantum interactions present in the original molecular system. In other words, performance measures evaluated on the test set should not be trusted blindly. They are only reliable when the test set is representative of the new data encountered during production tasks, i.e.\\ when they are drawn from the same distribution. When a model has to extrapolate, it might give unreliable predictions, even when its performance on the test set is satisfactory. When in doubt, especially when an ML-FF is used for a different task than it was originally constructed for, it is better to collect a few new reference data points to verify that a model is still valid for its use case. Because of the generally limited extrapolation capabilities of ML models, results obtained from studies with ML-FFs should always be scrutinized more carefully than e.g.\\ results obtained with conventional FFs. For example, it is advisable to randomly select a few trajectories and verify that the sampled structures look ``physically sensible'', e.g.\\ no extremely short or long bonds are present and atoms have no unusual valencies. 
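Such checks are straightforward to automate; for example, a minimal sketch along the following lines (the file name and the distance threshold are arbitrary placeholders) could flag frames containing unusually short interatomic distances:\n\\begin{verbatim}\nfrom ase.io import read\n\n# ML-FF trajectory to be checked\n# (placeholder file name)\nframes = read(\"mlff_trajectory.traj\",\n              index=\":\")\n\nfor i, atoms in enumerate(frames):\n    # all pairwise interatomic distances\n    d = atoms.get_all_distances()\n    d_min = d[d > 0].min()\n    if d_min < 0.7:  # Angstrom\n        print(\"frame\", i,\n              \"min. distance\", d_min)\n\\end{verbatim}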
Since the PES is a high-dimensional object, rare events, where a trajectory visits a part of configurational space that is not sampled in the reference data, are always possible, even when the PES was carefully sampled. If any questionable model predictions are found, it is advisable to double-check their accuracy with additional reference calculations. \n\n\\subsection{Example code and software packages}\n\\label{subsec:example_code_and_software_packages}\nWhile many modern ML-FFs are conceptually simple, their implementation is often not straightforward, involving many intricate details that cannot be exhaustively covered in publications. Instead, those details are best conveyed by a reference implementation of the respective model. Publicly available, well-maintained codes make it possible to replicate numerical experiments and to build on top of existing models with minimal effort. \n\nIn this section, example code snippets for training and evaluating kernel- and NN-based ML-FFs with the \\texttt{sGDML}\\cite{chmiela2019} (\\ref{subsubsec:sgdml_package}) and \\texttt{SchNetPack}\\cite{schutt2018schnetpack} (\\ref{subsubsec:schnetpack_package}) software packages are given. This is followed by a short description of other popular software packages for the construction of ML-FFs (\\ref{subsubsec:other_software_packages}). Note that the list is not comprehensive and a number of other similar packages exist.\n\n\\subsubsection{The \\texttt{sGDML} package}\n\\label{subsubsec:sgdml_package}\nA reference implementation of the (s)GDML model is available as a Python software package at \\url{http:\/\/www.sgdml.org}~\\cite{chmiela2019}. It includes a command-line interface that guides the user through the complete process of model creation and testing, in an effort to make this ML approach accessible to a broad range of practitioners. Interfaces to the Atomic Simulation Environment (ASE)\\cite{ase17} or i-PI\\cite{ipiv2} make it straightforward to perform MD simulations, vibrational analyses, structure optimizations, nudged elastic band computations, and more. \n\nTo get started, only user-provided reference data is needed, specifically a set of Cartesian geometries with corresponding total energy and atomic-force labels. Force labels are necessary, because sGDML implements energy conservation as an explicit linear operator constraint by modeling the FF reconstruction as the transformation of an underlying energy model (see Section~\\ref{subsubsec:symmetric_gradient_domain_machine_learning}). The trained model will give predictions at the accuracy of the reference data and can be queried like any other FF.\n\n\\paragraph{Dataset preparation} The \\texttt{sGDML} package uses a proprietary format for its datasets, but scripts to import and export to all file types supported by the ASE package\\cite{ase17}, which covers most popular standards, are included. To convert a dataset \\texttt{<dataset>}, simply call\n\\begin{verbatim}\n\tsgdml_dataset_via_ase.py <dataset>\n\\end{verbatim}\nand follow the instructions.\n\n\\paragraph{Training} The most convenient way to reconstruct an FF is via the command line interface:\n\\begin{verbatim}\nsgdml all <dataset> <n_train> <n_validate>\n\\end{verbatim}\nThis command will automatically generate a fully trained and cross-validated model and save it to a file, i.e.\\ model selection and hyperparameter tuning (see Section~\\ref{subsubsec:model_selection_how_to_choose_hyperparameters}) are performed automatically. The parameters \\texttt{<n\\_train>} and \\texttt{<n\\_validate>} specify the sample sizes for the training and validation subsets, respectively. 
All remaining points are reserved for testing. Each subset is sampled from the provided reference dataset \\texttt{<dataset>} without overlap.\n\n\\paragraph{Using the model} To use the trained model, the sGDML predictor is instantiated from the model file \\texttt{<model\\_file>} generated above and energy and forces are queried for a given geometry (for example stored in an XYZ file \\texttt{<geometry\\_file>}):\n\\begin{verbatim}\nimport numpy\nfrom sgdml.predict import GDMLPredict\nfrom sgdml.utils import io\n\n# Load model from file\nparameters = numpy.load(\"<model_file>\")\ngdml = GDMLPredict(parameters)\n\n# Load structure from xyz file\nr, _ = io.read_xyz(\"<geometry_file>\")\n\n# Evaluate model\n# (energies e and forces f)\ne, f = gdml.predict(r)\n\\end{verbatim}\n\nIt is also possible to run MD simulations using ASE and the \\texttt{Calculator} interface included with the \\texttt{sGDML} package:\n\n\\begin{verbatim}\nfrom ase import units\nfrom ase.io import read\nfrom ase.md.verlet import VelocityVerlet\nfrom ase.md.velocitydistribution import \\\\\n    MaxwellBoltzmannDistribution\nfrom sgdml.intf.ase_calc import \\\\\n    SGDMLCalculator\n\n# Load sGDML model as ASE calculator\ncalc = SGDMLCalculator(\"<model_file>\")\n\n# Load structure and attach calculator\natoms = read(\"<geometry_file>\")\natoms.set_calculator(calc)\n\n# Initialize momenta at 300 K\nMaxwellBoltzmannDistribution(\n    atoms, 300*units.kB)\n\n# Set up MD using the velocity Verlet\n# integrator and a time step of 0.2 fs\ndyn = VelocityVerlet(\n    atoms, 0.2*units.fs,\n    trajectory=\"<traj_file>\")\n\n# Simulate for 1000 steps\ndyn.run(1000)\n\\end{verbatim}\n\nTo run this script, a trained model (\\texttt{<model\\_file>}) and an initial geometry (\\texttt{<geometry\\_file>}) are needed. The resulting MD trajectory is stored in a file \\texttt{<traj\\_file>}. For more details and application examples, please visit the documentation at \\url{www.sgdml.org\/doc\/}.\n\n\\subsubsection{The \\texttt{SchNetPack} package}\n\\label{subsubsec:schnetpack_package}\n\\texttt{SchNetPack}\\cite{schutt2018schnetpack} is a toolbox for developing and applying deep neural networks to the atomistic modeling of molecules and materials, available at \\url{https:\/\/schnetpack.readthedocs.io\/}. It offers access to models based on (weighted) atom-centered symmetry functions and the deep tensor neural network SchNet, which can be coupled to a wide range of output modules to predict potential energy surfaces and forces, as well as a growing number of other quantum-chemical properties. \\texttt{SchNetPack} is designed to be readily extensible to other neural network potentials such as the DTNN~\\cite{schutt2017quantum} or PhysNet~\\cite{unke2019}. It provides extensive functionality for training and deploying these models, including access to common benchmark datasets. It also provides an Atomic Simulation Environment (ASE)\\cite{ase17} \\texttt{Calculator} interface, which can be used for performing a wide variety of tasks implemented in ASE. Moreover, \\texttt{SchNetPack} includes a fully functional MD suite, which can be used to perform efficient MD and PIMD simulations in different ensembles.\n\nAs it is based on the PyTorch deep learning framework\\cite{paszke2019pytorch}, \\texttt{SchNetPack} models are highly efficient and can be applied to large datasets and across multiple GPUs. Combined with the modular design paradigm of the code package, these features also allow for a straightforward implementation and evaluation of new models. 
Similar to the \\texttt{sGDML} package, the central commodity for training models in \\texttt{SchNetPack} is a dataset containing the Cartesian geometries (including unit cells and periodic boundary conditions, if applicable) and atom types, as well as the target properties to be modeled (e.g.\\ energies, forces, dipole moments, etc.). More information can be found in Ref.~\\citenum{schutt2018schnetpack}.\n\n\\paragraph{Dataset preparation} \\texttt{SchNetPack} uses an adapted version of the ASE database format to handle reference data. The package provides several routines for preparing custom datasets, as well as a range of pre-constructed dataset classes for popular benchmarks (e.g.\\ QM9\\cite{ramakrishnan2014quantum} and MD17\\cite{chmiela2017}), which will automatically download and format the data. For example, molecular data from the MD17 dataset can be loaded via\n\\begin{verbatim}\nspk_load.py md17 <molecule> <datadir>\n\\end{verbatim}\nwhere \\texttt{<molecule>} indicates the molecule for which data should be loaded (e.g.\\ \\texttt{ethanol}), while the second argument \\texttt{<datadir>} specifies where the data is stored locally.\n\n\\texttt{SchNetPack} also provides a utility script for converting data files in the extended XYZ format, which is able to handle a wide variety of properties, to the database format used internally. Conversion can be invoked with the command\n\\begin{verbatim}\nspk_parse.py <input_file> <db_file>\n\\end{verbatim}\nwhere the arguments specify the file paths to the \\texttt{<input\\_file>} data file and the \\texttt{<db\\_file>} database in \\texttt{SchNetPack} format, respectively.\n\n\\paragraph{Training} \nAs for the \\texttt{sGDML} package, training and evaluating ML models in \\texttt{SchNetPack} can be performed via a command line interface. For example, a basic model can be trained with the script:\n\\begin{verbatim}\nspk_run.py train [model_type]\n  [dataset_type] <dataset> <model>\n  --split <n_train> <n_validate>\n\\end{verbatim}\nHere, \\texttt{[model\\_type]} specifies which kind of NNP to use (\\texttt{wacsf} for a descriptor-based NNP using wACSFs\\cite{gastegger2018wacsf}, or \\texttt{schnet} for the SchNet\\cite{schutt2017schnet} end-to-end NNP architecture) and \\texttt{[dataset\\_type]} specifies either a preexisting dataset (e.g.\\ \\texttt{qm9} or \\texttt{md17}), or a \\texttt{custom} dataset provided by the user. The next two arguments are the paths to the reference \\texttt{<dataset>} and the file \\texttt{<model>} the trained model will be written to. The arguments \\texttt{<n\\_train>} and \\texttt{<n\\_validate>} specify the sample sizes for the training and validation subsets, while the remaining points are reserved for testing. \\texttt{SchNetPack} offers a wide range of additional settings to modify the training process (e.g.\\ model composition, use of GPU, how different properties should be treated, etc.), see \\url{https:\/\/schnetpack.readthedocs.io\/}.\n\n\\paragraph{Using the model} \nOnce a model has been trained, it can be evaluated in several different ways. The most basic method is to perform predictions via:\n\\begin{verbatim}\nimport torch\nimport ase.io\nfrom schnetpack.data.atoms \\\\\n\timport AtomsConverter\n\n# Load model from file\nspk_model = torch.load(\"<model_file>\")\n\n# Set up converter for ASE atoms\nconverter = AtomsConverter()\n\n# Load structure from xyz file\natoms = ase.io.read(\"<geometry_file>\")\ninputs = converter(atoms)\n\n# Evaluate model and collect\n# predictions in a dictionary\nresults = spk_model(inputs)\n\\end{verbatim}\n\nIt is also possible to use the \\texttt{SchNetPack} MD suite to perform various simulations with the trained model. 

It is also possible to use the \texttt{SchNetPack} MD suite to perform various simulations with the trained model.
Continuing the above example, a basic MD run can be carried out as:
\begin{verbatim}
import ase.io
import schnetpack.md as md

# Set up the system using 1 replica
# (use n_replicas > 1 for PIMD)
md_system = md.System(n_replicas=1)

# Load structure from xyz file
atoms = ase.io.read("<geometry>")
md_system.load_molecules(atoms)

# Initialize momenta at 300 K
init = md.MaxwellBoltzmannInit(300)
init.initialize_system(md_system)

# Set up integrator
tstep = 0.2 # fs
integrator = md.VelocityVerlet(tstep)

# Prepare model for MD (specify
# units, required properties, etc.)
calculator = md.SchnetPackCalculator(
    spk_model,
    required_properties=["forces"],
    force_handle="forces",
    position_conversion='A',
    force_conversion='kcal/mol/A')

# Combine everything
simulator = md.Simulator(
    md_system, integrator, calculator)

# Simulate for 1000 steps
simulator.simulate(1000)
\end{verbatim}

Simulations can be further modified via hooks, which introduce temperature and pressure control, as well as various sampling schemes.
Further documentation of the code package and usage tutorials can be found at \url{https://schnetpack.readthedocs.io/}.

\subsubsection{Other software packages}
\label{subsubsec:other_software_packages}
\paragraph{AMP: Atomistic Machine-learning Package}
AMP is a Python package designed to integrate closely with the Atomistic Simulation Environment\cite{ase17} (ASE) and aims to be as intuitive as possible. Its modular architecture allows many different combinations of structural descriptors and model types. The main idea of AMP is to construct ML-FFs \emph{on demand}, i.e.\ simulations are first started with an \textit{ab initio} method and later switched to the ML-FF once the model is sufficiently accurate. The package is described in greater detail in Ref.~\citenum{khorshidi2016amp} and on its official website \url{https://amp.readthedocs.io/}.

\paragraph{DeePMD-kit}
The DeePMD-kit is a package written in Python/C++ that aims to minimize the effort required to build deep NNPs with different structural descriptors. It is based on the TensorFlow deep learning framework\cite{abadi2016tensorflow} and offers interfaces to the high-performance classical and path-integral MD packages LAMMPS\cite{plimpton1993fast} and i-PI\cite{ipiv2}. More details on the DeePMD-kit can be found in Ref.~\citenum{wang2018deepmd} or on \url{https://github.com/deepmodeling/deepmd-kit/}.

\paragraph{Dscribe} Dscribe is a Python package for transforming atomic structures into fixed-size numerical fingerprints.\cite{dscribe} These descriptors can then be used as input for neural networks or kernel machines to construct ML-FFs. Supported representations include the standard Coulomb matrix\cite{rupp2012fast} and variants for the description of periodic systems\cite{faber2015crystal}, ACSFs\cite{behler2011atom}, SOAP\cite{bartok2013representing}, and MBTR\cite{huo2017unified}. More details can be found on the official website \url{https://singroup.github.io/dscribe/} or in Ref.~\citenum{dscribe}.

\paragraph{RKHS toolkit}
The RKHS toolkit is mainly intended for constructing highly accurate and efficient PESs for studying scattering reactions of small molecules. As described in section~\ref{subsubsec:kernel_based_methods}, the evaluation of kernel-based methods scales linearly with the number of training points $N$ (see Eq.~\ref{eq:kernel_regression}).
By using special kernel functions and precomputed lookup tables, the RKHS toolkit brings this cost down to $\mathcal{O}(\log N)$. However, it requires the training data to lie on a grid, which limits its applicability to small systems, for which it is meaningful to sample the PES by scanning a list of values for each internal coordinate. The implemented kernel functions also make it possible to encode physical knowledge about the long-range decay behavior of certain coordinates, which enables accurate extrapolation well beyond the range covered by the training data. A Fortran90 implementation of the toolkit can be downloaded from \url{https://github.com/MMunibas/RKHS/} and the algorithmic details are described in Ref.~\citenum{unke2017toolkit}.

\paragraph{QML}
QML is a Python toolkit for learning properties of molecules and solids.\cite{christensen2017qml} It supplies building blocks to construct efficient and accurate kernel-based ML models, such as different kernel functions and pre-made implementations of many structural representations, e.g.\ the Coulomb matrix\cite{rupp2012fast}, SLATM\cite{huang2016communication}, and FCHL\cite{christensen2020fchl}. The package is primarily intended for the general prediction of chemical properties, but can also be used for the construction of ML-FFs. For further details, refer to the official website \url{https://www.qmlcode.org} or the GitHub repository \url{https://github.com/qmlcode/qml/}.
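
As an illustration of the kernel-based workflow such toolkits support, the following sketch trains a simple kernel ridge regression model on molecular energies with QML. It assumes the \texttt{Compound} class and the \texttt{gaussian\_kernel} and \texttt{cho\_solve} helpers described in the QML documentation; file names and reference energies are placeholders, and function names or arguments may differ between versions, so the official examples should be consulted for a working reference:
\begin{verbatim}
import numpy as np
from qml import Compound
from qml.kernels import gaussian_kernel
from qml.math import cho_solve

# Placeholder file names and dummy
# reference energies
xyz_files = ["mol1.xyz", "mol2.xyz"]
energies = np.array([-1.0, -2.0])

# Coulomb-matrix representations
# (size must be >= largest molecule)
X = []
for f in xyz_files:
    mol = Compound(xyz=f)
    mol.generate_coulomb_matrix(
        size=23, sorting="row-norm")
    X.append(mol.representation)
X = np.array(X)

# Kernel ridge regression with a
# Gaussian kernel
sigma, lam = 100.0, 1e-8
K = gaussian_kernel(X, X, sigma)
C = K.copy()
C[np.diag_indices_from(C)] += lam
alpha = cho_solve(C, energies)

# Energies of new structures follow as
# gaussian_kernel(X_new, X, sigma) @ alpha
train_pred = K @ alpha
\end{verbatim}
Analogous building blocks are available for the other representations and kernel functions mentioned above.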

\section{Challenges}
\label{sec:challenges}
Following the best practices outlined in the previous section, the current generation of ML-FFs is applicable to a wide range of problems in chemistry that involve small- to medium-sized systems. While this space of chemical compounds is already significant in size, the ``dream scenario'' of chemists and biologists referenced in the introduction can only be realized with access to larger system sizes. Not only does the number of stable structures increase exponentially with added atomistic degrees of freedom,\cite{blum2009970,ruddigkeit2012enumeration} but many interesting phenomena also play out at nanoscale resolution, which is as yet inaccessible to ML methods. This is because some steps involved in the construction of ML-FFs that are tractable for small systems, such as sampling the reference data, become seemingly insurmountable obstacles at larger scales due to infeasible computing times. The complexity of interactions, e.g.\ the non-classical behavior of nuclei, as well as significant contributions from large fluctuations, increases the space of conformations that needs to be learned. To further complicate things, the cost of accurate \textit{ab initio} calculations increases steeply with expanding system size, limiting the amount of reference data that can be collected within a reasonable time frame. This also means that a growing number of atom correlations need to be represented by a model in order to capture the full scope of interactions present in the real system. Below, some considerations in reconciling the somewhat contradictory demands of scalability, transferability, data efficiency and accuracy in large-scale ML-FFs are outlined.

\subsection{Locality and smoothness assumptions}
\label{subsec:locality_and_smoothness_assumptions}
A fundamental challenge that must be faced by \textit{ab initio} methods, conventional FFs, and ML models alike is the many-body problem. Most properties of a physical system are determined by the interaction of many particles, whether those are electrons or, on a higher abstraction level, atoms. In fact, \textit{ab initio} calculations are expensive precisely because of the challenging computational scaling of high-dimensional many-body problems. As a result, the hierarchy of different levels of theory is directly defined by the level of correlation treatment in the respective wave function parametrization. Because the number of electronic degrees of freedom of a system is much larger than the number of atoms, the computational limitations of \textit{ab initio} methods become evident very quickly, even for small systems. Atomistic approximations scale more favorably, because they need to correlate fewer particles, but they are subject to the same scaling laws. The only escape is to neglect some correlations in favor of a reduced problem size. Unfortunately, it is to date impossible to reliably determine which interactions can be removed with minimal impact without compromising the full many-body solution. Thus, the ideal of a local model is in conflict with the very nature of many-body systems. While not fully justified from a physics perspective, assuming locality is still a useful inductive bias, which can help generalization and computational efficiency. It also helps when collecting reference data, as it implies that larger systems can be predicted using the information learned from smaller systems. Another assumption, which all ML-FFs discussed in this review make, is that the PES is smooth. This is a necessary requirement for most practical applications, since a non-smooth PES implies force discontinuities, which would lead to instabilities during MD simulations. Smoothness is also a requirement from the ML perspective, as only regular signals can be reconstructed from limited observations.

For most commonly used NNPs and many kernel-based ML-FFs, locality is built into the design explicitly through the introduction of a cutoff radius. The global interactions between atoms are modeled by accumulating individual local atomic contributions. In this ``mean-field approximation'', the interaction of a particle with its surroundings is reduced to an effective one-body problem, i.e.\ an interaction of that particle with the average effect of its neighbors. As similar neighborhoods can be identified in different compounds across chemical space, these assumptions make it possible to build models from reference calculations of small molecules that are transferable to much larger structures.\cite{huang2017dna,huang2020quantum} However, the lack of explicit higher-order terms comes at the cost of potentially losing some important interaction effects, similar to the Hartree-Fock method and Kohn-Sham DFT in \textit{ab initio} calculations.
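
To make the idea of accumulating local atomic contributions concrete, the following NumPy sketch evaluates a toy energy model in which each atom only interacts with neighbors inside a cutoff radius, damped by the smooth cosine cutoff commonly used with atom-centered symmetry functions. The pairwise term is a made-up placeholder; an actual ML-FF would replace it with a learned function of the local environment descriptor:
\begin{verbatim}
import numpy as np

def f_cut(r, r_cut):
    # Smooth cosine cutoff: 1 at r = 0,
    # 0 for r >= r_cut
    fc = 0.5*(np.cos(np.pi*r/r_cut) + 1.0)
    return np.where(r < r_cut, fc, 0.0)

def toy_atomic_energy(R, i, r_cut):
    # Placeholder local contribution:
    # damped pair terms inside the cutoff
    # (a learned model would act on a
    # descriptor of this neighborhood)
    e_i = 0.0
    for j in range(len(R)):
        if j == i:
            continue
        r = np.linalg.norm(R[i] - R[j])
        if r < r_cut:
            e_i += f_cut(r, r_cut)/r
    return e_i

def total_energy(R, r_cut=5.0):
    # Global energy as a sum of local
    # atomic contributions
    return sum(toy_atomic_energy(R, i, r_cut)
               for i in range(len(R)))

# Random toy geometry with 10 atoms
R = np.random.rand(10, 3)*10.0
print(total_energy(R))
\end{verbatim}
Because each atomic contribution depends only on a fixed local neighborhood, the cost of evaluating such a model grows linearly with the number of atoms, which is the scalability argument revisited in Section~\ref{subsec:transferability_scalability_and_long_range_interactions}.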

On the other hand, some models (e.g.\ (s)GDML) capture global correlations in the sense that a single prediction is obtained for the whole structure. Of course, this relies on reference calculations that are accurate enough to contain the relevant information. Global interactions of large systems cannot be accurately inferred from a training set of small molecules or molecular fragments, which is why reference calculations for the exact target structure are necessary. It can therefore become difficult to collect enough reference data for large structures. In addition, global models might still implicitly assume that interactions are local to some degree due to their chemical descriptor. For example, in (s)GDML models, systems are encoded as a vector of inverse pair-wise distances. Therefore, structural changes between distant atoms contribute less strongly to changes in the overall descriptor than changes between atoms that are close together.

While locality and smoothness are valid assumptions for the majority of chemical systems, there are pathological cases where they break down and ML models that rely on them perform poorly. As an example, consider cumulenes -- hydrocarbons of the form C$_{2+n}$H$_4$ ($n \ge 0$) with $n+1$ cumulative double bonds. These molecules have a rigid linear geometry, with the two terminal methylene groups forming an equilibrium dihedral angle of 0$^{\circ}$ (when $n$ is even) or 90$^{\circ}$ (when $n$ is odd). Rotating the dihedral angle out of its equilibrium position results in a sharp increase in potential energy, even though the methylene groups may be separated by several angstroms when $n$ is large. This is due to the energetically favorable overlap of $\pi$-orbitals along the carbon chain (a highly non-local interaction), which is broken when the methylene groups are rotated against each other. Additionally, the potential energy exhibits a sharp ``cusp'' at the maximum energy (i.e.\ it is not smooth), because the ground-state electronic configuration switches abruptly from one state to another (strictly speaking, multi-reference calculations would be necessary here). One-dimensional projections of the PESs predicted by ML-FFs along the rotation of the dihedral angle reveal several problems (Fig.~\ref{fig:cumulene_scans}). For example, all models predict smooth approximations by design, which is beneficial for running MD simulations, but results in large prediction errors around the cusp. Further, when the number of double bonds ($n+1$), i.e.\ the ``non-locality'' of the relevant interactions, is increased, the quality of the predictions decreases dramatically, until all models are unable to reproduce the energy profile.
\begin{figure}
\includegraphics[width=\columnwidth]{cumulene_scans.pdf}
\caption{Energy profiles of different ML-based PESs for a rotation of the dihedral angle between the terminal methylene groups of cumulenes (C$_{2+n}$H$_4$) of different sizes ($0\le n \le 7$). All reference calculations were performed with the semi-empirical MNDO method\cite{dewar1977ground} and models were trained on 4500 structures (with an additional 450 structures used for validation) collected from MD simulations at 1000~K. Because rotations of the dihedral angle are not sufficiently sampled at this temperature, the dihedral was rotated randomly before performing the reference calculations. Instead of a sharp cusp at the maximum of the rotation barrier, all models predict a smooth curve. Predictions become worse for increasing cumulene sizes, with the cusp region being over-smoothed more strongly. For $n=7$, all models fail to predict the angular energy dependence. Note that NNP models (such as PhysNet and SchNet) may already fail for smaller cumulenes when the cutoff distance is chosen too small ($r_{\rm cut} = 6$~\AA), as they are unable to encode information about the dihedral angle in the environment descriptor.
However, it is possible to increase the cutoff ($r_{\rm cut} = 12$~\AA) to counter this effect.}
\label{fig:cumulene_scans}
\end{figure}

Note that, by design, NNPs relying on message passing are unable to resolve information about the dihedral angle if information between hydrogen atoms on opposite ends of the molecule cannot be exchanged directly (i.e.\ $r_{\rm cut}$ is too small) and predict constant energies in this case. The same is true for descriptor-based NNPs, as fingerprints of chemical environments also only consider atoms up to a cutoff (see Eqs.~\ref{eq:two_body_symmetry_function}~and~\ref{eq:three_body_symmetry_function}). Any kernel method taking as input local structural descriptors relying on cutoff radii (e.g.\ SOAP\cite{bartok2013representing} or FCHL19\cite{christensen2020fchl}) will suffer from the same problems. Even when a ``global'' descriptor such as inverse pair-wise distances is chosen (e.g.\ Coulomb matrix\cite{rupp2012fast}), changes in the dihedral angle between distant groups of atoms are not resolved sufficiently for accurate predictions (see the sGDML model in Fig.~\ref{fig:cumulene_scans}). The only way to fix this problem in general is to drop the locality assumption completely, for example by including all $\binom{N}{4}$ possible dihedral angles in the structural descriptor (without introducing additional factors that decrease the weight of these features with increasing distance between atoms). However, due to the combinatorial explosion of the number of possible dihedral angles, this would lead to extremely large descriptors whenever the number of atoms $N$ is not very small. The resulting models would be slow to evaluate and require a lot of reference data to give robust predictions (to prevent them from entering the extrapolation regime). An expert choice, i.e.\ including only a single relevant dihedral angle in the descriptor, is a possible way around this issue, but it requires prior knowledge of the problem at hand and goes somewhat against the ML philosophy.

As a final remark, it should be mentioned that conventional FFs only include terms for dihedral angles between directly bonded atoms, so they are equally unable to predict the energy profiles of the larger cumulenes shown in Fig.~\ref{fig:cumulene_scans}. As such, relying on chemical locality is an assumption made by virtually all methods for approximating PESs and is not specific to ML methods.

\subsection{Transferability, scalability and long-range interactions}
\label{subsec:transferability_scalability_and_long_range_interactions}
The concept of chemical locality discussed above also plays a central role in the transferability and scalability of ML models for atomistic systems. Transferability indicates how well models can adapt to compounds varying in their chemical composition, while scalability indicates how efficiently these models scale with respect to the size of the systems modeled. Both concepts are closely related and inherently rooted in chemical locality. The assumption that interactions between atoms are local implies that similar structural motifs will give rise to comparable interactions and hence similar contributions to the global properties of a molecule or material. In an ML context, chemical locality allows a model to reuse the information learned for different parts of a molecule for similar features in different systems.
In this manner, a large atomistic system could in principle be assembled from smaller components like a jigsaw puzzle.\cite{huang2017dna} The former aspect is crucial for making models transferable, while the latter allows for the development of architectures whose evaluation cost scales linearly with system size.

ML-FFs exploiting chemical locality offer several advantages compared to global models. If trained properly, they can be applied to systems of different size and composition. The training procedure benefits in a similar manner, as local models can be trained on structures containing different numbers of atoms. Moreover, it is also possible to use only fragments of the original system during the construction of a model. This property is very attractive in situations where accurate reference computations for the whole system are infeasible due to system size and/or the scaling of the computational method. Local chemical environments are also less diverse than global atomistic structures, potentially reducing the need for extensive sampling and decreasing the chances that models enter the extrapolation regime in a production setting.
In addition, local models scale linearly with system size, as interactions are limited to the cutoff radius and can be evaluated efficiently. In contrast, global models are typically more limited in their practical applicability to extended systems. They always require reference computations to be performed for the whole system and, once trained, can only be reused for this particular molecule or material.

Despite these advantages, local ML models suffer from several inherent problems. In order to construct models which exploit locality, a chemical system needs to be partitioned in one way or another. This can, for example, be achieved by limiting interactions to terms involving only a certain number of atoms (similar to conventional FFs) or by restricting them to local atom-centered environments.
These approximations place strong limitations on which kinds of interactions can be described. As a result, local ML models have difficulty dealing with situations where non-local effects are important, such as strongly conjugated systems and excited states (see Section~\ref{subsec:locality_and_smoothness_assumptions}). For standard simulations, long-range interactions such as electrostatic and dispersion effects are a much more common phenomenon. These are particularly important for modeling extended systems, where ML models are typically believed to offer a significant advantage over more conventional FFs. Since the structure and dynamical behavior of such systems is influenced greatly by long-range interactions, ML models need to be able to account for them in a satisfactory manner.

Recovering long-range effects necessitates a balancing act between physical accuracy and computational efficiency, as the scalability of local models hinges on there being a limited number of interactions which need to be evaluated. This feat is further complicated by the typical energy scales of these interactions, which are small compared to local contributions such as bond energies.
For these reasons, it is not advisable to account for long-range interactions by simply increasing the size of local environments.
While local models with sufficiently large cutoffs are able to learn the relevant effects in principle, it may require a disproportionately large amount of data to reach an acceptable level of accuracy for an interaction with a comparatively simple functional form. The reason is that the average gradients and curvature in different regions of the PES may differ by several orders of magnitude, which makes it difficult to achieve uniformly low prediction errors across all regions. Hence, an optimal description would require employing different characteristic scales.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{LJ3.pdf}
\caption{Lennard-Jones potential (thick gray line) predicted by KRR with a Gaussian kernel. In (a), the potential energy is decomposed into short- (red), middle- (magenta) and long-range (blue) parts, which are learned by separate models (symbols show the training data and solid lines the model predictions). The mean squared prediction errors (in arbitrary units) for the respective regions are shown in the corresponding colors. In (b), the entire potential is learned by a single model using the same training points (green). All models in (a) and (b) use $r$ as the structural descriptor. Panel (c) shows a single model learning the potential, but using $r^{-1}$ as the structural descriptor (yellow). The mean squared errors (a.u.) for different parts of the potential in (b) and (c) are reported independently to allow direct comparison with the values reported in (a).}
\label{fig:LJ}
\end{figure}

For illustration, consider the following toy examples: In the first variant, a Lennard-Jones (LJ) potential\cite{jones1924determination} is separated into a region around its minimum, a repulsive short-range part, and an attractive long-range part. The task is to learn each of the three regions with a separate model (see Fig.~\ref{fig:LJ}a). In the second variant, a single model is trained on all regions at once (see Fig.~\ref{fig:LJ}b). Here, all models are kernel-based and use a Gaussian kernel (Eq.~\ref{eq:gaussian_kernel}). The kernel hyper-parameter $\gamma$ is optimized by grid search and cross-validation. Compared to the models trained on the first task, the prediction errors of the global model increase by around an order of magnitude. Further, the global model shows spurious oscillations between training points in the long-range region. When the optimal values of $\gamma$ for the different models are compared, the reason for the failure of the global model becomes apparent: The optimal values of $\gamma$ are $198.88$, $75.47$ and $0.08$ for the short-, middle- and long-range models, respectively, which highlights the multi-scale nature of the PES. The global model, on the other hand, necessarily has to compromise between the different regions, which leads to an optimal value of $\gamma=22.12$. In this toy example, the multi-scale problem can be solved by switching from $r$ as a structural descriptor to the more appropriate inverse distance $r^{-1}$ (Fig.~\ref{fig:LJ}c). Unfortunately, for realistic (high-dimensional) PESs with multiple minima, it can be difficult to find an appropriate descriptor to address the multi-scale nature of the PES, which leads to data-inefficient models. As a result, more training data is needed to reach an acceptable accuracy, which is problematic considering the computational cost of high-quality reference calculations.
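
A minimal version of this toy experiment can be reproduced with a few lines of NumPy. The sketch below fits kernel ridge regression models with a Gaussian kernel to samples of an LJ potential, once using $r$ and once using $r^{-1}$ as the descriptor; the hyper-parameters are illustrative placeholders rather than the cross-validated values quoted above:
\begin{verbatim}
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    # Lennard-Jones potential
    return 4*eps*((sigma/r)**12
                  - (sigma/r)**6)

def gauss_kernel(a, b, gamma):
    # k(x, x') = exp(-gamma*(x - x')^2)
    d = a[:, None] - b[None, :]
    return np.exp(-gamma*d**2)

def fit_krr(x, y, gamma, lam=1e-10):
    # Kernel ridge regression weights
    K = gauss_kernel(x, x, gamma)
    K += lam*np.eye(len(x))
    return np.linalg.solve(K, y)

# Training data covering the repulsive,
# minimum and long-range regions
r_train = np.linspace(0.9, 5.0, 30)
e_train = lj(r_train)
r_test = np.linspace(0.95, 4.9, 200)

# Single model using r as descriptor
a_r = fit_krr(r_train, e_train, 20.0)
pred_r = gauss_kernel(
    r_test, r_train, 20.0) @ a_r

# Same model using 1/r as descriptor
a_i = fit_krr(1/r_train, e_train, 20.0)
pred_i = gauss_kernel(
    1/r_test, 1/r_train, 20.0) @ a_i

print(np.mean((pred_r - lj(r_test))**2),
      np.mean((pred_i - lj(r_test))**2))
\end{verbatim}
Splitting the training data into the three regions and fitting a separate model to each, as in panel (a) of Fig.~\ref{fig:LJ}, works analogously with region-specific values of $\gamma$.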

One possibility for overcoming these limitations is to partition the energy into contributions modeled entirely via ML (short-range) and contributions described via explicit physical relations based on local quantities predicted via ML (long-range). A prime example of such an approach is the treatment of electrostatics, as first introduced in Ref.~\citenum{morawietz2012neural}. Here, an ML model is used to predict partial charges for each atom based on its local environment. These charges can then be used in standard Coulomb and Ewald summation to compute the long-range electrostatic energy of a system. While such schemes initially relied on point-charge reference data obtained from (arbitrary) partitioning methods of the \textit{ab initio} electron density (e.g.\ Hirshfeld charges\cite{hirshfeld1977bonded}), they have since been extended to operate on charges derived from an ML model for dipole moments (a true quantum mechanical observable).\cite{gastegger2017machine,yao2018tensormol,unke2019} Here, scalar partial charges $q_i$ are predicted for each atom $i$ and the molecular dipole moment is constructed as $\boldsymbol{\mu}=\sum_i q_i \mathbf{r}_i$, where $\mathbf{r}_i$ are the atomic positions (the predicted $q_i$ can be corrected to guarantee charge conservation\cite{unke2019}). The discrepancy between reference and predicted dipole moments is included in the loss function used for training the model (see Section~\ref{subsec:training_the_model}), and the partial charges are consequently derived in a purely data-driven manner.
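
A minimal sketch of this dipole-based construction is shown below. It only illustrates the charge-conservation correction and the dipole contribution to the loss; the raw charges would in practice be the output of a charge-prediction network, and PyTorch is used here so that gradients of the loss are available for training:
\begin{verbatim}
import torch

def dipole_loss(q_raw, positions,
                mu_ref, total_charge=0.0):
    # Correct predicted charges so that
    # they sum to the total charge
    n_atoms = q_raw.shape[0]
    corr = (q_raw.sum()
            - total_charge)/n_atoms
    q = q_raw - corr

    # Dipole moment mu = sum_i q_i * r_i
    mu_pred = (q[:, None]*positions).sum(dim=0)

    # Mean squared deviation from the
    # reference dipole moment; this term
    # is added to the energy/force loss
    return ((mu_pred - mu_ref)**2).mean()

# Toy example with made-up numbers
q_raw = torch.tensor([0.3, -0.2, 0.1],
                     requires_grad=True)
positions = torch.rand(3, 3)
mu_ref = torch.zeros(3)

loss = dipole_loss(q_raw, positions, mu_ref)
loss.backward()  # gradients w.r.t. charges
\end{verbatim}
In an actual ML-FF, this dipole term is weighted against the energy and force terms of the loss, and the corrected charges are reused in the Coulomb or Ewald summation mentioned above.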

Contrary to electrostatics, accounting for dispersion interactions is not as straightforward. First, the exact physical form of dispersion interactions is still debated and a variety of approximate schemes have been proposed.\cite{hermann2017first} Second, dispersion corrections typically depend on coefficients computed from atomic polarizabilities as local properties. Unfortunately, these are not as amenable to ML-driven partitioning, as the corresponding quantum mechanical observable is the molecular polarizability. In contrast to charges derived from dipole moments (a vector quantity), molecular polarizabilities are scalars and offer insufficient constraints for a purely data-driven partitioning. For these reasons, ML approaches often rely on the same empirical pair-wise potentials employed in correcting density functional theory computations.\cite{morawietz2013density,uteva2017interpolation,unke2019}

To summarize, local ML architectures are a promising approach towards transferable and scalable models, but they have a number of drawbacks which will still need to be addressed in the future. Promising alternative approaches to achieving transferability are ML models based directly on electronic structure methods, i.e.\ ``semi-empirical ML''\cite{li2018density,zubatyuk2019machine,stohr2020accurate} and models for the electron density and Hamiltonians\cite{schutt2019unifying}.
These approaches express fundamental quantum chemical quantities in a local representation, e.g.\ Hamiltonian matrix elements in an atomic orbital basis. Non-locality can then be introduced via the ``correct'' mathematical mechanism, e.g.\ matrix diagonalization in the case of Hamiltonians. This physically motivated structure allows such models to recover a wide range of interactions while still being transferable. They are also better suited to predicting intensive properties of molecules (whose magnitude is independent of system size), for which the assumption of additive atomic contributions is not valid. A downside of such models compared to conventional ML-FFs is the increased computational cost due to the additional matrix operations.

With respect to scalability, hybrid approaches similar to QM/MM\cite{senn2009qm} might constitute valid alternatives to pure ML models. Although several orders of magnitude more efficient than electronic structure theory, even local ML models encounter problems when faced with systems containing tens of thousands of atoms. Compared to conventional FFs, the more complex functional form underlying ML-FFs leads to an increased computational cost. In such cases, partitioning the system into regions treated at different levels of approximation can lead to a significant speedup. ML models can, for example, be embedded into regions modeled by classical force fields (yielding ML/MM-like simulation protocols) or even coarse-grained environments generated via another ML model\cite{wang2020ensemble}. Restricting elaborate ML approaches to only a subset of a chemical system would make it possible to employ more accurate approximations in a manner analogous to conventional QM/MM.

\section{Concluding remarks}
\label{sec:concluding_remarks}
The last decades have witnessed significant advances in statistical learning that have allowed ML techniques to enter our daily lives, industrial practice and scientific research.

Classically, automation in industry and scientific fields relied on hand-crafted rules that represented human knowledge.\cite{nilsson1982principles} Not only is the creation of rule-based systems laborious and prone to requiring an excessive number of cases to be covered, but it often leads to rigid structures that are unable to adapt well to new situations. Even worse, some concepts are difficult or impossible to formalize, such as human perception in image classification.

Modern statistical ML algorithms\cite{murphy2012machine,theodoridis2020machine} such as deep learning\cite{bishop1995neural,lecun2015deep,schmidhuber2015deep,goodfellow2016deep} or kernel-based learning\cite{cortes1995support,vapnik1995nature,scholkopf1998nonlinear,muller2001introduction,scholkopf2002learning} enable models that freely adapt to knowledge that is implicitly contained in datasets (in an abstract form) and thus offer a more robust way of solving problems than rule-based reasoning.
For the field of molecular simulations, ML methods may help to bridge the accuracy-efficiency gap between first-principles electronic structure methods and conventional (rule-based) FFs. Bringing both fields together has raised many questions and still poses some fundamental challenges for new generations of ML-FFs. At this point in time, ML-FFs have already become a successful and practical tool in computational chemistry.

Starting from a broad perspective, this review has focused on the role of ML in constructing force fields and assessed what can be achieved with these new techniques at the current stage of development. This has been contrasted with problems that are (so far) beyond the reach of present methods.
Illustrative examples of the relevant chemistry and ML concepts have been discussed to demonstrate the practical usefulness that modern ML techniques can bring to chemistry and physics. This includes an overview of the most important considerations behind the construction of modern ML-FFs, such as the incorporation of physical invariances, the choice of ML algorithm, and loss functions. Special attention has been given to the topic of validating ML-FFs, which requires particular care in scientific applications.\cite{lapuschkin2019unmasking} Furthermore, a comprehensive list of best practices, pitfalls, and challenges has been provided, which will serve as a useful guideline for practitioners standing on either side of this growing interdisciplinary field. These ``tricks of the trade''\cite{montavon2012neural} are often assumed to be obvious and thus omitted from publications -- here they have been deliberately spelled out to avoid unnecessary barriers to entering the field. Additionally, a small catalog of software tools that can enable and accelerate the implementation of ML-FFs has been provided as a pointer for readers wishing to adopt ML methods in their own research.

While routinely performing computational studies of condensed phase systems (e.g.\ proteins in solution) at the highest levels of theory is still beyond reach, ML methods have already made other ``smaller dreams'' a reality: Just a decade ago, it would have been unthinkable to study the dynamics of molecules like aspirin at coupled cluster accuracy. Today, a couple of hundred \textit{ab initio} reference calculations are enough to construct ML-FFs that reach this accuracy within a few tens of wavenumbers.\cite{sauceda2020construction} In the past, even if suitable reference data was available, constructing accurate force fields was labor-intensive and required human effort and expertise. Nowadays, by virtue of automatic ML methods, the same task is as effortless as the push of a button. Thanks to the speed-ups offered by ML methods over conventional approaches, studies that previously required supercomputers to be feasible in a realistic time frame\cite{Spura_CC-PIMD_PCCP2015,Litman_Porphycen-PIMD_JACS2019} can now be performed on a laptop computer\cite{schutt2017schnet,chmiela2018}.

In addition to enabling studies that were prohibitively expensive in the past, ML methods have also led to new chemical insights into systems that were thought to be already well understood. Even relatively small molecules were shown to display non-trivial electronic effects, influencing their dynamics and allowing a better understanding of experimental observations.\cite{sauceda2020} Many other unknown chemical effects potentially wait to be discovered by studies now possible with ML-FFs. At the speed at which improvements to existing ML-FFs are published, it is not unreasonable to expect significant advances that will make similar studies possible for larger systems and help realize many more ``dreams'' in the near future.

In conclusion, ML-FFs are a highly active line of research with many unexplored avenues and attractive applications in chemistry, with possibilities to contribute to a better understanding of fundamental quantum chemical properties and ample opportunity for novel theoretical, algorithmic and practical improvements. Given the success of this relatively young interdisciplinary field, it is to be expected that ML-FFs will become a fundamental part of modern computational chemistry.

\begin{acknowledgement}
OTU acknowledges funding from the Swiss National Science Foundation (Grant No.\ P2BSP2\_188147).
KRM was supported in part by the Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea Government (No.\ 2019-0-00079, Artificial Intelligence Graduate School Program, Korea University), and was partly supported by the German Ministry for Education and Research (BMBF) under Grants 01IS14013A-E, 01GQ1115, 01GQ0850, 01IS18025A, 031L0207D and 01IS18037A, and by the German Research Foundation (DFG) under Grant Math+, EXC 2046/1, Project ID 390685689. We would like to thank Stefan Ganscha for his valuable input to the manuscript. Correspondence to AT and KRM.
\end{acknowledgement}