\\section{Introduction}\t\n\nStatistics of extremes under random censoring is a relatively new area in extreme value analysis that has received considerable attention in the literature during the last few years. Examples of applications include estimating survival time \\citep{Einmahl2008,Ndao2014} and large insurance claims \\citep{Beirlant2017}, among others. In order to obtain estimates of parameters of extreme events, the extreme value index (EVI) is the primary parameter needed. Although EVI estimation in the case of complete samples has been studied extensively, the same cannot be said of the censored case. In this paper, we review existing estimators and propose two estimators that are aimed at reducing the bias and variance. In addition, we provide a simulation comparison of the various EVI estimators. \n\nThe first work on the subject can be attributed to \\citet{Beirlant2001}. The authors proposed an adaptation of the Hill estimator under random right censoring. The motivation for this adapted Hill estimator was the same as that of the Hill estimator obtained from the slope of the Pareto quantile plot. However, since the censored observations have the same value (i.e. the maximum), the Pareto quantile plot will be horizontal at those observations. As a result, the adaptation of the Hill estimator to censoring was based on the slope of the Pareto quantile plot for the noncensored observations only. In addition, by using the second-order properties of the representation of log-spacings in the exponential regression model, a bias-corrected version of the adapted Hill estimator was obtained. 
The finite sample properties of the estimator were studied through a simulation study and the estimator was found to give credible estimates for a percentage of censoring of at most 5\\%. Consistency and asymptotic normality of the estimator were obtained under some restrictive conditions on the number of noncensored observations and the sample tail fraction. \\citet{Delafosse2002} proved the almost sure convergence of the adapted Hill estimator of \\citet{Beirlant2001} under very general conditions on the number of noncensored observations.\n\nAlso, in \\citet[Section 6.1]{Reiss2007}, the authors introduced an estimator of the EVI when data are subject to random or fixed censoring. In the case of random right censoring, the Pareto or generalised Pareto distribution was fitted to the excesses over a given threshold. The likelihood function of the chosen distribution was adapted to censoring and maximised to obtain an estimator of the EVI. However, the authors made no attempt to study the asymptotic properties of their proposed estimators of the EVI. 
The added advantage was that the asymptotic normality of the one-step estimators could be established, unlike that of the maximum likelihood estimators.\n\nBased on the ideas of \\citet{Beirlant2007}, \\citet{Einmahl2008} provided a second methodological paper which considered estimators based on the top order statistics. In addition, the authors proposed a unified method to prove the asymptotic normality of the EVI estimators. A small-scale simulation showed the superiority of the adapted Hill estimator for the Pareto domain of attraction and a slight advantage of the adapted generalised Hill estimator for the Weibull and the Gumbel domains of attraction. \\citet{Einmahl2008} used restrictive conditions to prove the asymptotic normality of the EVI estimators. However, these conditions were relaxed by \\citet{Brahimi2013} to prove the asymptotic normality of the adapted Hill estimator of the EVI under random right censoring. \n\nThe estimation of the EVI has also received attention from \\citet{Gomes2010} and \\citet{Gomes2011}. These papers provide an overview of the EVI estimators in the context of random censoring. To the best of our knowledge, \\citet{Gomes2011} made the first attempt at introducing a reduced-bias estimator of the EVI, in the form of the minimum-variance reduced-bias (MVRB) estimator \\citep{Caeiro2005}. The reported simulation study showed an overall best performance for the adapted MVRB estimator for samples generated from distributions from the Pareto domain of attraction. As in \\citet{Einmahl2008}, the generalised Hill estimator performed better than the other adapted EVI estimators for samples whose underlying distribution functions are in the Weibull and Gumbel domains of attraction.\n\nThe Hill estimator for estimating the EVI under random censoring performs well, although in the classical case it is known to be biased, not location invariant and unstable. 
Efforts have been made to provide reduced-bias and minimum-variance Hill-type estimators to improve on the Hill estimator for heavy-tailed distributions (i.e. distributions in the Pareto domain of attraction). In this regard, \\citet{Worms2014} provided another methodological paper for the estimation of the EVI in the case of censoring. They provided two sets of Hill-type estimators based on the Kaplan-Meier estimation of the survival function \\citep[see][]{Kaplan1958} and the synthetic data approach of \\citet{Leurgans1987}. In addition, the authors presented a small-scale simulation that compared the performance of the two proposed estimators to the adapted Hill and MVRB estimators. The results showed that the two proposed estimators are superior to the Hill estimator, in particular, the estimator based on the ideas of \\citet{Leurgans1987}. On the other hand, the MVRB estimator performed better than the authors' proposed estimators. However, the EVI estimator based on the synthetic data approach of \\citet{Leurgans1987} compared favourably with the MVRB estimator under strong censoring. The consistency of these estimators was proved under mild censoring. However, the asymptotic normality of these two estimators remains an open problem.\n\nFurthermore, the estimation of the EVI for the Pareto domain of attraction has also been approached from the Bayesian perspective by \\citet{Ameraoui2016}. They constructed maximum a posteriori and posterior mean estimators for various prior distributions of the EVI, namely Jeffreys', Maximal Data Information (MDI) and a conjugate Gamma prior. The asymptotic properties, namely consistency and normality of the estimators, were established. A small simulation study was used to examine the finite sample properties and the performance of the estimators. The reported simulation results showed the superiority of the maximum a posteriori estimator under the maximal data information prior. \n\nWe aim to achieve two objectives in this paper. 
Firstly, we propose some estimators of the EVI, including a reduced-bias estimator based on the exponential regression model of \\citet{Beirlant1999a}. Secondly, the researchers cited above compared their proposed estimators under different simulation conditions, and the asymptotic distributions of some of the estimators remain open problems, so a theoretical comparison is not possible. Therefore, the second objective of this paper is to compare several of the existing estimators with the proposed ones in a simulation study under identical conditions. \n\nThe rest of the paper is organised as follows. In Section \\ref{sec2}, we present the framework of extreme value analysis when data are censored. In Section \\ref{sec_sim}, a simulation comparison of the various estimators is presented. In Section \\ref{sec_prac}, we present a practical application of the estimators to estimate the extreme value index for a medical data set on the survival of AIDS patients. Lastly, concluding remarks are presented in Section \\ref{sec_conc}. \n\n\n \n\\section{Framework}\t\\label{sec2}\n\nLet $X_1,X_2,..., X_n$ be a sequence of independent and identically distributed (\\textit{i.i.d}) random variables with distribution function $F,$ and $X_{1,n} \\leq X_{2,n}\\leq ... \\leq X_{n,n}$ the associated order statistics. Therefore, the sample maximum is denoted by $X_{n,n}$. Extreme value theory (EVT) attempts to solve the problem of the possible limit distributions of $X_{n,n}$. It is well-known that the distribution of the sample maximum can be obtained from the underlying distribution of $X$ as \n\n\\begin{equation}\\label{max}\nF_{X_{n,n}}(x)=F^n(x).\n\\end{equation}\nHowever, $F$ is usually unknown and, hence, EVT focuses on the search for an approximate family of models for $F^n$ as $n\\to\\infty.$ \n\nLimiting results for $F^n$ in EVT have been addressed in the papers by \n\\citet{Fisher1928} and \\citet{Gnedenko1943}. 
Specifically, the results can be stated as follows: if there exist sequences of constants $b_n$ and $a_n>0$ $(n=1, 2,...)$, such that \\begin{equation} \\label{LimDist}\n\\lim\\limits_{n \\to \\infty}P\\left(\\frac{X_{n,n}-b_n}{a_n}\\leq x\\right)= \\Psi(x), \n\\end{equation}\nwhere $\\Psi$ is a nondegenerate distribution function, then $\\Psi$ belongs to the family of distributions,\n\n\\begin{equation} \\label{GEV}\n\\Psi_\\gamma(x)=\\left\\{\\begin{array}{ll}\n\\exp\\left(-\\left(1+\\gamma\\frac{x-\\mu}{\\sigma}\\right)^{-1\/\\gamma}\\right), & 1+\\gamma\\frac{x-\\mu}{\\sigma}>0,~ \\gamma\\ne 0,\\\\\n\\exp\\left(-\\exp\\left(-\\frac{x-\\mu}{\\sigma}\\right)\\right), & x\\in\\mathbb{R}, ~\\gamma=0,\\end{array} \\right.\n\\end{equation}\nwhere $\\mu \\in\\mathbb{R} $ and $\\sigma>0.$ The quantity $\\gamma\\in\\mathbb{R}$ is the \\emph{Extreme Value Index} (EVI) or the \\emph{tail index}: it determines the tail heaviness of the extreme value distributions. The EVI is classified into three groups, each representing one of the three families of distributions, Gumbel (exponential tails), Pareto (heavy tails) and Weibull (short tails). The groups of families have $\\gamma=0,$ $\\gamma>0$ and $\\gamma<0$ corresponding to the Gumbel, Pareto and Weibull families respectively. A distribution function $F$ satisfying (\\ref{LimDist}) is said to be in the maximum domain of attraction of $\\Psi_\\gamma,$ written as $F\\in D\\left(\\Psi_\\gamma\\right).$\n\nIn addition to (\\ref{GEV}), \\citet{Balkema1974} and \\citet{Pickands1975} showed the generalised Pareto distribution (GPD) to be the limit distribution of scaled excesses over a sufficiently large threshold. 
The GPD can be written as\n\n\\begin{equation} \\label{GPD}\n\\Lambda_{\\gamma}(x)= 1+\\ln\\Psi_\\gamma(x)=\\left\\{\\begin{array}{ll}\n1-\\left(1+\\gamma\\frac{x-\\mu}{\\sigma}\\right)^{-1\/\\gamma}, & 1+\\gamma\\frac{x-\\mu}{\\sigma}>0,~ \\gamma\\ne 0,\\\\\n1-\\exp\\left(-\\frac{x-\\mu}{\\sigma}\\right), & x\\in\\mathbb{R}, ~\\gamma=0,\\end{array} \\right.\n\\end{equation}\nwhere $\\Psi_\\gamma$ is given in (\\ref{GEV}).\n\nIn this paper, our interest is in the Pareto domain of attraction, i.e. the case $\\gamma>0.$ This family consists of distribution functions $F$ whose tails are regularly varying with a negative index of variation. That is,\n\n\\begin{equation}\\label{FtDomain 1-F}\n1-F(x)=x^{-1\/\\gamma}\\ell_F(x), ~~x\\to \\infty,\n\\end{equation}\nwhere $\\ell_F$ is the slowly varying function associated with $F.$ A slowly varying function, $\\ell,$ satisfies $\\ell(tx)\/\\ell(x)\\to 1$ as $x\\to\\infty$ for every $t>0.$ Relation (\\ref{FtDomain 1-F}) can be stated equivalently in terms of the associated upper tail quantile function $U$ as\n\\begin{equation}\\label{FtDomain U}\nU(x)=F^{-1}\\left(1-\\frac{1}{x}\\right)=x^{\\gamma}\\ell_U(x),~~ x\\to \\infty,\n\\end{equation}\nwhere $\\ell_U$ is the slowly varying function associated with $U$. \n\n\n\\subsection{EVT Conditions}\t\nThe conditions underlying the domains of attraction are presented in this section. These conditions are needed in defining estimators of tail parameters and in studying their asymptotic properties. 
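As an aside, relation (\\ref{FtDomain U}) is easy to verify numerically. The sketch below is our own illustration (the Burr parameter values are arbitrary): for a Burr distribution with survival function $\\left(\\eta\/(\\eta+z^\\tau)\\right)^\\lambda,$ the tail quantile function is $U(x)=\\left(\\eta(x^{1\/\\lambda}-1)\\right)^{1\/\\tau}$ with $\\gamma=1\/(\\tau\\lambda),$ so the slowly varying part $x^{-\\gamma}U(x)$ should stabilise (here at $\\eta^{1\/\\tau}=1$) as $x$ grows.

```python
import numpy as np

# Tail quantile function U(x) = F^{-1}(1 - 1/x) of the Burr(eta, tau, lam)
# distribution with 1 - F(z) = (eta / (eta + z^tau))^lam, for which the
# extreme value index is gamma = 1/(tau*lam). Parameter values are arbitrary.
def burr_U(x, eta=1.0, tau=2.0, lam=1.0):
    return (eta * (x ** (1.0 / lam) - 1.0)) ** (1.0 / tau)

eta, tau, lam = 1.0, 2.0, 1.0
gamma = 1.0 / (tau * lam)

# The slowly varying part ell_U(x) = x^{-gamma} U(x) stabilises as x grows.
for x in [1e2, 1e4, 1e6]:
    print(x, burr_U(x, eta, tau, lam) * x ** (-gamma))
```

For these parameter values the printed ratio approaches 1, in line with (\\ref{FtDomain U}).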
\n\n\\citet{deHaan1984} gave the following well-known necessary and sufficient condition for $F\\in D(\\Psi_\\gamma),$ known as the first-order condition or extended regular variation: \n\n\\begin{equation}\\label{cond1}\n\\lim\\limits_{u \\to \\infty}\\frac{U(ux)-U(u)}{a(u)}=h_\\gamma(x):=\\left\\{ \\begin{array}{ll}\n\\frac{x^\\gamma-1}{\\gamma} & \\mbox{if $\\gamma\\ne 0$}\\\\\n\\log{x} & \\mbox{if $\\gamma= 0$},\\end{array} \\right.\n\\end{equation}\nwhere $a$ is a positive measurable function and $x>0.$\n\nIn addition, to study the asymptotic properties of the estimators of tail parameters, the first-order condition is generally not sufficient; a second-order condition specifying the rate of convergence in (\\ref{cond1}) is also required. \n\n\nIn the literature, the second-order condition can be stated in terms of $U$ \\citep[see e.g.][]{deHaan2006,Gomes2008}, or, equivalently, in terms of the rate of convergence of the slowly varying function, $\\ell,$ in (\\ref{FtDomain U}). \\citet[page 602]{Beirlant1999a} state it as follows:\nthere exist a real constant $\\rho<0$ and a rate function $b$ satisfying $b(x)\\to 0$ as $x\\to \\infty,$ such that for all $\\lambda\\geq 1$, \n\n\\begin{equation}\\label{2nd s.v}\n\\lim\\limits_{x\\to \\infty}\\frac{\\log\\ell(\\lambda x)-\\log\\ell(x)}{b(x)}=\\kappa_\\rho(\\lambda),\n\\end{equation}\nwhere $\\kappa_\\rho(\\lambda)=\\int_{1}^{\\lambda}u^{\\rho-1}du.$ \n\n\n\n\\subsection{General Estimation under Censored Data}\\label{EVTCEN_genEst}\n\nLet the random variable of interest be $X$ with distribution function $F.$ Since samples on $X$ may not be fully observed, we introduce another positive random variable $C,$ independent of $X,$ with distribution function $G.$ In this setting, we then observe $\\left(Z_i,\\delta_i\\right), i=1,\\ldots,n$ with\n\n\\begin{equation} \\label{Zi}\nZ_i=\\mbox{min}\\left(X_i, C_i\\right)\n\\end{equation}\n\nand \n\n\\begin{equation}\n\\delta_i=\\left\\{ \\begin{array}{ll}\n1 & 
\\mbox{if $X_i \\leq C_i$};\\\\\n0 & \\mbox{if $X_i > C_i$}.\\end{array} \\right.\n\\end{equation}\nHere, $\\delta_i$ is a variable indicating whether $Z_i$ is censored or not. Let $H$ be the distribution function of $Z$ defined in (\\ref{Zi}). Thus, by the independence assumption on the random variables $X$ and $C,$ we have $1-H=(1-F)(1-G).$\n\nIn addition, let $\\vartheta_F=\\sup{\\{x: F(x)<1\\}}$ be the right endpoint of the underlying distribution function, $F.$ Similarly, let $\\vartheta_G~ \\mbox{and}~ \\vartheta_H$ be the right endpoints of the underlying distribution functions of $C$ and $Z$ respectively. If we assume $F\\in D(\\Psi_{\\gamma_1})$ and $G\\in D(\\Psi_{\\gamma_2})$ for some real numbers, $\\gamma_1~\\mbox{and}~\\gamma_2,$ then $H\\in D(\\Psi_{\\gamma})$ where $\\gamma \\in \\mathbb{R}.$ \\citet{Einmahl2008} considered the following three combinations of $\\gamma_1$ and $\\gamma_2:$\n\n\\begin{enumerate}[\\hspace{0.5cm}\\textbf{Case} \\bfseries 1.]\n\t\\item $ F~ \\mbox{and}~ G~ \\mbox{are Pareto types:~~}~\\gamma_1>0, ~\\gamma_2>0 ~~~~~~~~~~~~~~~~~~~~~~~~~\\to~ \\gamma=\\frac{\\gamma_1\\gamma_2}{\\gamma_1+\\gamma_2}$\n\t\\item $ F~ \\mbox{and}~ G~ \\mbox{are Gumbel types:}~\\gamma_1=0, ~\\gamma_2=0, ~\\vartheta_F=\\vartheta_G ~~~~~~~\\to~~\\gamma=0$\n\t\\item $ F~ \\mbox{and}~ G~ \\mbox{are Weibull types:} ~\\gamma_1<0, ~\\gamma_2<0,~ \\vartheta_F=\\vartheta_G <\\infty~ \\to ~\\gamma=\\frac{\\gamma_1\\gamma_2}{\\gamma_1+\\gamma_2}.$\n\\end{enumerate}\nThe other two possibilities are $\\{\\gamma_1>0, ~\\gamma_2<0\\}$ and $\\{\\gamma_1<0, ~\\gamma_2>0\\}$: the former corresponds closely to the completely noncensored case, which has been studied widely, whereas the latter corresponds closely to the completely censored case, where estimation is impossible. 
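In Case 1 the value of $\\gamma$ follows directly from $1-H=(1-F)(1-G).$ The short sketch below, our own illustration with standard Pareto tails and arbitrarily chosen indices, checks this identity numerically.

```python
import numpy as np

# Case 1 with standard Pareto tails: 1 - F(z) = z^{-1/g1}, 1 - G(z) = z^{-1/g2},
# so 1 - H(z) = (1 - F(z))(1 - G(z)) = z^{-(1/g1 + 1/g2)}, i.e. Z = min(X, C)
# has EVI gamma = g1*g2/(g1 + g2). Values of g1, g2 are arbitrary.
g1, g2 = 0.5, 1.0
gamma = g1 * g2 / (g1 + g2)

z = np.logspace(1, 6, 6)                       # evaluation points
surv_H = z ** (-1.0 / g1) * z ** (-1.0 / g2)   # (1 - F)(1 - G)
print(np.allclose(surv_H, z ** (-1.0 / gamma)))
```

Here the product of the survival functions coincides with $z^{-1\/\\gamma}$ for $\\gamma=\\gamma_1\\gamma_2\/(\\gamma_1+\\gamma_2).$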
\n\n\\subsection{Extreme Value Index Estimation Methods }\\label{EVI_CEN}\n\nThe estimation of the extreme value index (EVI) when observations are censored needs some modification from that of the complete sample. This is because the observed sample is $(Z_i,\\delta_i),~i=1,\\ldots, n,$ and hence, the application of the classical EVI estimation methods will yield estimators that converge to $\\gamma,$ the EVI of the underlying distribution of the random variable $Z.$ However, our interest is in $\\gamma_1,$ the EVI of the underlying distribution of the random variable $X.$ Therefore, some modification is needed to adapt the estimation of $\\gamma$ from the $Z$ sample to estimate $\\gamma_1.$ \n\nThe existing methodologies for estimating the EVI under right censoring can be grouped into four categories: \n\n\\begin{enumerate}[\\hspace{0.5cm}\\bfseries 1.]\n\t\\item\tadapting a classical EVI estimator by dividing it by the proportion of noncensored observations \\citep{Beirlant2007,Einmahl2008,Gomes2011};\n\t\\item\tadapting the likelihood function of an extreme value distribution \\citep{Beirlant2010};\n\t\\item censored regression \\citep{Worms2014};\n\t\\item Bayesian estimation \\citep{Ameraoui2016,Beirlant2017}.\n\\end{enumerate}\nIn this paper, we consider the frequentist methods only, i.e. the first three categories. These methods are presented in the three sub-sections that follow, together with four further estimators that we propose for the censored case. \n\n\\subsubsection{First Method}\nThe first method was introduced in \\citet{Beirlant2007} and further developed by \\citet{Einmahl2008}. In this method a classical estimator of the EVI is obtained from the $Z$ sample and then adapted to censoring. 
Among these estimators are: the maximum likelihood estimator from the Peaks-Over Threshold (POT) method and the moment estimator \\citep{Beirlant2007}; the Hill, moment, generalised Hill and POT maximum likelihood estimators \\citep{Einmahl2008}; and the Hill, moment, mixed moment and generalised Hill estimators \\citep{Gomes2011}. In addition, \\citet{Einmahl2008} provides a uniform way to establish the asymptotic normality of the proposed estimators of the EVI (i.e. the Hill, moment, generalised Hill and maximum likelihood estimators). These estimators are reviewed below in terms of the random variable $Z,$ and thus estimate $\\gamma,$ the EVI of $Z.$ \n\n\n\\textbf{The Hill Estimator}: The Hill estimator \\citep{Hill1975} is arguably the most common estimator of $\\gamma$ in the Pareto case, i.e. $\\gamma>0.$ The Hill estimator is defined for the $(k+1)$-largest order statistics as\n\n\\begin{equation}\\label{Hill}\n\\hat{\\gamma}^{(Hill)}_{Z,k,n}=\\frac{1}{k}\\sum_{j=1}^{k}\\left(\\log{Z_{n-j+1,n}}-\\log{Z_{n-k,n}}\\right). \n\\end{equation}\n\nThe properties of the Hill estimator have been studied widely and its attractive properties include consistency \\citep{Mason1982} and asymptotic normality \\citep{Hall1982,deHaan1998}. 
\n\n\\textbf{The Generalised Hill Estimator}: \\citet{Beirlant1996} proposed the generalised Hill (UH) estimator in a bid to extend the Hill estimator to the case where $\\gamma \\in \\mathbb{R}.$ The UH estimator is obtained as the slope of the ultimately linear part of the generalised Pareto quantile plot, \n\\begin{equation}\\label{GPQ}\n\\left(-\\log{\\left(\\frac{j+1}{n+1}\\right)},~ \\log{(Z_{n-j,n}H_{Z,j,n})}\\right), j=1,2, ..., n-1,\n\\end{equation}\nwhere $H_{Z,j,n}$ denotes the Hill estimator (\\ref{Hill}) computed from the top $j$ order statistics. The UH estimator is given by\n\\begin{equation}\\label{UH}\n\\hat{\\gamma}^{(UH)}_{Z,k,n}= \\frac{1}{k}\\sum_{j=1}^{k}\\log{ UH}_{Z,j,n}-\\log UH_{Z,k+1,n}, \n\\end{equation}\nwhere $UH_{Z,j,n}=Z_{n-j,n}\\left( \\frac{1}{j}\\sum_{i=1}^{j}\\log{Z_{n-i+1,n}}-\\log{Z_{n-j,n}}\\right).$\n\n\n\n\\textbf{The Minimum-Variance Reduced-Bias Estimator}: \\citet{Caeiro2005} proposed the Minimum-Variance Reduced-Bias (MVRB) estimator for heavy-tailed distributions belonging to the Hall class \\citep{Hall1982} of models. The estimator is a direct modification of the Hill estimator using the second-order parameters to reduce bias. It has the added advantage of having the same asymptotic variance as the Hill estimator. The MVRB estimator is obtained by using the second-order condition (\\ref{2nd s.v}) with $b(u)=\\gamma\\beta u^\\rho.$ It is given by\n\n\\begin{equation}\\label{MVRB}\n\\hat{\\gamma}^{(MVRB)}_{Z,k,n}= \\hat{\\gamma}^{(Hill)}_{Z,k,n} \\left(1-\\frac{\\hat{\\beta}}{1-\\hat{\\rho}}\n\\left(\\frac{k}{n}\\right)^{-\\hat{\\rho}}\\right),\n\\end{equation}\n\nwhere $\\hat{\\gamma}^{(Hill)}_{Z,k,n}$ is the Hill estimator in (\\ref{Hill}) and the pair $(\\hat{\\beta}, \\hat{\\rho})$ is the estimator of the pair of parameters $(\\beta, \\rho)$ of the second-order auxiliary function $b.$ \n\n\\textbf{The Moment Estimator}: \\citet{Dekkers1989} introduced another estimator, known as the moment estimator, as an adaptation of the Hill estimator valid for all domains of attraction. 
The moment estimator is defined for $k\\in \\{2, ..., n-1\\}$ and is given by\n\n\\begin{equation}\\label{MOM}\n\\hat{\\gamma}^{(MOM)}_{Z,k,n}= M_{Z,k,n}^{(1)}+1-\\frac{1}{2}\\left(1-\\frac{(M_{Z,k,n}^{(1)})^2}{M_{Z,k,n}^{(2)}}\\right)^{-1},\n\\end{equation}\n\nwhere\n\\begin{equation}\\label{Mj}\nM_{Z,k,n}^{(j)}=\\frac{1}{k}\\sum_{i=1}^{k}(\\log{Z_{n-i+1,n}}-\\log{Z_{n-k,n}})^j, ~ j=1,2.\n\\end{equation}\n\n\n\n\\subsubsection*{Adapting EVI Estimators} \n\\cite{Beirlant2007} and \\citet{Einmahl2008} proposed that the EVI estimators for the complete sample, $\\hat{\\gamma}^{(.)}_{Z,k,n}$ (i.e. (\\ref{Hill})--(\\ref{PMoM})), can be adapted to censoring by dividing each estimator by the proportion of noncensored observations, $\\hat{\\wp},$ in the $k$ largest $Z$ observations. Thus, the estimator of $\\gamma_1$ is given by\n\n\\begin{equation}\\label{adapt_Z}\n\\hat{\\gamma}_1=\\hat{\\gamma}_{Z,k,n}^{(c, .)} =\\frac{\\hat{\\gamma}_{Z,k,n}^{(.)}}{\\hat{\\wp}}.\n\\end{equation}\nHere, $\\hat{\\wp}$ is given by \n\\begin{equation} \n\\hat{\\wp}=\\frac{1}{k}\\sum_{i=1}^{k}\\delta_{n-i+1, n},\n\\end{equation}\nwhere $\\delta_{i,n},~ i=1, ..., n$ are the $\\delta$-values corresponding to $Z_{i,n},~ i=1, ..., n$ respectively. In the literature, (\\ref{adapt_Z}) has primarily been used to adapt the EVI estimators to censoring.\n\n\n\n\\subsubsection{Second Method}\n\nThe second method, introduced by \\citet{Beirlant2010}, involves using the POT method and adapting the log-likelihood function for censoring. We know from (\\ref{GPD}) that the distribution of the excesses $V_j=Z_i-u,~ j=1, \\ldots, k,$ over a sufficiently high threshold $u$ (i.e. for those $Z_i>u,~ i=1, \\ldots, n$) can be approximated by the generalised Pareto (GP) distribution. In \\citet{Beirlant2007} and \\citet{Einmahl2008}, the maximum likelihood estimator, $\\hat{\\gamma}^{(c,POT)}_{Z,k,n},$ is obtained from the GP approximation of the distribution of the $V_j$'s and is adapted to censoring using (\\ref{adapt_Z}). 
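The adaptation (\\ref{adapt_Z}) is simple to sketch; the illustration below is our own (the Pareto censoring model, parameter values and seed are arbitrary) and uses the Hill estimator as the classical estimator being adapted.

```python
import numpy as np

# Adaptation (\ref{adapt_Z}): divide a classical EVI estimate computed from the
# Z sample by the proportion of noncensored observations among the top k.
# Assumes delta[i] = 1 when Z_i is noncensored (X_i <= C_i).
def adapted_hill(z, delta, k):
    order = np.argsort(np.asarray(z, dtype=float))
    z_sorted = np.asarray(z, dtype=float)[order]
    d_sorted = np.asarray(delta)[order]
    gamma_z = np.mean(np.log(z_sorted[-k:])) - np.log(z_sorted[-k - 1])
    p_hat = np.mean(d_sorted[-k:])   # proportion noncensored in the top k
    return gamma_z / p_hat

# Usage: X Pareto with gamma_1 = 0.5 censored by C Pareto with gamma_2 = 1,
# so Z has gamma = 1/3 and the top-k noncensored proportion tends to 2/3.
rng = np.random.default_rng(2)
x = (1.0 - rng.uniform(size=5000)) ** (-0.5)
c = (1.0 - rng.uniform(size=5000)) ** (-1.0)
z, delta = np.minimum(x, c), (x <= c).astype(int)
print(adapted_hill(z, delta, k=500))   # should be close to gamma_1 = 0.5
```

Dividing $\\hat{\\gamma}_{Z,k,n}\\approx\\gamma$ by $\\hat{\\wp}\\approx\\gamma\/\\gamma_1$ recovers $\\gamma_1,$ as in Case 1.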
\n\nAn alternative approach in \\citet{Beirlant2010} involves adapting the likelihood function of the random variables $V_j, j=1, \\ldots,k$, \n\n\\begin{equation}\\label{c.potlik}\nL(\\gamma_1, \\sigma_{1,k})=\\prod_{j=1}^{k}[\\lambda(V_j)]^{\\delta_j}[1-\\Lambda(V_j)]^{1-\\delta_j},\n\\end{equation}\nwhere $\\Lambda$ is the GP distribution function and $\\lambda$ the corresponding density function. However, there are difficulties with obtaining explicit expressions for the maximum likelihood estimators of $\\gamma_1$ and $\\sigma_{1,k}.$ In addition, their asymptotic properties remain an open problem. As a result, \\citet{Beirlant2010} proposed solving the maximum likelihood equations using one-step approximations based on the Newton-Raphson algorithm. The resulting estimator of the parameters is given by \n\n\\begin{equation}\\label{Newton}\n\\left( \\begin{array}{c}\n\\hat{\\gamma}_{Z,k,n}^{(c,POT.L)}\\\\\\\\\n\\frac{\\hat{\\sigma}_{Z,k}^{(c,POT.L)}}{\\sigma_{1,k}}\n\\end{array}\\right) =\\left( \\begin{array}{c}\n\\hat{\\gamma}_{Z,k,n}^{(c, I)}\\\\\\\\\n\\frac{\\hat{\\sigma}_{Z,k}^{(c,I)}}{\\sigma_{1,k}}\n\\end{array}\\right)-\n\\left(\n\\begin{array}{cc}\nL_{11}^{''} & \\sigma_{1,k}L_{12}^{''}\\\\\\\\\n\\sigma_{1,k}L_{12}^{''} & \\sigma_{1,k}^{2}L_{22}^{''}\n\\end{array}\n\\right)^{-1} \\left( \n\\begin{array}{c}\nL_{1}^{'} \\\\\\\\\n\\sigma_{1,k}L_{2}^{'}\n\\end{array}\n\\right),\n\\end{equation} \nwhere $L_{i}^{'}$ and $L_{ij}^{''},~ i=1,2,~j=1,2,$ are the first and second derivatives of $\\log{L(\\gamma_1, \\sigma_{1,k})},$ evaluated at $\\left(\\hat{\\gamma}_{Z,k,n}^{(c, I)},~ \\hat{\\sigma}_{Z,k}^{(c,I)}\\right).~$ The estimators, $\\hat{\\gamma}_{Z,k,n}^{(c, I)}~ \\mbox{and}~\\hat{\\sigma}_{Z,k}^{(c,I)},$ are the initial estimators and must be asymptotically normal. The authors state that the moment estimator provides a good example of the initial estimators. 
The performance of the estimators, $\\hat{\\gamma}_{Z,k,n}^{(c,POT.L)}$ and $\\hat{\\sigma}_{Z,k}^{(c,POT.L)},$ was found to be close to that of the maximum likelihood estimators obtained from (\\ref{c.potlik}). In addition, the asymptotic normality of the one-step Newton-Raphson estimators obtained in (\\ref{Newton}) has been established in that paper.\n\n\\subsubsection{Third Method} \nThe third method, introduced by \\citet{Worms2014}, is based on the censored regression method of \\citet{Koul1981}. The estimators are valid for estimating the EVI for distributions in the Pareto domain of attraction. From the well-known result of deriving the Hill estimator from the mean excess function, they define an adaptation of the classical Hill estimator valid for Case 1 as\n\\begin{equation}\\label{WW.KL}\n\\hat{\\gamma}_{Z,k,n}^{(c, WW.KM)} := \\frac{1}{n(1-\\hat{F}(Z_{n-k,n}))}\\sum_{j=1}^{k}\\frac{\\delta_{n-j+1,n}}{1-\\hat{G}(Z_{n-j+1,n}^-)}\\log{\\left(\\frac{Z_{n-j+1,n}}{Z_{n-k,n}}\\right)},\n\\end{equation}\nwhere $\\hat{F}$ and $\\hat{G}$ are the Kaplan-Meier estimators for $F$ and $G$ respectively, defined for arguments below the sample maximum, $Z_{n,n}.$ The estimator (\\ref{WW.KL}) is valid for $\\gamma>0,$ and hence, for $\\gamma_1>0.$ \n\nWe now turn to the estimators we propose, beginning with a reduced-bias estimator based on the exponential regression model. \\citet{Beirlant1999a} provide an approximate representation for the log-spacings of successive order statistics: \n\n\\begin{equation}\\label{L-spacings}\nR_j=j(\\log{Z_{n-j+1,n}}-\\log{Z_{n-j,n}})\\sim \\left( \\gamma+b_{n,k}\\left(\\frac{j}{k+1}\\right)^{-\\rho} \\right)E_j, ~j=1, \\ldots, k,\n\\end{equation}\nwhere $E_j, ~j=1, ..., k$ are standard exponential random variables, $b_{n,k}=b\\left((n+1)\/(k+1)\\right)\\in \\mathbb{R}~ (\\mbox{also}~ b_{n,k}\\to 0,~\\mbox{as}~k,n\\to \\infty)$ and $\\rho$ are second-order parameters from (\\ref{2nd s.v}). From the approximate distribution of log-spacings (\\ref{L-spacings}), a likelihood function can be formed. 
Maximisation of the likelihood function leads to the maximum likelihood estimators $\\hat{\\gamma}_{Z,k,n}^{(ERM)}, \\hat{b}_{n,k}~ \\mbox{and}~ \\hat{\\rho}$ of $\\gamma, ~b_{n,k}~\\mbox{and}~ \\rho$ respectively. We note that (\\ref{L-spacings}) simplifies to $R_j \\sim \\gamma E_j, ~ j=1, ..., k,$ if $ b_{n,k}=0,$ in which case the resulting maximum likelihood estimator is the usual Hill estimator. \n\nThe maximum likelihood estimator, $\\hat{\\gamma}_{Z,k,n}^{(ERM)},$ of $\\gamma$ is adapted to censoring to obtain an estimator of $\\gamma_1$ using (\\ref{adapt_Z}). Moreover, the estimation of $\\gamma$ leads to concurrent estimates of the second-order parameters, $\\hat{b}_{n,k}~ \\mbox{and}~ \\hat{\\rho}.$ These estimators can be adapted to censoring and used to obtain reduced-bias estimators for quantiles and exceedance probabilities.\n\nIn addition, we propose adapting the Zipf estimator of\n\\citet{Kratz1996}. This estimator is a smoother version of the Hill estimator and is valid for $\\gamma>0.$ The estimator is obtained through an unconstrained least squares minimisation involving the $k$ largest observations on the Pareto quantile plot,\n\\[\nL(\\gamma, \\eta)=\\sum_{j=1}^{k}\\left(\\log Z_{n-j+1,n}-\\eta+\\gamma\\log{\\frac{j+1}{n+1}}\\right)^2,\n\\]\nwith respect to $\\eta$ and $\\gamma.$ This results in the Zipf estimator given by\n\n\\begin{equation}\\label{Zipf}\n\\hat{\\gamma}^{(Zipf)}_{Z,k,n}= \\frac{\\frac{1}{k}\\sum_{j=1}^{k}\\left(\\log{\\frac{k+1}{j+1}}-\\frac{1}{k}\\sum_{i=1}^{k}\\log{\\frac{k+1}{i+1}}\\right)\\log {Z_{n-j+1,n}}}{\\frac{1}{k}\\sum_{j=1}^{k}\\log^2{\\frac{k+1}{j+1}}-\\left(\\frac{1}{k}\\sum_{i=1}^{k}\\log{\\frac{k+1}{i+1}}\\right)^2}. 
\n\\end{equation}\nThe estimator, $\\hat{\\gamma}^{(Zipf)}_{Z,k,n},$ in (\\ref{Zipf}) is adapted to censoring using (\\ref{adapt_Z}), yielding $\\hat{\\gamma}^{(c,Zipf)}_{Z,k,n}.$\n\n\nFurthermore, the popularity of the moment estimator, (\\ref{MOM}), has led to the development of a couple of variants to deal with its shortcomings. In the case where there is no censoring, the moment ratio estimator \\citep{Danielsson1996} and Peng's moment estimator \\citep{Deheuvels1997} are examples of these estimators. We present these estimators and propose their adaptation to the case where observations are subject to random right censoring.\t\n\nThe moment ratio estimator, unlike the moment estimator (\\ref{MOM}), is valid for the Pareto domain of attraction only. It is given by\n\n\\begin{equation}\\label{MoMR}\n\\hat{\\gamma}^{(MomR)}_{Z,k,n}=\\frac{1}{2}\\frac{M_{Z,k,n}^{(2)}}{M_{Z,k,n}^{(1)}},\n\\end{equation}\nwhere $M_{Z,k,n}^{(j)},~j=1,2,$ is defined in (\\ref{Mj}). The moment ratio estimator has been shown to have a smaller asymptotic bias than the Hill estimator and a moderate mean square error at the same value of $k$ \\citep{Danielsson1996}. \n\nPeng's moment estimator is designed to reduce the bias of the moment estimator and is given by\n\\begin{equation}\\label{PMoM}\n\\hat{\\gamma}^{(PMom)}_{Z,k,n}= \\frac{1}{2}\\frac{M_{Z,k,n}^{(2)}}{M_{Z,k,n}^{(1)}}+1-\\frac{1}{2}\\left(1-\\frac{(M_{Z,k,n}^{(1)})^2}{M_{Z,k,n}^{(2)}}\\right)^{-1}.\n\\end{equation}\nThis estimator is valid for all domains of attraction and was shown to be asymptotically normal under appropriate conditions on $k.$\n\nIn the case where observations are subject to censoring, we also adapt the estimators (\\ref{MoMR}) and (\\ref{PMoM}) using (\\ref{adapt_Z}).\n\n\n\n\\section{Simulation Study}\\label{sec_sim}\n\nTo investigate and compare the performance of different EVI estimators, we make use of simulation. The simulation study is grouped into two categories: point and confidence interval estimation. 
The former involves assessing the performance of the estimators in terms of the Median Absolute Deviation (MAD) and median bias. The latter case consists of diagnostic checks on 95\\% confidence intervals based on the coverage probabilities and interval lengths. \n\nWe consider the following combination of factors in the simulation: distributions, sample sizes, threshold levels and proportions of censoring. Several sample sizes, $n=500, 1000, 2000$ and $5000,$ were considered, with the number of top order statistics taken as 10\\%, 20\\% and 30\\% of the sample size. However, the results did not differ much and hence, for brevity and ease of presentation, we report results for samples of size $n=1000$ with the number of top order statistics taken as 10\\% of the sample size. \n\nData were generated from the three distributions presented in Table \\ref{Dist}.\n\\begin{table}[htp!]\n\t\\centering\n\t\\caption{Distributions}\n\t\\begin{tabular}{ccc}\n\t\t\\toprule\n\t\tDistribution & $1-F(z)$ & $\\gamma$ \\\\ \\hline\n\t\t& & \\\\\n\t\tBurr ($\\eta, \\tau, \\lambda$) & $\\left(\\eta\/(\\eta+z^\\tau)\\right)^\\lambda,~~~z>0;\\eta,\\lambda,\\tau>0 $ & $\\frac{1}{\\tau\\lambda}$ \\\\\n\t\t& & \\\\\n\t\tPareto ($\\alpha$) & $z^{-\\alpha},~~~z>1;\\alpha>0$ & $\\frac{1}{\\alpha}$ \\\\\n\t\t& & \\\\\n\t\tFr\\'{e}chet ($\\alpha$) & $1-\\exp{\\left(-z^{-\\alpha}\\right)},~~~z>0;\\alpha>0$ & $\\frac{1}{\\alpha}$ \\\\ \\bottomrule\n\t\\end{tabular}\n\t\\label{Dist}\n\\end{table}\nWith regard to the proportion of censoring in the right tail, we consider three values: 0.10 (small), 0.35 (medium) and 0.65 (large). This allows us to study the performance of the estimators as censoring increases or decreases. \n\n\\subsection{Simulation Design}\\label{comp_EVIQp}\n\nIn this section, we examine the procedure for measuring the performance of point and interval estimators of the EVI. 
In the case of point estimators, the median of $R~(R=1000)$ repetitions was used as the point estimate of $\\gamma_1,$ and the MAD and median bias were obtained as the performance measures.\n\nOn the other hand, the comparison of the confidence intervals is based on two properties: interval length and coverage probability. Before we introduce the simulation algorithm to compute the diagnostics of the confidence intervals, we present a procedure known as the conditional block bootstrap for obtaining samples for extreme value analysis in the case of censoring.\n\n\\subsubsection{Conditional Block Bootstrap for Censored Data}\\label{cond.bootstap}\nIn order to obtain the performance measures, coverage probability and average interval length, bootstrap samples are required. However, as stated in Section \\ref{EVTCEN_genEst}, two scenarios in EVT in the case of censoring are to be avoided in this study. Firstly, if none of the observations are censored (i.e. as can happen in cases where $\\gamma_1<0$ and $\\gamma_2>0$), then the classical EVT estimation techniques apply. This has been widely studied in the literature and is not of interest in this paper. Secondly, for a completely censored case (which can occur when $\\gamma_1>0$ and $\\gamma_2<0$) the estimation of the EVI and the other extreme events is impossible. \nTherefore, any bootstrap procedure implemented for the estimation of parameters of extreme events for censored data must be constrained to exclude the above scenarios, particularly where the estimation is impossible. However, the standard bootstrap sampling of \\citet{Efron1993} and the bootstrap for censored data of \\citet{Efron1981} do not guarantee the exclusion of these two scenarios. \n\nWe present here a bootstrap procedure, termed the ``conditional block bootstrap\", for selecting bootstrap samples that exclude the two scenarios in statistics of extremes when data is subject to random censoring. 
The conditional block bootstrap is a combination of ideas from the moving block bootstrap \\citep{Efron1993} and the bootstrap for censored data \\citep{Efron1981}. \n\nIn this procedure, the censored data is grouped into randomly chosen blocks, and it is crucial that each block contains at least one censored observation. This ensures that the first scenario is eliminated from each generated bootstrap sample. The bootstrap observations are obtained by repeatedly sampling with replacement from these blocks and placing them together to form the bootstrap sample. Enough blocks must be sampled to obtain approximately the same sample size as the original censored sample. \n\nGiven a sample of size $n,$ a proportion of censoring in the right tail, $\\wp,$ and assuming $\\wp\\le 0.5,$ the conditional block bootstrap procedure is as follows: \n\n\\begin{enumerate}\n\t\n\t\\item Group the $n$ observations into two groups, namely censored and noncensored, with sample sizes $n_c$ and $n_{\\bar{c}}$ respectively. Thus, $\\wp=n_c\/n.$\n\t\n\t\\item Let $d~(d\\ge1)$ denote the number of censored observations to be included in each block. The size of each block, $s,$ is obtained as $(n\\times d)\/n_c.$ If $s$ is not an integer, then let $s=\\lceil(n\\times d)\/n_c\\rceil.$\n\t\n\t\\item The number of blocks, $m,$ is chosen such that $n\\approxeq m\\times s.$ In the case $n=m\\times s,$ the blocks will have the same number of observations. Otherwise, if $n\\approx m\\times s,$ then $m$ is taken as $\\lceil n\/s\\rceil,$ in which case the first $m-1$ blocks are allocated $s$ observations each and the remaining $n-s(m-1)$ observations are allocated to the $m$th block. \n\t\n\t\\item Let $b_i,~i=1,\\ldots,m$ denote the $m$ blocks. Assign observations to each block by randomly sampling $s-d$ observations without replacement from the noncensored group. 
In addition, randomly sample $d$ observations without replacement from the censored group and assign them to each block $b_i,~i=1,\\ldots,m.$ Thus, each block contains $d$ censored and $s-d$ noncensored observations.\n\t\n\t\\item Sample $m$ times with replacement from $b_1, b_2,\\ldots, b_m$ and place the sampled blocks together to form the bootstrap sample. Note that more than $m$ blocks may be sampled in the case $n\\approx m\\times s,$ so that the bootstrap sample size is approximately equal to the original sample size, $n.$ \n\t\n\t\\item Repeat (5) a large number of times, $B,$ to obtain $B$ bootstrap samples.\n\t\n\\end{enumerate}\n\nIn the case $\\wp> 0.5,$ the above procedure can be used to constitute the blocks. However, the allocations should be done such that each block contains at least one noncensored observation.\n\n\n\\subsubsection{Simulation algorithm}\nThe following algorithm is used to obtain performance measures of the estimators of $\\gamma_1:$\n\\begin{enumerate}[\\hspace{0.5cm}\\textbf{A}1.]\n\t\n\t\\item Generate $n$ observations from $Y$ and $C$ respectively, and hence, obtain $Z^{(1)}=\\{Z_1,\\ldots,Z_n\\}$ and $\\delta^{(1)}=\\{\\delta_1,\\ldots,\\delta_n\\}.$ Repeat a large number of times, $R-1~(R=1000),$ to obtain $R$ pairs of samples $(Z^{(i)}, \\delta^{(i)}),~i=1,\\ldots,R.$ \n\t\n\t\\item Select the pair of samples, $(Z^{(1)}, \\delta^{(1)}).$ Draw $B~(B=1000)$ bootstrap samples each of size $n$ using the conditional block bootstrap procedure in Section \\ref{cond.bootstap}. \\label{CP: Step 2}\n\t\n\t\\item Compute the bootstrap replicates, $\\hat{\\gamma}^{*(c,.)}_{1,1}, \\ldots, \\hat{\\gamma}^{*(c,.)}_{1,B},$ using the estimators of $\\gamma_1.$ \\label{CP: Step 3}\n\t\n\t\\item Compute the $100(1-\\alpha)\\%$ bootstrap confidence interval. 
\\label{CP: Step 4}\n\t\n\t\\item Repeat A\\ref{CP: Step 2} through to A\\ref{CP: Step 4} for the remainder of the pairs of samples, $(Z^{(j)}, \\delta^{(j)}), ~j=2, \\ldots, R,$ to obtain $R$ confidence intervals for $\\gamma_1.$ \\label{CP: Step 5}\n\t\n\t\\item Compute the properties of the confidence intervals, i.e., coverage probability and average interval length, using the $R$ confidence intervals in A\\ref{CP: Step 5}.\n\t\n\\end{enumerate}\n\n\\subsection{Results and Discussions}\n\nIn this section, we discuss the results of the simulation study for each distribution. General comments across the various distributions are presented in the last section. The simulation results for the Burr, Pareto and Fr\\'{e}chet distributions are presented in Appendices A, B and C respectively. In most cases, estimators having small values of MAD and median bias generally give better coverage probability and interval length. Therefore, our performance criterion focuses on the coverage probability (CP) and interval lengths. Generally, we regard a good estimator as having a coverage probability of at least 0.90 and a reasonable interval length among such estimators.\n\n\\subsubsection{Burr Distribution}\n\\begin{itemize}\n\t\\item \\textbf{For $\\gamma_1=0.1:$}\n\t\n\tThe ERM estimator is undoubtedly the best confidence interval estimator of $\\gamma_1=0.1$ as it has small bias and MAD, CP approximately equal to the nominal level, and a shorter average confidence interval length. For a percentage of censoring in the right tail of $\\wp=10\\%$ (or more generally $\\wp\\le 10\\%$), other estimators of $\\gamma_1=0.1,$ including MOM, PMom and occasionally POT.L, have good CP values. However, these estimators have wider average interval lengths compared to the ERM estimator. Moreover, in the case of $\\wp>10\\%,$ ERM is the only estimator that has coverage probability close to the nominal level and a shorter confidence interval length. 
Also, POT.L has good CP values but larger interval lengths, and hence, it is not recommended for estimating $\\gamma_1=0.1.$ The apparent poor performance of most of the estimators of $\\gamma_1$ may be due to the second-order parameter $\\rho$ being close to $0.$\n\t\n\t\\item \\textbf{For $\\gamma_1=0.5:$}\n\t\n\tHill, MVRB, Zipf, WW.KM and WW.L are the best interval estimators for a small percentage of censoring ($\\wp\\le 10\\%$). These estimators have CP values close to the nominal level and small average interval lengths. As the percentage of censoring increases, the MomR, ERM and POT.L estimators have the best CP values: the other estimators have poor coverage. In the case of large $\\wp$ values, ERM and POT.L are the top two estimators of $\\gamma_1=0.5.$ Overall, ERM and POT.L are the estimators which have good CP values and can be considered for estimating $\\gamma_1=0.5.$ However, POT.L has wider interval lengths and may not be appropriate for estimating $\\gamma_1=0.5.$\n\t\n\t\\item \\textbf{For $\\gamma_1=0.9:$}\n\t\n\tMost of the confidence interval estimators perform very well for the estimation of $\\gamma_1=0.9$ compared with $\\gamma_1 \\le 0.5.$ The Hill, MVRB, WW.L, ERM and POT.L estimators generally give CP values close to the desired level of 0.95 regardless of the percentage of censoring in the right tail. Among these estimators, POT.L has the largest average interval length followed by ERM. In addition, WW.KM and MomR are much better than the preceding estimators in terms of the average confidence interval lengths. In particular, the MomR estimator is the best estimator of the EVI when there is heavy censoring: it has the smallest interval length among the estimators having CP values of approximately 0.95. However, its CP is worse at lower levels of censoring. 
\n\\end{itemize}\n\n\\subsubsection{Pareto Distribution}\n\n\\begin{itemize}\n\t\n\t\\item \\textbf{For $\\gamma_1=0.1:$}\n\tIn this case, regardless of the percentage of censoring in the right tail, only a few estimators of $\\gamma_1$ have CP values close to the nominal level and moderate interval lengths. These include UH, MOM, PMom and POT. The rest of the estimators have poor CP values close to zero, except ERM and POT.L. However, POT.L has a larger interval length, and hence, may not be an appropriate estimator of $\\gamma_1.$ Thus, UH, MOM, PMom and POT are the most robust to censoring when estimating $\\gamma_1=0.1.$\n\t\n\t\n\t\\item \\textbf{For $\\gamma_1=0.5:$}\n\t\n\tIn the case of the estimation of $\\gamma_1=0.5,$ more estimators satisfy the CP-interval length criterion when compared to $\\gamma_1=0.1.$ Estimators such as UH, ERM, MOM, PMom, POT and POT.L mostly have high CP values close to 0.95 regardless of the value of $\\wp.$ Again, the POT.L estimator has the largest interval length. Overall, MOM and ERM are the preferred estimators as they have better CP values and moderate interval lengths compared with the others. \n\t\n\t\\item \\textbf{For $\\gamma_1=0.9:$}\n\t\n\tFor a small percentage of censoring in the right tail, $\\wp=10\\%,$ most of the estimators have good CP values. The exceptions to this include UH, PMom, WW.KM and WW.L. Also, when $\\wp=0.35$ and $0.65,$ the WW.KM, MOM, POT, POT.L and ERM estimators have good CP values and relatively moderate interval lengths. However, POT.L always has the largest interval length, at least twice that of the estimator with the shortest interval length. Therefore, ERM, MOM and POT can be considered as more robust for the estimation of $\\gamma_1=0.9,$ as $\\wp$ increases. 
\n\t\n\t\n\\end{itemize}\n\n\\subsubsection{Fr\\'{e}chet Distribution}\n\n\\begin{itemize}\n\t\\item \\textbf{For $\\gamma_1=0.1:$}\n\t\n\tIn the estimation of $\\gamma_1=0.1,$ for a small percentage of censoring, $\\wp\\le10\\%,$ several confidence interval estimators, with the exception of POT.L and UH, provide good coverage probabilities and reasonable interval lengths. Among these estimators, Hill, MVRB, WW.L, WW.KM and ERM have CP values close to 0.95. In addition, for $\\wp\\ge0.35,$ similar performance is observed as with $\\wp\\le10\\%.$ Here, we noticed better CP values for WW.L compared with WW.KM. This is in conformity with the simulation results reported in \\citet{Worms2014}. Generally, the Hill, MVRB and ERM are the most appropriate for estimating $\\gamma_1=0.1$ for various levels of censoring in the right tail.\n\t\n\t\n\t\\item \\textbf{For $\\gamma_1=0.5:$}\n\t\n\tAt 10\\% censoring in the right tail, the ERM, POT.L, POT, MomR and Zipf estimators provide good coverage probabilities. In terms of interval length, Zipf and MomR provide approximately half of the average interval lengths of the other estimators. Thus, these two estimators are the most appropriate estimators of $\\gamma_1=0.5.$ However, as the percentage of censoring in the right tail increases, the ERM, POT.L and MOM estimators provide the best CP values. Moreover, the POT.L estimator has larger interval lengths, and hence, the ERM estimator is regarded as the most appropriate for estimating $\\gamma_1=0.5.$\n\t\n\t\n\t\\item \\textbf{For $\\gamma_1=0.9:$}\n\t\n\tIn the case of $\\wp=10\\%,$ most of the estimators of $\\gamma_1$ performed well with CP values close to the nominal level of 0.95, except Hill, MVRB, WW.KM and Zipf. The ERM, POT.L, POT, and PMom estimators consistently have CP values close to 0.95 and relatively good interval lengths. 
In addition, as with the case $\\wp=10\\%,$ the estimators of $\\gamma_1=0.9$ exhibited similar performance when $\\wp$ was increased to 35\\% or 65\\%. Overall, ERM, MOM and POT can be used as estimators of $\\gamma_1=0.9$ that are more robust to censoring. \n\t\n\t\n\\end{itemize}\n\n\n\n\\subsubsection{General Comments}\n\nAs may be expected, no single estimator is universally the best for estimating the EVI across distributions, sizes of the EVI and percentages of censoring in the right tail. However, some common underlying behaviours exist. In what follows, we present some general comments on the estimators in all the distributions considered.\n\nIn the first place, we found that the estimators' performance diminishes with an increasing percentage of censoring. In this regard, we noticed either a decline in the values of the coverage probability or wider confidence interval lengths as the percentage of censoring in the right tail increases.\n\nSecondly, most estimators exhibit large bias when estimating small values of $\\gamma_1,$ especially in the Burr and Pareto distributions. However, the proposed ERM estimator is an exception to this as it exhibits high coverage even for the Burr distribution. \n\n\nThirdly, in the case of specific distributions, the following observations were made. In the Burr distribution, ERM, MOM and MomR are generally the best estimators of the EVI. For samples from the Fr\\'{e}chet distribution, ERM and MOM are universally good for estimating various sizes of the EVI and most robust to censoring, whereas in the case of samples from the Pareto distribution, the ERM, PMom and POT estimators of the EVI appear to be the best. \n\nLastly, we found the two estimators, ERM and MOM, to be the most appropriate for the estimation of the EVI across all the distributions. In addition, these estimators are the most robust to censoring and the size of the EVI. 
More importantly, the proposed ERM estimator was observed to be consistently robust for the estimation of the EVI regardless of the latter's size and the percentage of censoring. Moreover, the estimation from the exponential regression model, the basis of the ERM estimator, also leads to estimators of the second-order parameters. These second-order parameters can be used to obtain reduced-bias estimators of quantiles and exceedance probabilities. \n\n\n\\section{Practical Application}\\label{sec_prac}\n\nIn this section, we present an application of the estimators of the EVI discussed in the previous section to study the tails of the distribution of the survival time of AIDS patients. The data were obtained from \\citet{Venables2002}, based on a study by Dr. P. J. Solomon and the Australian National Centre in HIV Epidemiology and Clinical Research. \n\nThe data consist of 2,843 patients, of which 1,761 died while the remaining were right censored. Out of the total number of patients, 2,754 were males, of which 1,708 died and the remaining 1,046 were right censored. In this study, we consider the male patients only. \n\nThese data have been studied in the extreme value theory literature in \\citet{Einmahl2008} and \\citet{Ndao2014}. In the former, the EVI is used to assess the tail heaviness of the right tail of the survival function, $1-F,$ and extreme quantiles are estimated to obtain an indication of how long a healthy man can survive AIDS. The latter uses survival time as a response variable with the age of the patient at diagnosis as a covariate to obtain the conditional EVI (or tail index) and extreme quantiles. Thus, the tails of the distribution of the survival time of male AIDS patients are studied conditional on the age at diagnosis.\n\nFigure \\ref{AIDS} shows the scatter plot and histogram of the Australian AIDS survival data. 
The scatter plot indicates that most of the males who survive longer are censored, and the histogram indicates that there is a lower chance of survival after 7 years from diagnosis with AIDS. \n\nThe estimation of the EVI has been shown in the simulation to be sensitive to the value of $\\wp.$ The values of $\\wp$ must be reasonably moderate in the top order statistics to enable the application of the estimators of the EVI. Therefore, it is necessary in applications to assess the percentage (or proportion) of censoring in the right tail. The right panel of Figure \\ref{AIDS} shows a plot of the proportion of censoring as a function of $k.$ \\citet{Einmahl2008} chose the proportion of censoring as $\\wp=0.28$ and justified the selection as corresponding to the most stable part of the graph, i.e., $60\\le k\\le200.$ However, owing to the sensitivity of the estimators of $\\gamma_1$ to $\\wp,$ we compute our estimates using the actual $\\wp$ values in the data. \n\nBased on the conclusions drawn from the simulation study, and in order to keep the presentation concise, we selected five estimators for illustration. These estimators are ERM, POT, MOM, WW.KM and Zipf. The estimators of the EVI, $\\gamma_1,$ are presented in the left panel of Figure \\ref{SurvEVIQp}. As with the UH estimator used in \\citet{Einmahl2008}, the estimators of $\\gamma_1$ are relatively constant for $k\\ge 200.$ \n\nAlso, in practice, when a set of EVI estimators is to be taken into account,\n\\citet{Henriques2011} provide a simple heuristic approach to aid in selecting an appropriate threshold. \nWe follow a modification of this heuristic approach, selecting an optimal $k$ instead of a percentage of the sample size as used in Section 3. 
Let $\\gamma_1^{(i)},~i\\in\\Omega$ be the list of estimators under consideration, where $\\Omega=\\{\\mbox{Zipf, WW.KM, ERM, MOM, POT} \\}.$ The optimal value of $k$ is chosen as \n\n\\begin{equation}\\label{k_min}\nk_{\\mbox{opt}}=\\underset{k}{\\mathrm{argmin}}\\sqrt{\\sum_{(i,j)\\in\\Omega,~i\\ne j}\\left(\\hat{\\gamma}_1^{(i)}-\\hat{\\gamma}_1^{(j)}\\right)^2}.\n\\end{equation}\n\n\n\\begin{figure}[htp!]\n\t\\centering\n\t\n\t\\subfloat[]{\n\t\t\\includegraphics[height=7cm,width=.48\\textwidth]{Surv_Scat.eps}}\\hfill\n\t\\subfloat[]{\n\t\t\\includegraphics[height=6cm,width=.48\\textwidth]{Surv_Prop.eps}}\\hfill\n\t\\caption{ (a) Survival time of AIDS patients. (b) Estimates of $\\wp.$ }\n\t\\label{AIDS}\n\\end{figure}\n\n\\begin{figure}\n\t\n\t\\centering\n\t\\subfloat[]{\n\t\t\\includegraphics[height=6cm,width=0.48\\linewidth]{Surv_EVI.eps}}\\hfill\n\t\\subfloat[]{\n\t\t\\includegraphics[height=6cm,width=.48\\textwidth]{kopt_EVI.eps}}\\hfill\n\t\\caption{Estimates of $\\gamma_1,$ left panel; Heuristic choice of the threshold $k,$ right panel}\n\t\\label{SurvEVIQp}\n\t\n\t\n\\end{figure}\n\nWe apply (\\ref{k_min}) to the EVI estimators for the AIDS survival data and the results are presented in the right panel of Figure \\ref{SurvEVIQp}. A closer look at the graph shows a stable region between 200 and 600: we choose $k_{\\mbox{opt}}=339$ (which is equal to 12\\% of the sample size and close to the 10\\% used in the simulation study) for the estimation of $\\gamma_1.$ \n\nThe EVI estimates at $k_{\\mbox{opt}}=339$ are shown in Table \\ref{SurvTab}. In \\citet{Einmahl2008}, only the generalised Hill estimator, $\\hat{\\gamma}_1^{(c,UH)},$ was used for the estimation of the EVI. The estimate of $\\hat{\\gamma}_1^{(c,UH)}$ was found to be 0.14. In addition, \\citet{Ndao2014} estimates $\\gamma_1$ as 0.304, 0.340 and 0.323 for males diagnosed with AIDS at ages 27, 37 and 47 years, respectively. 
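A minimal sketch of the heuristic (\ref{k_min}) (the estimator names and trajectories below are illustrative placeholders, not the AIDS-data estimates): for each candidate $k$, sum the squared pairwise differences between the estimates and pick the minimizer. Summing over unordered pairs only halves each term relative to summing over all $(i,j)$ with $i\ne j$, which leaves the argmin unchanged.

```python
from itertools import combinations
import numpy as np

def k_opt(estimates):
    """estimates: dict name -> 1-D array of EVI estimates, one entry per k."""
    trajs = list(estimates.values())
    # Sum of squared pairwise differences at each k; unordered pairs
    # suffice since including (j, i) as well would only double the sum.
    score = sum((a - b) ** 2 for a, b in combinations(trajs, 2))
    return int(np.argmin(np.sqrt(score)))

# Toy trajectories over three candidate k values: all estimators
# agree at the middle index, so the heuristic selects it.
ests = {"Zipf": np.array([1.0, 0.5, 0.9]),
        "MOM":  np.array([0.2, 0.5, 0.1]),
        "ERM":  np.array([0.6, 0.5, 1.5])}
```

In practice the returned array index would be mapped back to the corresponding number of top order statistics $k$.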
\n\nTherefore, with the exception of the Zipf estimator, all the other estimators considered give estimates within the range of the values provided by \\citet{Einmahl2008} and \\citet{Ndao2014}. In particular, our ERM estimator of $\\gamma_1$ and the WW.KM estimator give estimates close to those of \\citet{Ndao2014}, although age was not considered as a factor. Moreover, the ERM estimator is quite stable over most values of $k.$\n\n\n\\begin{table}[htp!]\n\t\\centering\n\t\\caption{Estimates of the EVI at $k_{\\mbox{opt}}$}\n\t\\begin{tabular}{llllll}\n\t\t\\toprule\n\t\t&\\multicolumn{5}{c}{EVI} \\\\\\cmidrule{2-6}\n\t\tEstimator & WW.KM & Zipf & MOM & POT& ERM \\\\\n\t\tEstimate &0.334 & 0.587& 0.244& 0.193 & 0.334 \\\\\\bottomrule\n\t\\end{tabular}\n\t\\label{SurvTab}\n\\end{table}\n\n\n\n\n\\section{Conclusions}\\label{sec_conc}\n\nThis paper reviews various estimators of the extreme value index when observations are subject to right random censoring. In addition, an estimator based on the exponential regression model, among others, was proposed. Since the asymptotic distributions are not known for all the estimators, a theoretical comparison was not possible. Therefore, a simulation study was conducted to compare the performance of the various estimators under different distributions, sizes of the EVI and percentages of censoring in the right tail. The performance criteria used were bias, MAD, confidence interval length and coverage probability. The simulation results show that the performance of the estimators differs depending on the underlying distribution, the size of the EVI, and the percentage of censoring in the right tail. Therefore, no estimator was shown to be universally the best across all these scenarios. However, certain estimators perform reasonably well across most distributions. These are the estimators that we recommend as appropriate for the estimation of the EVI. 
In this regard, if a practitioner is interested in estimators that perform well across distributions in the sense of having good coverage and small interval size, then we recommend the proposed ERM and MOM estimators. The estimators that performed well in the simulation study were illustrated using real data on the survival of AIDS patients. Generally, we recommend that practitioners assess the distribution of a dataset, the size of $\\gamma_1$ and the proportion of censoring using other external information. This includes graphical plots to assist in knowing the tail behaviour of the underlying distribution and a plot of the proportion of censoring at different values of $k.$ In addition, several estimators can be used to compute estimates of $\\gamma_1$ to assess the possible size of $\\gamma_1,$ and hence guide the selection of an appropriate estimator. We believe that the findings from this simulation will help practitioners in the selection of estimators of the EVI when data is subject to right random censoring.\n\n\\renewcommand{\\bibname}{References}\n\\bibliographystyle{apalike}\n\\nocite{*}\n\n\n\\section{Introduction}\\label{introduction}\n\n\nThe majority of existing wireless communication systems operate in the sub-$6$ GHz microwave spectrum, which has now become very crowded.\nAs a result, {\\em millimeter wave} (mmWave) spectrum ranging from $30$ to $300$ GHz has been considered as an alternative to achieve very high data rates in\nthe next generation wireless systems. 
At these frequencies, a signal bandwidth of 1GHz with {\\em Signal-to-Noise Ratio} (SNR) between 0dB and 3dB yields data rates $\\sim 1$Gb\/s per data stream.\nA mmWave {\\em Base Station} (BS) supporting multiple data streams through the use of \nmultiuser {\\em Multiple-Input Multiple-Output} (MIMO) can achieve tens of Gb\/s of aggregate rate, \nthus fulfilling the requirements of enhanced Mobile Broadband (eMBB) in 5G \\cite{boccardi2014five,andrews2014will}.\n\nA main challenge of communication at mmWaves is the very short range of isotropic propagation.\nAccording to Friis' law \\cite{Zhaojie2016BA}, the effective area of an isotropic antenna decreases polynomially with frequency; therefore, the isotropic pathloss\nat mmWaves is considerably larger compared with its sub-$6$ GHz counterpart.\nMoreover, signal propagation through scattering elements also suffers from a large attenuation at high frequencies.\nFortunately, the small wavelength of mmWave signals makes it possible to pack a large number of antenna elements\nin a small form factor, such that one can cope with the severe isotropic pathloss by using large antenna arrays both at the BS side and the {\\em User Equipment} (UE) side, providing an overall large beamforming gain. 
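As a rough numerical illustration of this pathloss gap (the carriers 2.4 GHz and 60 GHz and the 100 m distance are our choices, not the paper's), the free-space pathloss between isotropic antennas grows as $20\log_{10} f$, so moving from sub-$6$ GHz to mmWave costs roughly $28$ dB that must be recovered through beamforming gain:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def fspl_db(distance_m, freq_hz):
    # Friis free-space pathloss between isotropic antennas,
    # FSPL = (4*pi*d*f/c)^2, expressed in dB.
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# Extra isotropic pathloss of a 60 GHz link over a 2.4 GHz link at the
# same distance: 20*log10(60/2.4) ~= 27.96 dB, independent of distance.
gap_db = fspl_db(100, 60e9) - fspl_db(100, 2.4e9)
```

This frequency-only gap is what large antenna arrays at both link ends are meant to compensate.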
An essential component to obtain such large antenna gains consists of identifying \nsuitable narrow beam combinations, i.e., a pair of {\\em Angle of Departure} (AoD) at the BS and {\\em Angle of Arrival} (AoA) at the UE, \nyielding a sufficiently large beamforming gain through the scatterers in the channel.\n\\footnote{We refer to AoD for the BS and AoA for the UE since the proposed scheme consists of downlink probing from the BS to the UEs.\tOf course, due to the propagation angle reciprocity, the roles of AoA and AoD are reversed in the uplink.}\nThe problem of finding an AoA-AoD pair with a large channel gain is referred to as\n{\\em Initial Beam Training, Acquisition, or Alignment} in the literature (see references in Section \\ref{related-work}).\nConsistent with our previous work \\cite{sxsBA2017}, we shall refer to it simply as {\\em Beam Alignment} (BA).\n\n\nIt is important to note under which conditions the BA operation must be performed. In this work, we\nfocus on MIMO devices with a {\\em Hybrid Digital Analog} (HDA) structure.\nHDA MIMO is widely proposed especially for mmWave systems, since the size and power consumption of all-digital architectures prevent the integration of many antenna elements in a small space. In a HDA implementation, the signal processing is done via the concatenation of an analog part implementing the beamforming functions, and a digital part implementing the baseband processing \\cite{molisch2017hybrid,OverviewHeath2016}.\nThis poses some specific challenges: (i) The signal received at the antennas passes through an analog beamforming network with only a limited number\nof {\\em Radio Frequency} (RF) chains, much smaller than the number of antennas.\nHence, the baseband signal processing has access to only a low-dimension projection of the whole received signal;\n(ii) Due to the large isotropic pathloss, the received signal power is very low before beamforming, i.e., at every antenna port. 
Therefore, \nthe implementation of BA is confronted with a very low SNR;\n(iii) Because of the large number of antennas at both sides, the size of the channel matrix between each UE and the BS is very large. However,\nextensive channel measurements have shown that mmWave channels typically exhibit only a few (on average up to $3$) multipath components, each corresponding to\na scattering cluster with small delay\/angle spreading \\cite{Rappaport2014Capacity,Sayeed2014Sparse}. \nAs a result, a suitable BA scheme requires identifying a very sparse\nset of AoA-AoDs in a very large-dimension channel matrix \\cite{RobertSOMP2017, Rappaport2017lowrank}.\n\nThe other fundamental aspect of the BA problem is that this is the first operation that a UE must accomplish in order to communicate with the BS. Hence,\nwhile coarse frame and carrier frequency synchronization may be assumed (especially for the non-standalone system, assisted by some other existing\ncell operating at lower frequencies), the fine timing and Doppler shift compensation cannot be assumed. It follows that the BA operation must cope with significant timing offsets and\nDoppler shifts. In addition, in a multipath propagation environment with paths coming from different directions, each path may be affected by a different Doppler shift.\nIn multicarrier (OFDM-based) systems, this may lead to significant inter-carrier interference, which has been typically ignored in \nmost of the current literature.\n\n\n\n\\subsection{Related Work}\\label{related-work}\n\nThe most straightforward BA method is an exhaustive search, where the BS and the UE scan all the AoA-AoD beam pairs until they find a strong one \\cite{Rappaport2014Capacity}. This is, however, prohibitively time-consuming, especially considering the very large dimension of the channel matrix due to the very large number of antennas.\nSeveral BA algorithms have been recently proposed in the literature. 
All these algorithms, in some way,\naim at achieving reliable BA while using less overhead than the exhaustive scheme.\n\nIn \\cite{Palacios2017PseudoExhaus}, a two-stage pseudo-exhaustive BA scheme was proposed, where in the first stage,\nthe BS isotropically probes the channel, while the UE performs beam sweeping to find the best AoA, and in the second stage, the UE probes the channel along\nthe estimated AoA from the first stage, while the BS performs beam sweeping to find the best AoD.\nA main limitation of \\cite{Palacios2017PseudoExhaus} is that, due to the isotropic BS beamforming at the first stage, the scheme suffers from a very low pre-beamforming SNR \\cite{Ghosh2013Backhaul, SaeidBA2016,RobertSOMP2017}, which may impair the whole BA performance. \n\nSome mmWave standards such as IEEE 802.11ad \\cite{80211ad} proposed to use multi-level hierarchical BA schemes (e.g., see also\n \\cite{alkhateeb2014channel,Branka2016Overlap,Noh2017bisection,Hussain2017bisection}).\n The underlying idea is to start with sectors of wide beams to do a coarse BA and then shrink the beamwidth adaptively and successively to obtain a more refined BA.\nThe drawback of such schemes, however, is that each UE has its own specific AoD as seen from the BS side, and thus the BS needs to interact with each UE individually. As a result, all these\nhierarchical schemes require a coordination among the UEs and the BS, which is difficult to realize at the initial channel acquisition stage.\nMoreover, since hierarchical schemes require interactive uplink-downlink communication \nbetween the BS and each individual UE, it is not clear how the overhead of such schemes scales in small cell scenarios \nwith significant mobility of users across cells, where the BA procedure should be repeated at each handover. 
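The hierarchical idea can be sketched with an idealized, noise-free one-dimensional search (the `sector_gain` oracle below is our abstraction of a wide-beam measurement, not a scheme from the cited standards): the sector known to contain the strongest beam is halved at each stage, so roughly $2\log_2 N$ measurements replace the $N$ probes of an exhaustive sweep.

```python
def hierarchical_search(n_beams, sector_gain):
    # sector_gain(lo, hi) returns the measured power when probing the
    # contiguous sector of beam indices [lo, hi) with one wide beam.
    lo, hi = 0, n_beams
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Keep the half-sector with the stronger measurement.
        if sector_gain(lo, mid) >= sector_gain(mid, hi):
            hi = mid
        else:
            lo = mid
    return lo

# Idealized oracle: unit gain iff the sector contains the true best beam.
best = 37
gain = lambda lo, hi: 1.0 if lo <= best < hi else 0.0
found = hierarchical_search(64, gain)  # -> 37, after 2*log2(64) = 12 probes
```

In the noise-free setting the bisection always converges to the best beam; the coordination and per-UE interaction criticized above arise because the oracle must be realized by live BS-UE measurements at every stage.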
\n\n\n\nThe sparse nature of mmWave channels, i.e., large-dimension channel matrices along with very sparse scatterers in the AoA-AoD domain \\cite{Rappaport2014Capacity,Sayeed2014Sparse}, motivates the application of {\\em Compressed Sensing} (CS) methods \nto speed up the BA. There are two groups of CS-based methods in the literature.\nThe first group (e.g., see \\cite{AhmedFreqOMP2015, RobertSOMP2017,Time2017,AlkhateebTimeDomain2017}) applies \nCS to estimate the complex baseband channel coefficients. These algorithms are efficient and particularly attractive for multiuser scenarios, \nbut they are based on the assumption that the instantaneous channel remains invariant during the whole\nprobing\/measuring stage. As anticipated before, this assumption is difficult to meet at mmWaves because of the large Doppler spread between the \nmultipath components coming from different angles, implying significant time-variations of the channel coefficients even for UEs with small mobility \\cite{WeilerMeasure2014,HeathVariation2017,Rappaport2017lowrank}.\\footnote{Notice that the channel delay spread and time-variation are greatly reduced {\\em after BA is achieved}, since once the beams are aligned, the communication occurs only through a single multipath component\n\twith small effective angular spread, whose delay and Doppler shift can be well compensated \\cite{HeathVariation2017}.\n\tHowever, {\\em before BA is achieved}, the channel delay spread and time-variation can be large due to the presence of\n\tseveral multipath components, each with its own delay and Doppler shift.\n\tIn this case, even a small motion of a few centimeters traverses several wavelengths, potentially producing multiple\n\tdeep fades \\cite{WeilerMeasure2014}.}\nThe second group of CS-based schemes focuses on estimating the second-order statistics of the channel, i.e., the covariance of the channel matrix,\nwhich is very robust to channel variations. 
In \\cite{Rappaport2017lowrank} for example, a {\\em Maximum Likelihood} (ML) method was proposed to estimate the covariance of the channel matrix. However, this scheme suffers from low SNR and the BA is achieved only at the UE side because of isotropic probing \nat the BS. In our previous work \\cite{sxsBA2017}, we proposed an efficient BA scheme that jointly estimates the two-sided AoA-AoD of the strongest path from the second-order statistic of the channel matrix. A limitation of \\cite{sxsBA2017} as well as most works\nbased on OFDM signaling \\cite{Rappaport2017lowrank,RobertSOMP2017} is the assumption of perfect OFDM frame synchronization and no inter-carrier interference. This is in fact difficult to achieve at mmWaves due to the potentially large multipath delay spread, \nDoppler shifts, and very low SNR before BA.\nThese weaknesses, together with the fact that OFDM signaling suffers from large {\\em Peak-to-Average Power Ratio} (PAPR), has motivated the proposal of\nsingle-carrier transmission \\cite{Ghosh2014singleCarrier,Colavolpe2017singleCarrier} as a more favorable option at mmWaves.\nRecently, \\cite{AlkhateebTimeDomain2017,Time2017} proposed a time-domain BA approach based on CS techniques for single-carrier mmWave systems.\nHowever, as in \\cite{AhmedFreqOMP2015, RobertSOMP2017}, this work focuses on estimating the instantaneous complex channel coefficients,\nwith the assumption that these complex coefficients remain invariant over the whole training stage, which is an unrealistic assumption, as discussed above \\cite{WeilerMeasure2014,HeathVariation2017,Rappaport2017lowrank, sxsBA2017}.\n\n\n\n\n\\subsection{Contributions}\n\nIn this paper, we propose an efficient BA scheme for single-carrier mmWave communications with HDA transceivers \nand frequency-selective multipath channels. 
\nIn the proposed scheme, each UE independently estimates its best AoA-AoD pair over the reserved beacon slots (see Section \ref{systemmodel}), during which the BS periodically broadcasts its probing time-domain sequences. We exploit the sparsity of the mmWave channel in both the angle and delay domains \cite{NasserAngleTime2016} to reduce the training overhead. We also pose the estimation of the strongest AoA-AoD pair as a {\em Non-Negative Least Squares} (NNLS) problem, which can be efficiently solved by standard techniques. Our main contributions can be summarized as follows:\n\n\n{\bf 1)} {\em Pure Time-Domain Operation.} Unlike our prior work in \cite{sxsBA2017} and other works based on OFDM signaling \cite{Rappaport2017lowrank,RobertSOMP2017}, the scheme proposed in this paper operates completely in the time-domain and uses \textit{Pseudo-Noise} (PN) sequences with \textit{good} correlation properties that suit single-carrier mmWave systems. \n\n{\bf 2)} {\em More General and Realistic mmWave Channel Model.} We consider a quite general mmWave wireless channel model, taking into account the fundamental features of mmWave channels such as fast time-variation due to Doppler, frequency-selectivity, and the AoA-AoD sparsity \cite{WeilerMeasure2014,Rappaport2017lowrank,Tim2018}. As in \cite{sxsBA2017, Rappaport2017lowrank}, we design a suitable signaling scheme to collect quadratic measurements, yielding estimates of the channel second-order statistics in the AoA-AoD domain. Consequently, the proposed scheme is highly robust to channel time-variations. \n\n{\bf 3)} {\em Tolerance to Large Doppler Shifts.} Unlike our prior work in \cite{sxs2017Time} and the work in \cite{OverviewHeath2016}, which model Doppler as a piecewise constant phase shift changing across blocks of symbols, here we consider a continuous linear (in time) phase shift within the whole beacon slot. 
\nAs a by-product of our refined Doppler model, we notice that longer PN sequences do not necessarily yield better performance, since they undergo larger phase rotations due to the Doppler. We illustrate by numerical experiments that there is an optimal sequence length for a given set of system parameters, with which the proposed scheme achieves good performance in the presence of the large Doppler shifts encountered at mmWaves.\n\t\n{\bf 4)} {\em Effectiveness of Single-Carrier Modulation.} Our proposed time-domain BA scheme is tailored to single-carrier mmWave systems. In particular, we show that, after achieving BA, the effective channel reduces essentially to a single path with a single delay and Doppler shift, with relatively large SNR due to the high beamforming gain. This means that single-carrier modulation needs no time-domain equalization, and the baseband signal processing becomes very simple, since it requires only standard timing and {\em carrier frequency offset} (CFO) estimation and compensation. \n\n{\bf Notation}: We denote vectors by boldface small letters (e.g., ${\bf a}$) and matrices by boldface capital letters (e.g., ${\bf A}$). Scalars are denoted by non-boldface letters (e.g., $a$, $A$). We represent sets by calligraphic letters (e.g., ${\cal A}$) and their cardinality by $|{\cal A}|$. We use ${\mathbb E}$ for the expectation, $\otimes$ for the Kronecker product of two matrices, ${\bf A}^{\sf T}$ for the transpose, ${\bf A}^*$ for the conjugate, and ${\bf A}^{{\sf H}}$ for the conjugate transpose of a matrix ${\bf A}$. 
For an integer $k\in\field{Z}$, we use the shorthand notation $[k]$ for the set of positive integers $\{1,...,k\}$.\n\n\n\n\n\section{Problem Statement}\n\nIn this section, we provide a general overview of the BA problem based on the channel second-order statistics.\n\begin{figure}[t]\n\t\centerline{\includegraphics[width=8.5cm]{1_bread3.pdf}}\n\t\caption{{\small Illustration of the channel sparsity in the {\em Angle of Arrival} (AoA), {\em Angle of Departure} (AoD), and delay ($\tau$) domains. (a) Slices of the channel power spread function over discrete delay taps, where only a few slices contain scattering components with large power. (b) Marginal power spread function of the channel in the AoA-AoD domain obtained from the integration of the power spread function along all the delay taps.}}\n\t\label{1_bread}\n\end{figure}\n\subsection{Channel Second-Order Statistics}\nWe consider a standard and widely used mmWave scattering channel (e.g., see \cite{Rappaport2014Capacity,Sayeed2014Sparse}) illustrated in \figref{1_bread} (a).\nThe propagation channel between the BS and a generic UE consists of a sparse collection of multipath components in the AoA-AoD-delay $(\phi,\theta,\tau)$ domain, including a possible {\em Line-of-Sight} (LOS) component as well as some {\em Non-Line-of-Sight} (NLOS) reflected paths \cite{Colavolpe2017singleCarrier}.\nThe scattering channel is modeled as a locally {\em Wide-Sense Stationary} (WSS) process with \textit{Power Spread Function} (PSF) $f_p(\phi, \theta, \tau)$, such that $f_p(\phi, \theta, \tau)\,d\phi\,d\theta\,d\tau$ is the power carried by the scattering components in the AoA-AoD-delay cell $[\phi, \phi + d\phi) \times [\theta, \theta + d\theta) \times [\tau, \tau + d\tau)$.\nThe PSF encodes the second-order statistics of the channel and is locally time-invariant as long as the propagation geometry does not change significantly.\nThe time scale over which the PSF is time-invariant is very large with respect to the inverse of the signaling bandwidth, justifying the locally WSS 
assumption.\nPractical channel measurements have shown that only a few tapped delay elements are enough to represent the sparse channel \cite{Rappaport2014Capacity,Sayeed2014Sparse,NasserAngleTime2016}. This is illustrated in \figref{1_bread} (a), where only a few slices of the PSF contain scattering components with large power. The marginal PSF of the channel in the AoA-AoD domain is obtained by integrating over all the delay taps as\n\begin{align}\nf_p( \phi, \theta)=\int_{\tau} f_p(\phi, \theta, d\tau),\label{gam_th_th}\n\end{align}\nand it is typically very sparse (see, e.g., Fig.\,\ref{1_bread} (b)).\n\n\n\n\n\n\subsection{Beam-Alignment Using Second-Order Statistics}\nIn terms of BA, we are interested in finding an AoA-AoD pair corresponding to a strong communication path between the UE and the BS.\nIf the marginal PSF of the channel in the AoA-AoD domain $f_p(\phi, \theta)$ as in \eqref{gam_th_th} were a priori known, the BA problem would simply boil down to finding the support of $f_p(\phi, \theta)$ (e.g., see the two bubbles in \figref{1_bread} (b)). In practice, however, $f_p(\phi, \theta)$ is not a priori known and must be estimated via a suitable signaling scheme. With this in mind, we can pose the BA problem as follows.\n\n\vspace{1mm}\n\noindent\n\colorbox{gray!40}{\n\t\begin{minipage}{1.0\textwidth}\n\t\t\noindent{\bf BA Problem}: Design a suitable signaling scheme between the BS and the UE, find an estimate of the AoA-AoD PSF $f_p(\phi, \theta)$, and identify an AoA-AoD pair $(\phi_0, \theta_0)$ with a sufficiently large strength $f_p(\phi_0,\theta_0)$. \t\n\t\end{minipage}\n}\n\vspace{1mm}\n\n\n\n\nIn this paper, we use pseudo-random waveforms with good auto-\/cross-correlation properties as the channel probing signal. 
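As a toy illustration of the BA problem just stated, the following Python sketch marginalizes a discretized PSF over the delay taps, as in \eqref{gam_th_th}, and locates the strongest AoA-AoD pair on the grid. All grid sizes, indices, and powers below are illustrative assumptions, not values used in the paper.

```python
import numpy as np

# Hypothetical discretized PSF on an AoA x AoD x delay grid
# (all sizes, indices, and powers below are illustrative assumptions).
N_phi, N_theta, N_tau = 32, 32, 16
f_p = np.zeros((N_phi, N_theta, N_tau))

# Two sparse scattering components: (AoA idx, AoD idx, delay tap, power)
for (i, j, k, p) in [(5, 20, 2, 1.0), (25, 8, 7, 0.4)]:
    f_p[i, j, k] = p

# Marginal PSF in the AoA-AoD domain: sum (integrate) over all delay taps
f_p_marginal = f_p.sum(axis=2)

# BA amounts to locating the support of the marginal PSF and picking
# the strongest AoA-AoD pair
phi0, theta0 = np.unravel_index(np.argmax(f_p_marginal), f_p_marginal.shape)
print(int(phi0), int(theta0))  # -> 5 20
```

In the actual scheme, of course, $f_p(\phi,\theta)$ is not available and must first be estimated from the probing measurements.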
We will show that, using the proposed signaling, each UE is able to collect its own quadratic measurements, which yield noisy linear projections of the AoA-AoD PSF $f_p(\phi, \theta)$.\nWe exploit the sparsity and non-negativity of the PSF to reformulate the estimation of $f_p(\phi, \theta)$ as an NNLS problem, which yields a good estimate of the channel second-order statistics.\n\n\n\n\begin{figure*}[t]\n\t\centerline{\includegraphics[width=14cm]{2_scheme.pdf}}\n\caption{{\small (a) (Top) Frame structure of the proposed \textit{Beam Alignment} (BA) scheme. (Bottom) Each beacon slot consists of $S$ PN sequences indexed by $s'\in [S]$, and all the PN sequences have access to the whole effective bandwidth $B'\leq B$. (b) Illustration of the proposed BA process between the BS and a generic UE, where the procedures (\#2$\sim$\#5) are independently done at each UE, and all the UEs share the same BS beamforming codebook (\#1).}}\n\label{2_frame_timeline}\n\end{figure*}\n\n\n\figref{2_frame_timeline} (a) illustrates the proposed frame structure, which consists of three parts: the downlink beacon slot, the {\em Random Access Control CHannel} (RACCH) slot, and the data slot. An overview of the proposed initial acquisition and BA protocol is illustrated in \figref{2_frame_timeline} (b). As in \cite{sxsBA2017,sxs2017Time}, the measurements are collected at the UEs from the downlink beacon slots broadcast by the BS. Each UE selects the strongest AoA-AoD pair $(\phi_0, \theta_0)$ in the estimated $f_p(\phi, \theta)$ as the joint beamforming directions for the data transmission. Then, the acquisition protocol proceeds as described in \cite{sxsBA2017,sxs2017Time}. Namely, the UE sends a beamformed packet to the BS in the RACCH slot, during which the BS stays in listening mode. This packet contains basic information such as the user ID and the index of the beam corresponding to the selected AoD $\theta_0$. 
The BS responds with a data packet with a piggybacked acknowledgment in the data subslot of a subsequent frame, and from this moment on the BS and the UE are connected. Further beam refinement and tracking are possible to adapt to small variations of the propagation geometry. However, this can be achieved by rather standard array processing and falls outside the scope of this paper. \n\t\n\t\n\n\n\subsection{Equivalent Channel after Beam-Alignment}\nAfter completing a BA cycle as described above, the UE and the BS focus their beams on a specific AoA-AoD pair $(\phi_0, \theta_0)$ to boost the SNR as much as possible.\nAs a result, the equivalent channel after beamforming along $(\phi_0, \theta_0)$ can be represented by a \textit{Single-Input Single-Output} (SISO) channel.\nThe PSF of the resulting SISO channel can be well approximated by $f_p(\tau)=f_p(\phi_0, \theta_0, \tau)$ in the delay domain, which simply corresponds to the PSF obtained by focusing the transmitted signal power along the estimated AoA-AoD pair $(\phi_0, \theta_0)$. Due to the underlying channel sparsity \cite{Rappaport2014Capacity,Sayeed2014Sparse,NasserAngleTime2016}, we expect that, after BA, the PSF $f_p(\tau)$ consists of essentially a single scattering element with a specific delay $\tau_0$ and Doppler shift $\nu_0$, which can be estimated and compensated by a standard timing and frequency offset synchronization subsystem. Since this operation takes place after BA, the operating SNR is not at all critical, and standard techniques for single-carrier synchronization can be used. Furthermore, since the effective channel after BA reduces to a single multipath component, it is essentially frequency-flat. It follows that near-optimal performance can be achieved by single-carrier communication without the need for equalization. 
This is confirmed by the results in Section \ref{results}, where we use the effective SISO channel to derive upper and lower bounds on the achievable ergodic rate after BA, showing that time-domain equalization is effectively not needed.\n\n\n\n\n\n\n\n\n\n\n\n\n\section{Mathematical Modeling}\label{systemmodel}\n\n\n\n\subsection{Channel Model}\nConsider a generic UE in a mmWave system served by a specific BS. Suppose that the BS is equipped with a {\em Uniform Linear Array} (ULA) having $M$ antennas and $M_{\text{RF}}\ll M$ RF chains. The UE also has a ULA with $N$ antennas and $N_{\text{RF}}\ll N$ RF chains.\nWe assume that both the BS and the UE apply hybrid beamforming consisting of an analog precoder\/combiner and a digital precoder\/combiner. In this paper, we focus mainly on training the analog precoders\/combiners in the initial BA phase. We assume that the propagation channel between the BS and the UE consists of $L\ll \max\{M,N\}$ multipath components, where the $N\times M$ baseband equivalent impulse response of the channel at time slot $s$ is given by\n\begin{align}\label{ch_mod_disc_mp}\n{\sf H}_s(t,\tau)&=\sum_{l=1}^L \rho_{s,l} e^{j2\pi \nu_{l}t}{\bf a}_{\text{R}}(\phi_l) {\bf a}_{\text{T}}(\theta_l)^{{\sf H}} \delta(\tau-\tau_l),\n\end{align}\nwhere $(\phi_l, \theta_l, \tau_l, \nu_l)$ denote the AoA, AoD, delay, and Doppler shift of the $l$-th component, and $\delta(\cdot)$ denotes the Dirac delta function.\nThe vectors $ {\bf a}_{\text{T}}(\theta_l)\in {\mathbb C}^{M}$ and $ {\bf a}_{\text{R}}(\phi_l)\in {\mathbb C}^{N}$ are the array response vectors of the BS and the UE at AoD $\theta_l$ and AoA $\phi_l$, respectively, with elements given by\n\begin{subequations} \label{array-resp}\n\t\begin{align}\n\t[{\bf a}_{\text{T}}(\theta)]_m&=e^{j (m-1)\pi \sin(\theta)}, m \in[M], \label{a_resp_BS}\\\n\t[{\bf a}_{\text{R}}(\phi)]_n&=e^{j (n-1)\pi \sin(\phi)},\ 
n\\in[N],\\label{a_resp_UE}\n\t\\end{align}\n\\end{subequations}\nwhere we assume that the spacing of the ULA antennas equals to half wavelength.\n\n\nFor simplicity, in the channel model \\eqref{ch_mod_disc_mp}, we neglect the effect of pulse shaping \nand assume a frequency-flat pulse response over the signal bandwidth \\cite{Rappaport2017lowrank}.\nAlso, for the sake of modeling simplicity, we assume here that each multipath component has a very narrow footprint over the AoA-AoD and delay domain.\nExtension to more widely spread multipath clusters is straightforward and will be applied in the numerical simulations.\nMoreover, we make the very standard assumption in array processing that the array response vectors are invariant with \nfrequency over the signal bandwidth (i.e., that wavelength $\\lambda$ over the frequency interval $f\\in[f_0-B\/2,f_0+B\/2]$ \ncan be approximated as $\\lambda_0 = c\/f_0$ where $c$ denotes the speed of light. \nThis is indeed well verified when $B$ is less than $1\/10$ of the carrier frequency \n(e.g., $B = 1$GHz with carrier between $30$ and $70$ GHz).\nEach scatterer corresponding to a AoA-AoD-Delay $(\\phi_l, \\theta_l, \\tau_l)$ has a Doppler shift $\\nu_l = \\frac{\\Delta v_{l}f_0}{c}$ where\n$\\Delta v_{l}$ indicates the relative speed of the receiver, the $l$-th scatterer, and the transmitter \\cite{OverviewHeath2016}. \nWe adopt a block fading model, where the channel gains $\\rho_{s,l}$ remain invariant over the\nchannel \\textit{coherence time} $\\Delta t_c$ but change i.i.d. 
randomly across different \textit{coherence times} \cite{Rappaport2017lowrank}.\nSince each scatterer in practice is a superposition of many smaller components that have (roughly) the same AoA-AoD and delay, we assume a general Rice fading model given by\n\begin{align}\label{rice_fading}\n\rho_{s,l}\sim \sqrt{\gamma_l} \left(\sqrt{\frac{\eta_l}{1+\eta_l}}+\frac{1}{\sqrt{1+\eta_l}}\check{\rho}_{s,l}\right),\n\end{align}\nwhere $\gamma_l$ denotes the overall multipath component strength, $\eta_l\in[0,\infty)$ indicates the strength ratio between the LOS and the NLOS components, and $\check{\rho}_{s,l} \sim {{\cal C}{\cal N}}(0, 1)$ is a zero-mean unit-variance complex Gaussian random variable. In particular, $\eta_l\to \infty$ indicates a pure LOS path, while $\eta_l=0$ indicates a pure NLOS path, affected by standard Rayleigh fading.\n\n\n\n\n\n\n\subsection{Proposed Signaling Scheme}\label{signal_procedure}\n\nWe assume that the BS can simultaneously transmit up to $M_{\text{RF}}\ll M$ different pilot streams.\nIn this paper, we consider a time-domain signaling scheme where a unique PN sequence is assigned to each RF chain (pilot stream), similar to standard {\em Code Division Multiple Access} (CDMA).\nUnlike our previous work in \cite{sxsBA2017}, where different pilot streams are assigned disjoint sets of orthogonal subcarriers, such that (in the absence of inter-carrier interference) they can be perfectly separated by the UE in the frequency domain, in the proposed scheme, different pilot streams are generally non-orthogonal (non-separable) but become almost separable if the assigned PN sequences have good cross-correlation properties and are sufficiently long.\nLet $x_{s,i}(t)$, $t \in [st_0, (s+1)t_0)$, be the continuous-time baseband equivalent PN signal corresponding to the $i$-th ($i\in[M_{\text{RF}}]$) pilot stream transmitted over the $s$-th slot, given 
by\n\\begin{align}\\label{time_PN_sig}\nx_{s,i}(t)=\\sum_{n=1}^{N_c}\\varrho_{n,i}p_{r}(t-nT_c),\\quad \\varrho_{n,i}\\in\\{1,-1\\},\n\\end{align}\nwhere $t_0$ denotes the duration of the PN sequence, $p_{r}(t)$ is the normalized band-limited pulse shaping filter response with normalized energy $\\int |p_{r}(t)|^2dt=1$. We assume that the PN sequence has a chip duration of $T_c$, a bandwidth of $B'=1\/T_c \\leq B$, and a total of $N_c=t_0\/T_c=t_0B'$ chips, where $B$ is the maximum available bandwidth. We shall choose a suitable PN sequence length $N_c$, such that the resulting time-domain signal \\eqref{time_PN_sig} is transmitted in a sufficiently small time-interval $t_0$ over which the channel can be considered time-invariant, i.e., $t_0\\leq \\Delta t_c$.\n\n\n\n\n\nTo transmit the $i$-th pilot stream, the BS applies a beamforming vector ${\\bf u}_{s,i} \\in {\\mathbb C}^M$. Without loss of generality, the beamforming vectors are normalized such that $\\|{\\bf u}_{s,i}\\|=1$. As mentioned before, we consider a HDA beamforming architecture where the beamforming function is implemented in the analog RF domain. Hence,\nthe beamforming vectors ${\\bf u}_{s,i}$, $i \\in [M_{\\text{RF}}]$, are independent of frequency and constant over the whole bandwidth.\nThe transmitted signal at slot $s$ is given by\n\\begin{align}\n{\\bf x}_s(t) & =\\sum_{i=1}^{M_{\\text{RF}}}\\sqrt{\\frac{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} T_c}{M_{\\text{RF}}}} x_{s,i}(t) {\\bf u}_{s,i}\n= \\sum_{i=1}^{M_{\\text{RF}}}\\sum_{n=1}^{N_c}\\sqrt{\\frac{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} T_c}{M_{\\text{RF}}}}\\varrho_{n,i}p_{r}(t-nT_c){\\bf u}_{s,i},\n\\end{align}\nwhere ${P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}}$ is the total transmit power which is equally distributed into the $M_{\\text{RF}}$ RF chains from BS. 
Consequently, the received baseband equivalent signal at the UE array is\n\begin{align}\label{receiveTT}\n{\bf r}_s(t)&=\int {\sf H}_s(t,d\tau) {\bf x}_s(t-\tau) =\sum_{l=1}^L {\sf H}_{s,l}(t){\bf x}_s(t-\tau_l)\nonumber\\\n&=\sum_{i=1}^{M_{\text{RF}}} \sum_{l=1}^L\sqrt{\frac{{P_{{\mathtt t} {\mathtt o} {\mathtt t}}} T_c}{M_{\text{RF}}}} {\sf H}_{s,l}(t)x_{s,i}(t-\tau_l){\bf u}_{s,i} ,\n\end{align}\nwhere ${\sf H}_{s,l}(t) := \rho_{s,l} e^{j2\pi \nu_{l}t} {\bf a}_{\text{R}}(\phi_l) {\bf a}_{\text{T}}(\theta_l)^{{\sf H}}$, $l\in[L]$, are the time-varying MIMO channel ``taps'' corresponding to the $L$ multipath components in \eqref{ch_mod_disc_mp}. \n\n\n\n\n\n\nWith a hybrid MIMO structure, the UE does not have direct access to (a sampled version of) the components of ${\bf r}_s(t)$. Instead, at each measurement slot $s$, the UE must apply some beamforming vector in the analog domain, obtaining a projection of the received signal. Since the UE has $N_{\text{RF}}$ RF chains, it can obtain up to $N_{\text{RF}}$ such projections per slot.\nThe analog RF signal received at the UE antenna array is distributed across the $N_{\text{RF}}$ RF chains for demodulation.\nThis is achieved by signal splitters that divide the signal power by a factor of $N_{\text{RF}}$.\nThus, the received signal at the output of the $j$-th RF chain at the UE side is given by\n\begin{align}\label{eq:j_out}\ny_{s,j}(t)&=\!\frac{1}{\sqrt{N_{\text{RF}}}} {\bf v}_{s,j}^{{\sf H}} {\bf r}_s(t)+z_{s,j}(t) =\!\!\sum_{i=1}^{M_{\text{RF}}}\sum_{l=1}^L\!\!\sqrt{{P_{{\mathtt d} {\mathtt i} {\mathtt m}}}}{\bf v}_{s,j}^{{\sf H}}{\sf H}_{s,l}(t)x_{s,i}(t\!-\!\tau_l){\bf u}_{s,i}\!+\!z_{s,j}(t),\n\end{align}\nwhere ${P_{{\mathtt d} {\mathtt i} {\mathtt m}}} = \frac{{P_{{\mathtt t} {\mathtt o} {\mathtt t}}} T_c}{M_{\text{RF}}N_{\text{RF}}}$ accounts for the power division among the multiple RF chains at both sides, ${{\bf v}_{s,j} 
\\in {\\mathbb C}^N}$ denotes the normalized beamforming vector of the $j$-th RF chain at the UE side with $\\|{\\bf v}_{s,j}\\|=1$, $z_{s,j}(t)$ is the continuous-time complex {\\em Additive White Gaussian Noise} (AWGN) at the output of the $j$-th RF chain, with a {\\em Power Spectral Density} (PSD) of $N_0$ Watt\/Hz. The noise at the receiver is mainly introduced by the RF chain electronics, e.g., filter, mixer, and A\/D conversion. The factor $\\frac{1}{\\sqrt{N_{\\text{RF}}}}$ in \\eqref{eq:j_out} takes into account the power split said above, assuming that this only applies to the useful signal and not the thermal noise. Therefore, this received signal model is a conservative worst-case assumption.\n\nWe adopt a simplified continuous linear Doppler phase shift model in ${\\sf H}_{s,l}(t)$, given by\n\\begin{align}\\label{doppler_model}\n{\\sf H}_{s,l}(t)|_{t\\in [nT_c,(n+1)T_c)}\\approx\\rho_{s,l} e^{j2\\pi (\\check{\\nu}_{s,l}+\\nu_{l}nT_c)} {\\bf a}_{\\text{R}}(\\phi_l) {\\bf a}_{\\text{T}}(\\theta_l)^{{\\sf H}}={\\sf H}_{s,l}e^{j2\\pi \\nu_{l}nT_c},\\quad n\\in[N_c]\n\\end{align}\nwhere ${\\sf H}_{s,l}:=\\rho_{s,l} e^{j2\\pi\\check{\\nu}_{s,l}} {\\bf a}_{\\text{R}}(\\phi_l) {\\bf a}_{\\text{T}}(\\theta_l)^{{\\sf H}}$, and where $\\check{\\nu}_{s,l}$ represents an arbitrary initial value which changes i.i.d randomly over different beacon slots.\nAs a result, the product term ${\\sf H}_{s,l}(t)x_{s,i}(t-\\tau_l)$ in \\eqref{eq:j_out} can be written as\n\\begin{align}\\label{x_l}\n{\\sf H}_{s,l}(t)x_{s,i}(t\\!-\\!\\tau_l)&={\\sf H}_{s,l}\\!\\sum_{n=1}^{N_c}\\varrho_{n,i}p_{r}(t\\!-\\!nT_c\\!-\\!\\tau_l)e^{j2\\pi \\nu_{l}nT_c}:={\\sf H}_{s,l} x_{s,i}^l(t-\\tau_l),\n\\end{align}\nwhere $x_{s,i}^l(t)$ is given by\n\\begin{align}\\label{x_l_rotate}\nx_{s,i}^l(t)=\\sum_{n=1}^{N_c}\\varrho_{n,i}p_{r}(t-nT_c)e^{j2\\pi \\nu_{l}nT_c}.\n\\end{align}\nWe can interpret $x_{s,i}^l(t)$ as a rotated version of the original transmitted PN sequence $x_{s,i}(t)$, where the 
$n$-th chip of the original signal $x_{s,i}(t)$ is rotated by a small Doppler shift $e^{j2\pi \nu_{l}nT_c}$.\nSubstituting \eqref{x_l} into \eqref{eq:j_out}, we can write the received signal $y_{s,j}(t)$ as\n\begin{align}\label{eq:j_out_simp}\ny_{s,j}(t)\!=\!\sum_{i=1}^{M_{\text{RF}}}\sum_{l=1}^L\!\!\sqrt{{P_{{\mathtt d} {\mathtt i} {\mathtt m}}}}{\bf v}_{s,j}^{{\sf H}}{\sf H}_{s,l} x_{s,i}^l(t\!-\!\tau_l){\bf u}_{s,i}\!+\!z_{s,j}(t).\n\end{align}\n\n\nSince the PN sequences assigned to the $M_{\text{RF}}$ RF chains are mutually (roughly) orthogonal, the $M_{\text{RF}}$ pilot streams transmitted from the BS side can be approximately separated at the UE by correlating the received signal with the desired matched filter $x_{s,i}^*(-t)=\sum_{n=1}^{N_c}\varrho_{n,i}p_{r}^*(-t+nT_c)$. Consequently, the $i$-th BS pilot stream received through the $j$-th RF chain at the UE is given by\n\begin{align}\label{eq:j_outFrom_i}\ny_{s,i,j}(t)&=\int y_{s,j}(\tau)x_{s,i}^*(\tau-t)d\tau=\sum_{l=1}^L\sum_{i'=1}^{M_{\text{RF}}}\!\!\sqrt{{P_{{\mathtt d} {\mathtt i} {\mathtt m}}}}{\bf v}_{s,j}^{{\sf H}}{\sf H}_{s,l} R_{i',i}^{x^l}(t\!-\!\tau_l){\bf u}_{s,i}\!+\!z_{s,j}^c(t)\nonumber\\\n&\overset{(a)}{\approx} \sum_{l=1}^L\sqrt{{P_{{\mathtt d} {\mathtt i} {\mathtt m}}}}{\bf v}_{s,j}^{{\sf H}}{\sf H}_{s,l} R_{i,i}^{x^l}(t\!-\!\tau_l){\bf u}_{s,i}\!+\!z_{s,j}^c(t),\n\end{align} \nwhere, for all $i,i'\in[M_{\text{RF}}]$, $R_{i',i}^{x^l}(t):=\int x_{s,i'}^l(\tau)x_{s,i}^*(\tau-t)d\tau$ represents the correlation between the Doppler-rotated sequence $x_{s,i'}^l(t)$ given by \eqref{x_l_rotate} and the desired matched filter $x_{s,i}^*(-t)$, and $z_{s,j}^c(t) = \int z_{s,j}(\tau)x_{s,i}^*(\tau-t)d\tau$ denotes the noise at the output of the matched filter. 
The approximation $(a)$ in \eqref{eq:j_outFrom_i} follows from the fact that the cross-correlations between different PN sequences are nearly zero, i.e., $R_{i'\!,i}^x(t)=\int x_{s,i'}(\tau)x_{s,i}^*(\tau-t)d\tau\approx 0$ for $i'\neq i$. Also note that the phase rotation of each chip in $x_{s,i}^l(t)$ is very small ($\nu_{l}T_c\ll 1$); hence, we can apply the analogous approximation $R_{i'\!,i}^{x^l}(t)=\int x_{s,i'}^l(\tau)x_{s,i}^*(\tau-t)d\tau\approx 0$ for $i'\neq i$. In the numerical simulations, we will consider the general case where the sequences are not necessarily orthogonal and the Doppler shift can be moderately large.\n\nSuppose now that the output signal \eqref{eq:j_outFrom_i} at the UE side is sampled at the chip rate; the resulting discrete-time signal can be written as\n\begin{align}\label{eq:j_outFrom_i_sample}\ny_{s,i,j}[k]&=y_{s,i,j}(t)|_{t=kT_c}=\!\sum_{l=1}^L\!\!\sqrt{{P_{{\mathtt d} {\mathtt i} {\mathtt m}}}}{\bf v}_{s,j}^{{\sf H}}{\sf H}_{s,l} R_{i,i}^{x^l}(k T_c\!-\!\tau_l){\bf u}_{s,i}\!+\!z_{s,j}^c[k],\n\end{align}\nwhere $k\in[\check{N}_c]$, with $\check{N}_c\geq N_c+\frac{\Delta \tau_{\max}}{T_c}$, indexes the samples, and where $\Delta \tau_{\max}=\max\{|\tau_l-\tau_{l'}|: {l,l'\in[L]}\}$ denotes the maximum delay spread of the channel.\nNote that for PN sequences, the sequence $k \mapsto |R_{i,i}^{x^l}(kT_c\!-\!\tau_l)|$ in \eqref{eq:j_outFrom_i_sample}, seen as a function of the discrete index $k$, has sharp peaks at the indices $k_l \approx \frac{\tau_l}{T_c}$ corresponding to the delays of the scatterers. Intuitively speaking, the output $y_{s,i,j}[k]$ at those indices $k_l$ yields Gaussian variables whose power is obtained by projecting the AoA-AoD-delay PSF $f_p(\phi, \theta, \tau)$ along the beamforming vectors $({\bf u}_{s,i},{\bf v}_{s,j})$ in the angular domain and along the $k_l$-th slice in the delay domain with $\tau \in [k_lT_c, (k_l+1)T_c]$. 
The slicing in the delay domain results from the fact that $|R_{i,i}^{x^l}(kT_c\!-\!\tau_l)|$ is well localized around $k_l$. We refer to \figref{1_bread} (a) for an illustration and will use this property later on in the paper to design our BA algorithm.\n\n\n\n\n\n\subsection{Sparse Beamspace Representation}\n\nIn practice, the AoA-AoDs $(\phi_l, \theta_l)$ in \eqref{ch_mod_disc_mp} can take on arbitrary values in the continuum of angles. Following the widely used approach of \cite{SayeedVirtualBeam2002}, known as the {\em beamspace representation}, we obtain a finite-dimensional representation of the channel by quantizing the antenna-domain channel response \eqref{ch_mod_disc_mp} with respect to a discrete dictionary in the AoA-AoD (angle) domain. More specifically, we consider the discrete set of AoA-AoDs\n\begin{subequations} \label{theta-phi}\n\t\begin{align}\label{gridtheta}\n\t\Phi&:=\{\check{\phi}: (1+\sin(\check{\phi}))\/2=\frac{n-1}{N}, \, n \in [N]\},\\\n\t\Theta&:=\{\check{\theta}: (1+\sin(\check{\theta}))\/2=\frac{m-1}{M}, m\in [M]\},\n\t\end{align}\n\end{subequations}\nand use the corresponding array responses ${\cal A}_{\text{R}}:=\{{\bf a}_{\text{R}}(\check{\phi}): \check{\phi} \in \Phi\}$ and ${\cal A}_{\text{T}}:=\{{\bf a}_{\text{T}}(\check{\theta}): \check{\theta} \in \Theta\}$ as a discrete dictionary to represent the channel response. 
For the ULAs considered in this paper, the dictionaries ${\cal A}_{\text{R}}$ and ${\cal A}_{\text{T}}$, after suitable normalization, yield the orthonormal bases corresponding to the {\em Discrete Fourier Transform} (DFT) matrices ${\bf F}_{N}\in {\mathbb C}^{N\times N}$ and ${\bf F}_{M}\in {\mathbb C}^{M\times M}$, respectively, with elements\n\begin{subequations}\n\t\begin{align}\n\t[{\bf F}_{N}]_{n,n'}&=\frac{1}{\sqrt{N}}e^{j2\pi (n-1)(\frac{n'-1}{N}-\frac{1}{2})}, n,n'\in[N],\\\n\t[{\bf F}_{M}]_{m,m'}&=\frac{1}{\sqrt{M}}e^{j2\pi (m-1)(\frac{m'-1}{M}-\frac{1}{2})}, m,m'\in[M].\n\t\end{align}\n\end{subequations}\nUsing these discrete quantized dictionaries, we obtain the virtual angle-domain channel representation $\check{{\sf H}}_s(t,\tau)$, given by\n\begin{align}\label{beamspacechannel}\n\check{{\sf H}}_s(t,\tau)& = {\bf F}_{N}^{{\sf H}}{\sf H}_s(t,\tau){\bf F}_{M} = \sum_{l=1}^{L}\check{{\sf H}}_{s,l}(t)\delta(\tau-\tau_l),\n\end{align}\nwhere $\check{{\sf H}}_{s,l}(t) := {\bf F}_{N}^{{\sf H}}{\sf H}_{s,l}(t){\bf F}_{M}$. We have shown in our earlier work \cite{sxsBA2017} that, as the numbers of antennas $M$ at the BS and $N$ at the UE increase, the DFT basis provides a good sparsification of the propagation channel. As a result, $\check{{\sf H}}_s(t,\tau)$ can be approximated as an $L$-sparse matrix, with $L$ non-zero elements in the locations corresponding to the AoA-AoDs of the $L$ scatterers. We may encounter a grid error in \eqref{beamspacechannel}, since the AoAs\/AoDs do not necessarily fall on the uniform grid $\Phi\times\Theta$. Nevertheless, as shown in \cite{sxsBA2017}, the grid error becomes negligible as the number of antennas (i.e., the grid resolution) increases. We will evaluate this off-grid effect in the numerical results. 
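To make the beamspace sparsification concrete, the following sketch (with illustrative array sizes) builds the DFT dictionaries defined above and verifies that a single path with on-grid AoA-AoD maps to a 1-sparse matrix in the virtual angle domain.

```python
import numpy as np

# Illustrative array sizes (assumptions, not values from the paper)
M, N = 32, 16   # BS / UE antennas

def dft_dictionary(K):
    # [F]_{k,k'} = exp(j*2*pi*k*(k'/K - 1/2)) / sqrt(K), with 0-based k, k'
    k = np.arange(K)
    return np.exp(1j * 2 * np.pi * np.outer(k, k / K - 0.5)) / np.sqrt(K)

F_M, F_N = dft_dictionary(M), dft_dictionary(N)

def array_response(K, angle):
    # ULA response with half-wavelength spacing: [a]_k = exp(j*pi*k*sin(angle))
    return np.exp(1j * np.pi * np.arange(K) * np.sin(angle))

# On-grid AoA/AoD: sin(angle) = 2*idx/K - 1 for grid index idx (0-based)
n_idx, m_idx = 5, 12
phi = np.arcsin(2 * n_idx / N - 1)
theta = np.arcsin(2 * m_idx / M - 1)

# Single-path antenna-domain channel and its beamspace image
H = np.outer(array_response(N, phi), array_response(M, theta).conj())
H_check = F_N.conj().T @ H @ F_M

# The beamspace channel is 1-sparse: all energy sits at (n_idx, m_idx)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(np.abs(H_check)),
                                              H_check.shape))
print(peak)  # -> (5, 12)
```

For off-grid angles the energy leaks into neighboring entries, which is precisely the grid error discussed above.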
\n\n\n\n\n\n\section{Proposed Beam Alignment Scheme}\label{proposedscheme}\n\n\subsection{BS Channel Probing and UE Sensing}\n\nConsider the scattering channel model in \eqref{ch_mod_disc_mp} and its virtual angle-domain representation in \eqref{beamspacechannel}. In our proposed scheme, at each beacon slot $s$, the BS probes the channel along $M_{\text{RF}}$ beamforming vectors ${\bf u}_{s,i}$, $i\in[M_{\text{RF}}]$, each of which is applied to a unique PN sequence signal $x_{s,i}(t)$. We select the beamforming vectors at the BS side according to a pre-defined pseudo-random codebook, which is a collection of the angle sets ${\cal C}_\text{T}:= \{{\cal U}_{s,i} : s \in [T], i \in [M_{\text{RF}}] \}$, where ${\cal U}_{s,i}$ denotes the angle-domain support of the beamforming vector ${\bf u}_{s,i}$, i.e., the indices of the quantized angles in the virtual angle-domain representation of ${\bf u}_{s,i}$, and where $T$ is the effective period of beam training. We assume that the beamforming vector ${\bf u}_{s,i}$ sends equal power along the directions in ${\cal U}_{s,i}$, with the number of active angles given by $|{\cal U}_{s,i}|=:\kappa_u\leq M$, which we assume to be the same for all $(s,i)$. We call $\kappa_u$ the power spreading factor or the transmit ``beamwidth''. Consequently, we obtain the beamforming vectors at the BS as ${\bf u}_{s,i} = {\bf F}_M \check{{\bf u}}_{s,i}$, where $\check{{\bf u}}_{s,i}=\frac{{\bf 1}_{{\cal U}_{s,i}}}{\sqrt{\kappa_u}}$, and where ${\bf 1}_{{\cal U}_{s,i}}$ denotes a vector with $1$ at the components in the support set ${\cal U}_{s,i}$ and $0$ elsewhere. One can simply imagine the vector $\check{{\bf u}}_{s,i}$ as a finger-shaped beam pattern in the angle-domain, as illustrated in \figref{cluster} (a). 
We assume that the angle indices in ${\\cal U}_{s,i}$ in the codebook ${\\cal C}_\\text{T}$ are a priori generated in a random manner and shared with all the UEs in the system; thus, we call ${\\cal C}_\\text{T}$ a pseudo-random codebook.\n\\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=14cm]{3_beamProbing.pdf}\n\t\\caption{{\\small {\\em (a) Illustration of the subset of AoA-AoDs at time slot $s$ probed by the $i$-th beacon stream\n\t\t\t\ttransmitted by the BS and received by the $j$-th RF chain of the UE, for $M = N = 10$. \n\t\t\t\tThe AoD subset is given by ${\\cal U}_{s,i}=\\{1,3,4,6,8,10\\}$ { (numbered counterclockwise)} with beamforming vector \n\t\t\t\t$\\check{{\\bf u}}_{s,i}=\\frac{1}{\\sqrt{6}}[1,0,1,1,0,1,0,1,0,1]^{\\sf T}$. The AoA subset is given by \n\t\t\t\t${\\cal V}_{s,j}=\\{2,4,5,7,9\\}$ { (numbered counterclockwise)} with receive beamforming vector $\\check{{\\bf v}}_{s,j}=\\frac{1}\n\t\t\t\t{\\sqrt{5}}[0,1,0,1,1,0,1,0,1,0]^{\\sf T}$.\n\t\t\t\t(b) The channel gain matrix $\\check{\\boldsymbol{\\Gamma}}$ (with two strong MPCs indicated by the dark spots) \n\t\t\t\tmeasured along ${\\cal V}_{s,j} \\times {\\cal U}_{s,i}$}}.}\n\t\\label{cluster}\n\\end{figure*}\nAt the UE side, each UE can locally customize its own receive beamforming codebook defined as ${\\cal C}_\\text{R}:= \\{ {\\cal V}_{s,j} : s \\in [T], j \\in [N_{\\text{RF}}]\\}$, where ${\\cal V}_{s,j}$, with $|{\\cal V}_{s,j}| = \\kappa_v \\leq N$ for all $(s,j)$, is the angle-domain support, defining the directions from which the receive beam patterns collect the signal power. We define the beamforming vectors at the UE side by ${\\bf v}_{s,j} = {\\bf F}_N \\check{{\\bf v}}_{s,j}$, where $\\check{{\\bf v}}_{s,j}=\\frac{{\\bf 1}_{{\\cal V}_{s,j}}}{\\sqrt{\\kappa_v}}$ again defines the finger-shaped beam patterns as shown in \\figref{cluster} (a). 
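To make the codebook construction concrete, here is a small sketch (our illustration; the array size and spreading factor are arbitrary) of drawing a random angle-domain support and forming the corresponding unit-norm, finger-shaped beamforming vector:

```python
import numpy as np

rng = np.random.default_rng(1)
M, kappa_u = 10, 6  # illustrative array size and power spreading factor

def finger_beam(M, kappa, rng):
    # Draw a random angle-domain support of size kappa and put equal
    # amplitude 1/sqrt(kappa) on each active angle (equal power split).
    support = np.sort(rng.choice(M, size=kappa, replace=False))
    u_check = np.zeros(M)
    u_check[support] = 1.0 / np.sqrt(kappa)
    return support, u_check

support, u_check = finger_beam(M, kappa_u, rng)
# The transmitted beam is u = F_M @ u_check; since F_M is unitary,
# the unit norm of u_check carries over to u (power is spread, not amplified).
print(support, np.linalg.norm(u_check))
```

Regenerating such supports for each $(s,i)$ from a shared seed is one simple way to realize the "a priori generated and shared" pseudo-random codebook.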
Similar to the power spreading factor $\\kappa_u$ at the BS, the parameter $\\kappa_v$ controls the spread of the sensing beam patterns at the UE.\n\nNote that in our proposed scheme, the UEs collect their measurements independently and simultaneously, without any coordination with each other. Therefore, the proposed scheme is quite scalable for multiuser scenarios, since the overhead of training all the UEs does not increase with the number of UEs. This is obviously superior to traditional multi-level BA schemes, where the training overhead grows proportionally with the number of UEs.\n\n\n\n\n\n\\subsection{UE Measurement Sparse Formulation}\n\nDuring the $s$-th beacon slot, the UE applies the receive beamforming vector ${\\bf v}_{s,j}$ to its $j$-th RF chain. Assuming that the probing PN signals $x_{s,i}(t)$ are approximately orthogonal in the time domain as before, each RF chain at the UE side can almost perfectly separate the transmitted $M_{\\text{RF}}$ pilot streams. Thus, using the virtual channel representation in \\eqref{beamspacechannel}, we can write \\eqref{eq:j_outFrom_i_sample} as \n\\begin{align}\\label{eq:j_outFrom_i_sample_beamspace}\ny_{s,i,j}[k]\\!=\\!\\sum_{l=1}^L\\!\\!\\sqrt{{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}}\\check{{\\bf v}}_{s,j}^{{\\sf H}}\\check{{\\sf H}}_{s,l}R_{i,i}^{x^l}(kT_c\\!-\\!\\tau_l)\\check{{\\bf u}}_{s,i}\\!+\\!z_{s,j}^c[k],\n\\end{align}\nwhere $\\check{{\\bf u}}_{s,i}={\\bf F}_M^{{\\sf H}} {\\bf u}_{s,i}$ and $\\check{{\\bf v}}_{s,j}= {\\bf F}_N^{{\\sf H}} {{\\bf v}}_{s,j}$ are the beamforming vectors in the virtual beam domain. 
Here, we used the unitary property of the DFT matrices, i.e., ${\\bf F}_M^{{\\sf H}}{\\bf F}_M={\\bf I}_{M}$ and ${\\bf F}_N^{{\\sf H}}{\\bf F}_N={\\bf I}_{N}$, where ${\\bf I}_{M}$ and ${\\bf I}_{N}$ are identity matrices of dimension $M$ and $N$, respectively.\n\nTo formulate the sparse estimation problem, we define $\\check{{\\bf h}}_{s,l} = 1\/\\sqrt{N_{\\text{RF}}}\\cdot\\vec{(\\check{{\\sf H}}_{s,l})}$, $l\\in[L]$, as the channel vectors corresponding to the $L$ paths contained in the whole propagation channel, and collect them into the channel matrix $\\check{{\\bf H}}_s=[\\check{{\\bf h}}_{s,1},\\,\\cdots\\, ,\\check{{\\bf h}}_{s,L}]$, where $\\vec(\\cdot)$ denotes the vectorization operator. We also define the vector ${\\bf c}^{i}_k=[R_{i,i}^{x^1}(kT_c-\\tau_1),\\,\\cdots\\,,R_{i,i}^{x^L}(kT_c-\\tau_L)]^{\\sf T}\\cdot \\sqrt{{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}}$, which can be regarded as the \\textit{Power Delay Profile} (PDP) of the $i$-th pilot stream transmitted along the $L$ paths and sampled at the $k$-th delay tap $k T_c$. 
Consequently, we can express the received beacon signal \\eqref{eq:j_outFrom_i_sample_beamspace} at the UE as\n\\begin{align}\\label{eq:j_out_beamspace}\ny_{s,i,j}[k]&=\\sum_{l=1}^L\\!\\!\\sqrt{{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}}\\check{{\\bf v}}_{s,j}^{{\\sf H}}\\check{{\\sf H}}_{s,l}R_{i,i}^{x^l}(kT_c\\!-\\!\\tau_l)\\check{{\\bf u}}_{s,i}\\!+\\!z_{s,j}^c[k]=(\\check{{\\bf u}}_{s,i}\\otimes \\check{{\\bf v}}^*_{s,j})^{\\sf T} \\check{{\\bf H}}_s {\\bf c}^{i}_k+z_{s,j}^c[k]\\nonumber\\\\\n&={\\bf g}_{s,i,j}^{\\sf T} \\check{{\\bf H}}_s {\\bf c}^{i}_k+z_{s,j}^c[k],\n\\end{align}\nwhere we used the well-known identity $\\vec({\\bf A} {\\bf B} {\\bf C})=({\\bf C}^{\\sf T} \\otimes {\\bf A}) \\vec({\\bf B})$, and where ${\\bf g}_{s,i,j}:=\\check{{\\bf u}}_{s,i}\\otimes \\check{{\\bf v}}^*_{s,j}$ denotes the combined angle-domain beamforming vector corresponding to the $i$-th RF chain at the BS and the $j$-th RF chain at the UE. \n\nNote that, for the high data rates at mmWaves (e.g., the chip rate used in the IEEE 802.11ad preamble is $1760$ MHz \\cite{Time2017}), it is impractical to use different beamforming vectors for consecutive symbols within the same beacon slot, since this would involve a very fast switching of the analog RF beamforming network, which is either impossible or too power-consuming to implement.\nInstead, we assume that each beacon slot consists of $S$ symbols, during which the combined beamforming vector ${\\bf g}_{s,i,j}$ remains constant whereas $\\check{{\\bf H}}_s$ changes only because of the Doppler shifts $\\nu_{l}$. Over different beacon slots, in contrast, we assume that the beamforming vector ${\\bf g}_{s,i,j}$ changes periodically according to the\npre-defined pseudo-random beamforming codebook ${\\cal U}_{s,i} \\times {\\cal V}_{s,j}$. With a slight abuse of notation, we index the received symbols belonging to the $(s+1)$-th beacon slot as $sS+s'$, $s'\\in[S]$. 
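The Kronecker identity used above is easy to verify numerically. A standalone check (ours; dimensions are arbitrary, and vectorization is column-stacking, which is the convention under which the identity holds):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5)) + 1j * rng.standard_normal((4, 5))
C = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))

vec = lambda X: X.flatten(order='F')  # column-stacking vectorization

lhs = vec(A @ B @ C)                  # vec(ABC)
rhs = np.kron(C.T, A) @ vec(B)        # (C^T ⊗ A) vec(B)
print(np.allclose(lhs, rhs))          # True
```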
It follows that the received signal through the $i$-th RF chain at the BS and the $j$-th RF chain at the UE after matched filtering (refer to \\eqref{eq:j_out_beamspace}) can be written as\n \\begin{align}\\label{eq:j_out_beamspaceT1T2}\n y_{sS+s',i,j}[k]={\\bf g}_{s,i,j}^{\\sf T} \\check{{\\bf H}}_{sS+s'} {\\bf c}^{i}_k+z_{sS+s',j}^c[k].\n \\end{align} \n\nTo ensure that the proposed BA scheme is highly robust to the rapid channel variations \\cite{Rappaport2017lowrank}, we focus on the second-order statistics of the channel coefficients. More specifically, we accumulate the energy at the output of the matched filter across all the $\\check{N}_c$ delay taps, given by\n\\begin{align}\\label{q_check}\n&\\check{q}_{sS+s',i,j}=\\sum_{k=1}^{\\check{N}_c}|y_{sS+s',i,j}[k]|^2\\nonumber\\\\\n&={\\bf g}_{s,i,j}^{\\sf T}\\!\\left(\\!\\sum_{l=1}^{L}\\check{{\\bf h}}_{sS+s',l}\\check{{\\bf h}}_{sS+s',l}^{{\\sf H}}\\!\\sum_{k=1}^{\\check{N}_c}{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}|R_{i,i}^{x^l}(kT_c\\!-\\!\\tau_l)|^2\\!\\right)\\!{\\bf g}_{s,i,j}\\nonumber\\\\\n&+\\sum_{k=1}^{\\check{N}_c}|z_{sS+s',j}^c[k]|^2+\\sum_{k=1}^{\\check{N}_c}\\xi_{sS+s',i,j}^{h}+\\sum_{k=1}^{\\check{N}_c}\\xi_{sS+s',i,j}^{z},\n\\end{align}\nwhere the first two terms represent the signal and the noise contributions, respectively. Note that, in the signal part of \\eqref{q_check}, only the $L$ out of $\\check{N}_c$ slices that correspond to the $L$ scatterers in the delay domain contain signal power, whereas all the other slices are approximately zero due to the low cross-correlation property of PN sequences. 
Moreover, the remaining two terms in \\eqref{q_check} are given by\n\\begin{align}\\label{crossH}\n\\xi_{sS+s',i,j}^{h} &=\\sum_{l\\neq l'}^L {P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}\\cdot {\\bf g}_{s,i,j}^{\\sf T}\\check{{\\bf h}}_{sS+s',l}\\check{{\\bf h}}_{sS+s',l'}^{{\\sf H}} R_{i,i}^{x^l}(kT_c\\!-\\!\\tau_l) R_{i,i}^{x^{l'}}(kT_c\\!-\\!\\tau_{l'})^{*}{\\bf g}_{s,i,j},\n\\end{align}\n\\begin{align}\\label{crosshZ}\n\\xi_{sS+s',i,j}^z=2\\field{R}\\left \\{ {\\bf g}_{s,i,j}^{{\\sf H}} \\check{{\\bf H}}_{sS+s'} {\\bf c}^{i}_k \\cdot z_{sS+s',j}^c[k]^{{\\sf H}} \\right \\},\n\\end{align}\nrespectively, where \\eqref{crossH} denotes the cross products of the channel vectors corresponding to different paths, and \\eqref{crosshZ} denotes the cross product between the channel vectors and the noise.\n\nTo obtain a more reliable statistical measurement, we take an average of \\eqref{q_check} over the $S$ sequences within each beacon slot. Note that the channel coefficients corresponding to different paths are independent; moreover, the channel coefficients and the noise are independent of each other. Consequently, the cross terms \\eqref{crossH} and \\eqref{crosshZ} have zero mean. Thus, when the number of symbols $S$ (over which the instantaneous energy $\\check{q}_{sS+s',i,j}$ is averaged) is large, these cross-terms contribute negligibly to \\eqref{q_check} and can be treated as a small residual term. 
As a result, we obtain the following approximation in each beacon slot $s$, given by\n\\begin{align}\\label{q_nocheck}\nq_{s,i,j}&=\\frac{1}{S}\\sum_{s'=1}^{S} \\check{q}_{sS+s',i,j}\\nonumber\\\\\n&=\\frac{{\\bf g}_{s,i,j}^{\\sf T}}{S}\\!\\!\\sum_{s'=1}^{S}\\!\\!\\left(\\!\\sum_{l=1}^{L}\\check{{\\bf h}}_{sS+s'\\!,l}\\check{{\\bf h}}_{sS+s'\\!,l}^{{\\sf H}}\\!\\sum_{k=1}^{\\check{N}_c}\\!\\!{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}|R_{i,i}^{x^l}(kT_c\\!-\\!\\tau_l)|^2\\!\\right)\\!\\!{\\bf g}_{s,i,j}\\nonumber\\\\\n&+\\frac{1}{S}\\sum_{s'=1}^{S}\\left(\\sum_{k=1}^{\\check{N}_c}|z_{sS+s',j}^c[k]|^2\\right)+w_{s,i,j},\n\\end{align}\nwhere $w_{s,i,j}$ represents the residual fluctuation obtained from the averaged cross-terms \\eqref{crossH} and \\eqref{crosshZ}. \n\n\n\n\nAs we explained in Section \\ref{signal_procedure}, neglecting the effect of the noise, the output $y_{s,i,j}[k]$ is a Gaussian variable whose power is obtained by projecting the AoA-AoD-Delay PSF $f_p(\\phi,\\theta , \\tau)$ along the beamforming vectors ${\\bf u}_{s,i}$ and ${\\bf v}_{s,j}$ in the angular domain (due to ${\\bf g}_{s,i,j}:=\\check{{\\bf u}}_{s,i}\\otimes \\check{{\\bf v}}^*_{s,j}$) and along the $k$-th slice corresponding to $\\tau \\in [kT_c, (k+1)T_c]$ in the delay domain. The slicing in the delay domain results from the fact that the correlation function $|R_{i,i}^{x^l}(t)|$ between the Doppler-rotated sequence $x_{s,i}^l(t)$ given by \\eqref{x_l_rotate} and the desired matched filter $x_{s,i}^*(-t)$ is well localized around $t=0$; we refer to \\figref{1_bread} (a) for an illustration. 
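The effect of the averaging in \eqref{q_nocheck} can be seen in a toy model (our sketch; the scalar powers, the three-element beam vector, and the i.i.d. snapshot assumption are purely illustrative): the averaged energy converges to the beamformed signal power plus the noise power, while the zero-mean cross-terms are suppressed roughly as $1/\sqrt{S}$:

```python
import numpy as np

rng = np.random.default_rng(3)
S = 20000                                     # snapshots averaged per beacon slot (toy value)
g = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)    # toy combined beamforming vector
sigma_h, N0 = 2.0, 0.5                        # per-entry channel power and noise power (toy values)

# i.i.d. complex Gaussian channel snapshots and noise; y = g^T h + z per snapshot
h = (rng.standard_normal((S, 3)) + 1j * rng.standard_normal((S, 3))) * np.sqrt(sigma_h / 2)
z = (rng.standard_normal(S) + 1j * rng.standard_normal(S)) * np.sqrt(N0 / 2)
y = h @ g + z

q = np.mean(np.abs(y)**2)                 # averaged energy, as in the text
expected = sigma_h * np.sum(g**2) + N0    # signal power through g plus noise power
print(q, expected)                        # the two values are close for large S
```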
Consequently, the summation of the instantaneous powers of $y_{sS+s',i,j}[k]$ along all the delay taps $k \\in [N_c]$ yields an estimate of the PSF in the AoA-AoD domain as in \\eqref{gam_th_th}.\nThis is illustrated in Fig.\\,\\ref{cluster} (b), where it is seen that each projection computes the sum of the AoA-AoD PSF over the grid points lying at the intersection of the probing directions ${\\cal U}_{s,i}$ at the BS and ${\\cal V}_{s,j}$ at the UE. In the following, we provide a more rigorous formulation of this property.\n\n\n\nWithout loss of generality, we assume that the energy contained in each PN sequence is constant, given by $R^x(0)=R_{i,i}^x(0)=N_c$, $\\forall i\\in[M_{\\text{RF}}]$. Assuming, for simplicity, that the Doppler phase rotation for each chip is very small ($\\nu_{l}T_c\\ll 1$), we make the following approximation\n\\begin{align}\\label{R_app}\n\t|R_{i,i}^{x^l}(t)|&\\leq |R_{i,i}^{x^l}(0)|\\approx R^x(0), \\forall i\\in[M_{\\text{RF}}],\n\\end{align} \nwhere we assume that, due to the large bandwidth for each chip $B'=1\/T_c$, the matched-filtering loss caused by the Doppler shift is negligible (we take into account all the imperfections due to the Doppler and also non-orthogonal PN sequences in the simulations).\n\nLet $\\boldsymbol{\\Gamma}$ denote the $N\\times M$ matrix with non-negative elements corresponding to the angle-domain second-order statistics of the channel coefficients, given by\n\\begin{align}\\label{averageGamma}\n[\\boldsymbol{\\Gamma}]_{n,m} = \\sum_{l=1}^L{\\mathbb E}\\left[|[\\check{{\\sf H}}_{sS+s',l}]_{n,m}|^2\\right]\\cdot \\frac{{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}}{\\kappa_u\\kappa_v }\\cdot |R^{x^l}(0)|^2.\n\\end{align} \nAlso, from the well-known convergence of time averages to statistical averages \\cite{Gubnerbook}, for large $S$ we have\n\\begin{align}\\label{averageNoise}\n\\frac{1}{S}\\sum_{s'=1}^{S}|z_{sS+s',j}^c[k]|^2\\to{\\mathbb E}[|z_{sS+s',j}^c[k]|^2]=N_0R^x(0).\n\\end{align} \nConsequently, we can approximate 
\\eqref{q_nocheck} by\n\\begin{align}\\label{q_nocheck3}\nq_{s,i,j} = {\\bf b}_{s,i,j}^{\\sf T} \\vec{(\\boldsymbol{\\Gamma})} + \\check{N}_cN_0R^x(0) + w_{s,i,j},\n\\end{align}\nwhere ${\\bf b}_{s,i,j} = {\\bf g}_{s,i,j}\\sqrt{\\kappa_u\\kappa_v}\\in\\{0,1\\}^{MN}$ denotes the binary vector corresponding to the combined probing window with the angle support ${\\cal U}_{s,i} \\times {\\cal V}_{s,j}$ from the beamforming codebook, which contains $1$ at the probed AoA-AoD components and $0$ elsewhere. An example of the probing geometry is illustrated in \\figref{cluster} (b). Here we implicitly assume that the differences between the empirical and the statistical averages in \\eqref{averageGamma} and \\eqref{averageNoise} are also absorbed into the residual term $w_{s,i,j}$.\n\n\n\nFollowing this procedure, over $T$ beacon slots, the UE obtains a total of $M_{\\text{RF}}N_{\\text{RF}}T$ equations, which can be written in the form\n\\begin{align}\\label{UE_equations}\n{\\bf q}={\\bf B} \\cdot\\vec{(\\boldsymbol{\\Gamma})} + \\check{N}_cN_0R^x(0)\\cdot {\\bf 1} + {\\bf w},\n\\end{align}\nwhere the vector ${{\\bf q}=[q_{1,1,1}, \\dots q_{1,M_{\\text{RF}},N_{\\text{RF}}}, \\dots, q_{T,M_{\\text{RF}},N_{\\text{RF}}}]^{\\sf T}}\\in {\\mathbb R}^{M_{\\text{RF}}N_{\\text{RF}}T}$ consists of all $M_{\\text{RF}}N_{\\text{RF}}T$ measurements obtained as in (\\ref{q_nocheck3}), ${{\\bf B}=[{\\bf b}_{1,1,1}, \\dots, {\\bf b}_{1,M_{\\text{RF}},N_{\\text{RF}}}, \\dots, {\\bf b}_{T,M_{\\text{RF}},N_{\\text{RF}}}]^{\\sf T}}\\in {\\mathbb R}^{M_{\\text{RF}}N_{\\text{RF}}T\\times MN}$ is uniquely defined by the pseudo-random beamforming codebook of the BS and the local beamforming codebook of the UE, and ${\\bf w} \\in {\\mathbb R}^{M_{\\text{RF}}N_{\\text{RF}}T}$ denotes the residual fluctuations. 
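One row of ${\bf B}$ is just the indicator of the probed AoD-AoA product set. A quick sketch (ours, with arbitrary small dimensions; column-stacking is assumed so that the Kronecker ordering matches $\check{{\bf u}}_{s,i}\otimes\check{{\bf v}}^*_{s,j}$):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 8, 8               # illustrative AoD/AoA grid sizes
kappa_u, kappa_v = 3, 3   # spreading factors

def indicator(K, kappa, rng):
    # {0,1} indicator vector of a random support of size kappa
    x = np.zeros(K)
    x[rng.choice(K, size=kappa, replace=False)] = 1.0
    return x

one_U = indicator(M, kappa_u, rng)   # probed AoD directions
one_V = indicator(N, kappa_v, rng)   # probed AoA directions

# b = sqrt(ku*kv) * (u_check ⊗ v_check) reduces to the {0,1} indicator of U x V
b = np.kron(one_U, one_V)
print(int(b.sum()))   # exactly kappa_u * kappa_v probed AoA-AoD pairs
```

Stacking $M_{\text{RF}}N_{\text{RF}}T$ such rows yields the measurement matrix ${\bf B}$ of \eqref{UE_equations}.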
\n\n\nFor later use, we define the SNR in each delay tap at the output of the matched filter \\eqref{eq:j_out_beamspaceT1T2} as\n\t\\begin{align}\\label{snrPerchip}\n\t{\\mathsf{SNR}}^{y_{s,i,j}[k]}& \\!=\\! \\frac{{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}} |R^{x^l}(0)|^2\\sum_{l=1}^{L}\\frac{\\gamma_l+\\eta_l}{1+\\eta_l}\\cdot {\\bf 1}_{\\{kt_p=\\tau_l\\}}\\!\\cdot\\! MN}{{\\mathbb E}[|z_{s,j}^c[k]|^2]\\cdot \\kappa_u\\kappa_v}\\nonumber\\\\\n\t&\\overset{(a)}{\\approx} \\frac{{P_{{\\mathtt d} {\\mathtt i} {\\mathtt m}}}|R^x(0)|^2\\sum_{l=1}^{L}\\frac{\\gamma_l+\\eta_l}{1+\\eta_l}\\cdot {\\bf 1}_{\\{kt_p=\\tau_l\\}}\\!\\cdot\\! MN}{N_0R^x(0)\\cdot \\kappa_u\\kappa_v}\\nonumber\\\\\n\t&=\\frac{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} T_cN_c\\sum_{l=1}^{L}\\frac{\\gamma_l+\\eta_l}{1+\\eta_l}\\cdot {\\bf 1}_{\\{kt_p=\\tau_l\\}}\\cdot MN}{\\kappa_u\\kappa_vM_{\\text{RF}}N_{\\text{RF}}N_0}\n\t\\end{align}\nwhere ${\\bf 1}_{\\{kt_p=\\tau_l\\}}$ is the indicator function, equal to $1$ if $kt_p=\\tau_l$ and $0$ otherwise, and where in $(a)$ we applied the approximation \\eqref{R_app} by neglecting the matched-filtering loss due to the Doppler. It can be seen from \\eqref{snrPerchip} that a large PN sequence duration (i.e., a large $N_c$) implies an increase of the SNR in each sample \\eqref{eq:j_out_beamspaceT1T2}. However, in order to limit the effect of the Doppler (e.g., in $(a)$), the PN sequence duration should not be longer than the channel coherence time $\\Delta t_c$, i.e., $t_0=N_cT_c\\leq \\Delta t_c$, where $\\Delta t_c$ is in general very small at mmWaves \\cite{Rappaport2017lowrank}.\n\nNote that, in our formulation, the effective observation over each beacon slot is actually the averaged energy $q_{s,i,j}$ defined by \\eqref{q_nocheck}. 
For later analysis, we define the SNR in \\eqref{q_nocheck} (or, equivalently, in \\eqref{q_nocheck3}) as\n\\begin{align}\\label{snrSum}\n{\\mathsf{SNR}}^{q_{s,i,j}} &=\\frac{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} T_cN_c\\sum_{l=1}^{L}\\frac{\\gamma_l+\\eta_l}{1+\\eta_l}\\cdot MN}{\\check{N}_c\\kappa_u\\kappa_vM_{\\text{RF}}N_{\\text{RF}}N_0}\\overset{(a)}{\\approx} \\frac{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} T_c\\sum_{l=1}^{L}\\frac{\\gamma_l+\\eta_l}{1+\\eta_l}\\cdot MN}{\\kappa_u\\kappa_vM_{\\text{RF}}N_{\\text{RF}}N_0}\n\\end{align}\nwhere $(a)$ follows from the fact that $\\check{N}_c\\approx N_c$: when $N_c$ is large enough (more precisely, when $t_0 \\gg \\Delta \\tau_{\\max}$), the relative difference between $\\check{N}_c$ (which satisfies $\\check{N}_c\\geq N_c+\\frac{\\Delta \\tau_{\\max}}{T_c}$) and $N_c$ becomes negligible, since the delay spread in mmWave channels is small \\cite{Rappaport2014Capacity,Sayeed2014Sparse}.\n\nTo effectively capture the channel quality before beamforming (before BA), we also define the SNR before beamforming (BBF) by\n\\begin{align}\\label{snrBBF}\n{\\mathsf{SNR}_\\text{BBF}} =\\frac{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} \\sum_{l=1}^{L}\\frac{\\gamma_l+\\eta_l}{1+\\eta_l}}{N_0B}.\n\\end{align}\nThis is the SNR obtained when a single pilot stream ($M_{\\text{RF}} = 1$) is transmitted through a single BS antenna and received at a single UE antenna (isotropic transmission) via a single RF chain ($N_{\\text{RF}} = 1$) over the whole bandwidth $B$. As mentioned before, one of the challenges of BA and, in general, of communication at mmWaves is that the SNR before beamforming ${\\mathsf{SNR}_\\text{BBF}}$ in \\eqref{snrBBF} is typically very low.\n\n\n\n\n\n\\subsection{Path Strength Estimation via Non-Negative Least Squares}\n\nWe assume that the PSD $N_0$ of the AWGN channel is known at each UE \\cite{SaeidBA2016}. 
In order to identify the AoA-AoD directions of the strongest scatterers, the UE needs to estimate the $MN$-dim vector $\\vec{(\\boldsymbol{\\Gamma})}$ from the $M_{\\text{RF}}N_{\\text{RF}}T$-dim observation \\eqref{UE_equations} in the presence of the measurement noise ${\\bf w}$, where, in general, $MN$ is significantly larger than $M_{\\text{RF}}N_{\\text{RF}}T$. A great variety of algorithms can be used to solve \\eqref{UE_equations}. The key observation here is that $\\boldsymbol{\\Gamma}$ is sparse (by the sparse nature of mmWave channels) and non-negative (by the second-order-statistics construction of our scheme). As discussed in our previous work \\cite{sxsBA2017}, recent results in CS show that, when the underlying parameter $\\boldsymbol{\\Gamma}$ is non-negative, the simple non-negative constrained {\\em Least Squares} (LS) given by\n\\begin{align}\\label{eq:NNLS}\n\\boldsymbol{\\Gamma}^\\star=\\mathop{\\rm arg\\, min}_{\\boldsymbol{\\Gamma} \\in {\\mathbb R}_+^{N\\times M}} \\|{\\bf B} \\cdot\\vec{(\\boldsymbol{\\Gamma})} + \\check{N}_cN_0R^x(0)\\cdot {\\bf 1} - {\\bf q}\\|^2, \n\\end{align}\nis sufficient to impose the sparsity of the solution $\\boldsymbol{\\Gamma}^\\star$ \\cite{slawski2013non,peter2018nnls}, without any need for an explicit\nsparsity-promoting regularization term in the objective function as in the classical LASSO algorithm \\cite{tibshirani1996regression}.\nThe (convex) optimization problem \\eqref{eq:NNLS} is generally referred to as {\\em Non-Negative Least Squares} (NNLS),\nand has been well investigated in the literature. As discussed in \\cite{slawski2013non}, NNLS implicitly performs $\\ell_1$-regularization and promotes the sparsity of the resulting solution provided that the measurement matrix ${\\bf B}$ satisfies the $\\mathcal{M}^+$-criterion \\cite{peter2018nnls}, i.e., there exists a vector ${\\bf d} \\in {\\mathbb R}_+^{M_{\\text{RF}}N_{\\text{RF}}T}$ such\nthat ${\\bf B}^{\\sf T} {\\bf d} >0$. 
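As a toy illustration of this behavior (our sketch, not the paper's MATLAB simulation; the dimensions, the Bernoulli measurement matrix, and the noise level are arbitrary), NNLS via SciPy recovers a sparse non-negative vector from binary random projections without any explicit regularizer:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_grid, n_meas = 64, 40   # grid size (playing the role of MN) and number of measurements

# Random binary measurement matrix: each row probes a random subset of grid points.
B = (rng.random((n_meas, n_grid)) < 0.4).astype(float)
# With d = all-ones, B^T d > 0 iff every grid point is probed at least once (M+ criterion)
assert (B.sum(axis=0) > 0).all()

# Sparse non-negative ground truth (playing the role of vec(Gamma))
gamma = np.zeros(n_grid)
support = rng.choice(n_grid, size=2, replace=False)
gamma[support] = [3.0, 1.5]

q = B @ gamma + 0.01 * rng.standard_normal(n_meas)   # small residual fluctuation
gamma_hat, _ = nnls(B, q)                            # non-negative least squares

print(np.count_nonzero(gamma_hat > 0.1))   # the solution is sparse
```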
In our case, this criterion can be simply interpreted as the requirement that the set of $M_{\\text{RF}}N_{\\text{RF}}T$ measurement beam patterns should hit all the $MN$ AoA-AoD pairs at least once, which is essentially always satisfied in our scheme because of the finger-shaped beam patterns in each beacon slot and the pseudo-random property of the designed beamforming codebook.\n\nIn terms of numerical implementation, the NNLS can be posed as an unconstrained LS problem over the positive orthant\nand can be solved by several efficient techniques such as Gradient Projection, Primal-Dual techniques, etc., with an affordable computational complexity \\cite{bertsekas2015convex}, generally significantly lower than that of CS algorithms for problems of the same size and sparsity level. We refer to \\cite{kim2010tackling, nguyen2015anti} for the recent progress on the numerical solution of\nNNLS and a discussion of other related work in the literature.\n\n\n\n\\section{Performance Evaluation}\\label{results}\nWe consider a system with $M=32$ antennas and $M_{\\text{RF}}=3$ RF chains at the BS, and $N=32$ antennas and $N_{\\text{RF}}=2$ RF chains at a generic UE. We assume the short preamble structure used in IEEE 802.11ad \\cite{Time2017,ParameterPerahia2010}, where the beacon slot is of duration $t_0S=1.891\\,\\mu$s. The system is assumed to work at $f_0=70$ GHz and has a maximum available bandwidth of $B=1.76$ GHz, so that each beacon slot amounts to more than $3200$ chips as in \\cite{Time2017,AlkhateebTimeDomain2017}. We assume the channel contains $L=3$ links given by $(\\gamma_1=1,\\eta_1=100)$, $(\\gamma_2=0.6,\\eta_2=10)$ and $(\\gamma_3=0.6,\\eta_3=0)$, where $\\gamma_l$ denotes the scatterer strength and $\\eta_l$ indicates the strength ratio between the LOS and the NLOS propagation as in \\eqref{rice_fading}. Thus, the first scatterer can be roughly regarded as the LOS path, while the remaining scatterers represent the NLOS paths corresponding to two off-grid clusters. 
This is consistent with the practical mmWave MIMO channel measurements in \\cite{Tim2018}, where the relative power level of the NLOS paths is around $10$ dB lower than that of the desired LOS path. We assume that the relative speed $\\Delta v_{l}$ for each path is in the range $0\\sim8$ m$\/$s. We announce a success if the location of the strongest component in $\\boldsymbol{\\Gamma}^\\star$ (see \\eqref{eq:NNLS}) coincides with the LOS path \\footnote{In the case that there is no LOS link, one can announce a success if the location of the strongest component in $\\boldsymbol{\\Gamma}^\\star$ coincides with the central AoA-AoD of the strongest scatterer cluster.}. \n\n\\begin{figure}[t]\n\t\\centerline{\\includegraphics[width=8cm]{4_changeKvKu.pdf}}\n\\caption{Detection probability $P_D$ of the proposed time-domain scheme with respect to different power spreading factors ($\\kappa_u$, $\\kappa_v$), where $M=N=32$, $M_{\\text{RF}}=3$, $N_{\\text{RF}}=2$, $B'=B$, $N_c=64$, ${\\mathsf{SNR}_\\text{BBF}}=-14$ dB, the relative speed of the strongest path $\\Delta v_{l^\\star}=5$ m$\/$s.}\n\\label{changeKvKu}\n\\end{figure}\n\nIn the following simulations \\footnote{We will use \\texttt{lsqnonneg.m} in {MATLAB\\textcopyright\\,} to solve the NNLS optimization problem in \\eqref{eq:NNLS}.}, we evaluate the performance of our time-domain BA scheme according to two criteria: \ni) We study the effect of various system parameters on the achieved BA probability. \nWe also show the superiority of our proposed scheme in comparison with other recently proposed time-domain BA schemes \\cite{AlkhateebTimeDomain2017,Time2017}; ii) We consider the effectiveness of the BA scheme in the context of \nsingle-carrier modulation. This is obtained by computing upper and lower bounds on the ergodic achievable rate for the effective SISO \nchannel after BA. These bounds show that BA yields an essentially flat channel even in the presence of multipath components. 
Therefore, \nsingle-carrier modulation without time-domain equalization works very well. \n\n\n\n\n\\subsection{Success Probability of the Proposed BA Scheme}\n\n{\\bf Dependence on the beam spreading factors $(\\kappa_u,\\kappa_v)$}. As discussed in our previous work \\cite{sxsBA2017,sxs2017Time}, the \nspatial spreading factors $(\\kappa_u,\\kappa_v)$ impose a trade-off between the angle coverage of the measuring matrix ${\\bf B}$ and the SNR of the received \nsignal at the UE side. It can be seen from \\figref{changeKvKu} that increasing the spreading factors from $\\kappa_u=\\kappa_v=4$ to $\\kappa_u=\\kappa_v=8$ improves the performance. However, the performance degrades again when $(\\kappa_u,\\kappa_v)$ are further increased to $\\kappa_u=\\kappa_v=16,22$. \n\n\n\n\n\n\\begin{figure}[t]\n\t\\centerline{\\includegraphics[width=15cm]{5_changeNc_changeVV.pdf}}\n\t\\caption{Detection probability $P_D$ of the proposed time-domain scheme with respect to: (a) different PN sequence lengths $N_c$, where the relative speed of the strongest path $\\Delta v_{l^\\star}=5$ m$\/$s; (b) different relative speed values of the strongest path $\\Delta v_{l^\\star}$. In both cases, $M=N=32$, $M_{\\text{RF}}=3$, $N_{\\text{RF}}=2$, $\\kappa_u=\\kappa_v=8$, $B'=B$, ${\\mathsf{SNR}_\\text{BBF}}=-14$ dB.}\n\t\\label{changeNc-changeVV}\n\\end{figure}\n\n{\\bf Dependence on the PN sequence length $N_c$ and robustness to Doppler shifts}. In general, a larger PN sequence length $N_c$ provides better correlation properties, such that different pilot streams can be well separated at the UE. However, increasing $N_c$ increases the whole duration $t_0=N_c T_c$ of the transmitted signal. Thus, because of the Doppler shift, the received PN sequence undergoes a larger phase rotation of the chips. This rotation\ndegrades the PN sequence correlation property. This is illustrated in \\figref{changeNc-changeVV} (a). 
As we can see, increasing the PN sequence length $N_c$ from $N_c=16$ to $N_c=32,64$ improves the performance of the proposed scheme. However, the performance degrades slightly when $N_c$ is increased to $N_c=128, 256$. Moreover, as shown in \\figref{changeNc-changeVV} (b), the proposed scheme is highly insensitive to the Doppler spread between different multipath components. For example, when the relative speed difference between the paths \nis varied from $0$ to $8$ m$\/$s, the BA success probability remains virtually unchanged. \nThis provides a significant advantage with respect to schemes based on OFDM signaling, \nwhich are known to be fragile to uncompensated Doppler shifts yielding inter-carrier interference.\n\n\n\n\n{\\bf Comparison with other time-domain methods.}\n\\figref{nnls_omp} compares the performance of our proposed scheme with a recently proposed time-domain \napproach \\cite{Time2017,AlkhateebTimeDomain2017} based on the OMP CS technique. \nThe approach in \\cite{Time2017,AlkhateebTimeDomain2017} assumes that the channel vector coefficients \nremain constant over the whole training stage (in other words, it assumes a completely stationary situation with zero Doppler shifts).\nIt can be seen from \\figref{nnls_omp} that the proposed scheme exhibits much more robust performance with respect to the channel time-variations, whereas the approach in \\cite{Time2017,AlkhateebTimeDomain2017} fails when the channel is fast time-varying.\n\n\\begin{figure}[t]\n\t\\centerline{\\includegraphics[width=8cm]{7_compare.pdf}}\n\t\\caption{{ Comparison of the proposed scheme based on NNLS with that in \\cite{AlkhateebTimeDomain2017,Time2017} based on OMP for both slow-varying and fast-varying channels, where $M=N=32$, $M_{\\text{RF}}=3$, $N_{\\text{RF}}=2$, $\\kappa_u=\\kappa_v=8$, $B'=B$, $N_c=64$, ${\\mathsf{SNR}_\\text{BBF}}=-14$ dB}.}\n\t\\label{nnls_omp}\n\\end{figure}\n\n\n\n\\subsection{Effectiveness of Single-Carrier Modulation}\n\n Assume that after a BA 
procedure as proposed in Section \\ref{proposedscheme}, the strongest component in $\\boldsymbol{\\Gamma}^\\star$ corresponds to the $l^\\star$-th scatterer between the BS and the UE. Hence, the estimated beamforming vectors for the data transmission are given by ${\\bf u}_{l^\\star} = {\\bf F}_M \\check{{\\bf u}}_{l^\\star}$ at the BS and ${\\bf v}_{l^\\star} = {\\bf F}_N \\check{{\\bf v}}_{l^\\star}$ at the UE, respectively, where $ \\check{{\\bf u}}_{l^\\star}\\in{\\mathbb C}^M$ is an all-zero vector with a $1$ at the component corresponding to the AoD of the $l^\\star$-th scatterer, and $ \\check{{\\bf v}}_{l^\\star}\\in{\\mathbb C}^N$ is an all-zero vector with a $1$ at the component corresponding to the AoA of the $l^\\star$-th scatterer. We assume that in the downlink data transmission phase the BS and the UE employ a single RF chain. Therefore, with a slight abuse of notation, we assume that the transmitted waveform, consisting of $N_d$ information symbols, is given by $x(t)=\\sum_{n=1}^{N_d}\\sqrt{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} T_d}\\cdot d_{n} p_{r}(t-nT_d)$, where $p_{r}(t)$ denotes the normalized band-limited pulse-shaping filter (such as a raised cosine pulse), $T_d=1\/B$ is the symbol duration corresponding to signaling over the whole bandwidth $B$, and, $\\forall n\\in[N_d]$, $d_{n}\\in\\{1,-1\\}$ denotes the information symbols. From \\eqref{receiveTT} and \\eqref{eq:j_out}, the received signal after passing through the beamforming vectors $({\\bf v}_{l^\\star},{\\bf u}_{l^\\star})$ is given by\n\\begin{align}\\label{data_out}\n\\hat{y}(t)&={\\bf v}_{l^\\star}^{{\\sf H}}\\! \\int\\! {\\sf H}(t,d\\tau) x(t-\\tau){\\bf u}_{l^\\star}\\! 
+\\!z(t)\\nonumber\\\\\n&=\\!\\sum_{l=1}^L \\sum_{n=1}^{N_d}C_{l}d_{n}p_{r}(t-nT_d-\\tau_l)e^{j2\\pi (\\check{\\nu}_{l}+\\nu_{l}nT_d)}\\!+\\!z(t),\n\\end{align}\nwhere $C_{l} := \\sqrt{{P_{{\\mathtt t} {\\mathtt o} {\\mathtt t}}} T_d}\\cdot\\rho_{l}{\\bf v}_{l^\\star}^{{\\sf H}}{\\bf a}_{\\text{R}}(\\phi_l) {\\bf a}_{\\text{T}}(\\theta_l)^{{\\sf H}}{\\bf u}_{l^\\star}$. We assume that the UE applies a filter matched to $p_{r}(t)$, so that the signal at the output of the matched filter can be written as\n\\begin{align}\\label{data_filter}\ny(t)|_{t=k T_d}&=\\int \\hat{y}(\\tau)p_{r}^*(\\tau-kT_d)d\\tau\\nonumber\\\\\n&=\\!\\sum_{n=1}^{N_d}C_{l^\\star}d_{n}\\varphi\\left((k-n)T_d-\\tau_{l^\\star}\\right)\\!+\\!\\sum_{l\\neq l^\\star}\\sum_{n=1}^{N_d}C_{l}d_{n}\\varphi\\left((k-n)T_d-\\tau_l\\right)+z^c(t),\n\\end{align} \nwhere $z^c(t)$ denotes the noise at the output of the matched filter with a PSD $N_0$ (multiplied by $\\int |p_{r}(t)|^2dt=1$). We define $\\varphi(t-nT_d-\\tau_l)|_{t=kT_d} = \\int p_{r}(\\tau-nT_d-\\tau_l)p_{r}^*(\\tau-kT_d)d\\tau\\cdot e^{j2\\pi (\\check{\\nu}_{l}+\\nu_{l}nT_d)}$, $\\forall l\\in[L]$, where the amplitude of $\\varphi\\left((k-n)T_d-\\tau_l\\right)$ approximately equals $1$ when $(k-n)T_d-\\tau_l=0$. It can be seen from \\eqref{data_filter} that the first term denotes the desired signal, whereas the last two terms correspond to the inter-symbol interference and the noise, respectively. We assume, for simplicity, that the delay tap of the strongest path $l^\\star$ coincides with one of the sampling points at the UE side. 
Consequently, the ergodic achievable rate can be upper- and lower-bounded by \\cite{caire2017Bound}\n\\begin{align}\\label{ub_ergodic}\nR^{ub^\\star} ={\\mathbb E}\\left[\\log_2\\left(1+\\frac{\\sum_{l=1}^{L}|C_{l}\\varphi(\\tau_{l^\\star}-\\tau_l)|^2}{N_0}\\right)\\right],\n\\end{align} \n\\begin{align}\\label{lb_ergodic}\nR^{lb^\\star}=\\log_2\\left(1+\\frac{|{\\mathbb E}[C_{l^\\star}\\varphi(0)]|^2}{N_0+{{\\mathbb V}{\\sf a}{\\sf r}}(C_{l^\\star}\\varphi(0))+\\sum_{l\\neq l^\\star}{\\mathbb E}[|C_l\\varphi(\\tau_{l^\\star}-\\tau_l)|^2]}\\right).\n\\end{align} \n\n\n\n\nThe upper bound \\eqref{ub_ergodic} is obtained via \\textit{Maximum Ratio Combining} for the case where all the delayed versions of the transmitted signal are separately observable (this is sometimes referred to as the ``matched filter upper bound''). The lower bound is achieved\nby a simple receiver that treats all the \\textit{Inter-symbol Interference} (ISI) as Gaussian noise.\nAs already noted a few times in this paper, once BA is achieved, the effective channel angular spread is very small, since essentially only \nthe selected AoA\/AoD path collects all the signal power. As a result, the channel consists of only a single dominant delay tap with a fixed Doppler shift, \nwhich can be well compensated by applying conventional timing and frequency synchronization\nand channel estimation techniques. 
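A quick numerical illustration of these two bounds (our toy sketch; the gain statistics below are invented solely to mimic a quasi-deterministic strong path plus weak residual paths after BA, and are not the paper's channel model):

```python
import numpy as np

rng = np.random.default_rng(6)
N0 = 0.1        # noise power (toy value)
n_mc = 100000   # Monte Carlo samples

# Toy effective gains after BA: one strong, nearly deterministic path (C_star)
# plus two weak residual interfering paths (C_res)
C_star = 3.0 + 0.05 * (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc))
C_res = 0.2 * (rng.standard_normal((n_mc, 2)) + 1j * rng.standard_normal((n_mc, 2)))

# Matched-filter (MRC) upper bound: all delayed paths separately combined
snr_ub = (np.abs(C_star)**2 + np.sum(np.abs(C_res)**2, axis=1)) / N0
R_ub = np.mean(np.log2(1 + snr_ub))

# Lower bound: residual channel fluctuation and ISI treated as Gaussian noise
num = np.abs(np.mean(C_star))**2
den = N0 + np.var(C_star) + np.mean(np.sum(np.abs(C_res)**2, axis=1))
R_lb = np.log2(1 + num / den)

print(R_lb, R_ub)  # R_lb <= R_ub, with a modest gap when one path dominates
```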
Due to the Doppler compensation, the channel time-variations are significantly reduced after BA \\cite{HeathVariation2017},\nsuch that $C_{l^\\star}$ can be treated as an almost deterministic channel gain with a very large amplitude $|{\\mathbb E}[C_{l^\\star}\\varphi(0)]|$ and a very small variance ${{\\mathbb V}{\\sf a}{\\sf r}}(C_{l^\\star}\\varphi(0))$.\n\n\\begin{figure}[t]\n\t\\centerline{\\includegraphics[width=8cm]{9_bounds.pdf}}\n\t\\caption{The ergodic achievable rate after a successful {\\em Beam Alignment} using the proposed time-domain scheme, where $M=N=32$, $B'=B$, $N_c=64$, and the relative speed of the strongest path is $\\Delta v_{l^\\star}=5$ m$\/$s. }\n\t\\label{rate}\n\\end{figure}\n\n{\\bf Ergodic achievable rate bounds}.\nIn \\figref{rate}, we illustrate the upper and lower bounds \\eqref{ub_ergodic} and \\eqref{lb_ergodic} on the achievable ergodic rate as a function of ${\\mathsf{SNR}_\\text{BBF}}$.\nWhile the lower bound is interference-limited and the upper bound is not, we notice that the gap between the bounds is quite small in the regime of low pre-beamforming SNR (${\\mathsf{SNR}_\\text{BBF}} < 10$\\,dB), which is relevant in mmWave applications. At the same time, the achievable ergodic spectral efficiency in this regime can be quite high. In particular, we remark that the lower bound refers to the case of single-carrier transmission without any equalization. 
For example, focusing on a realistic spectral efficiency between 1 and 2 bit\/s\/Hz, we notice that single-carrier transmission with the proposed BA scheme and no equalization (just standard post-beamforming timing and frequency synchronization) achieves the relevant spectral efficiency in the range of ${\\mathsf{SNR}_\\text{BBF}}$ between -30 and -20 dB, and incurs only a very small gap with respect to the best possible equalization (given by the upper bound).\n\n\\begin{figure}[t]\n\t\\centerline{\\includegraphics[width=8.5cm]{8_pdp.pdf}}\n\\caption{{Illustration of the {\\em Power Delay Profile} (PDP) of the multipath ($L=3$) channel in \\eqref{eq:j_outFrom_i_sample}. $(a)$ Before {\\em Beam Alignment}. $(b)$ After {\\em Beam Alignment}}}\n\\label{DIP}\n\\end{figure}\n{\\bf Power Delay Profile (PDP) before and after Beam Alignment (BA)}. \\figref{DIP} compares the average PDP of the mmWave channel with $L=3$ multipath components before and after BA. It can be seen from \\figref{DIP} (a) that, before BA, the channel has a relatively large delay spread and is highly frequency selective. Moreover, since different multipath components are mixed with each other and since each one has its own delay and Doppler shift, the time-domain channel is highly time-varying. In contrast, as seen from \\figref{DIP} (b), after BA, the channel effectively consists of a single multipath component and is thus almost flat in frequency. Note that, in contrast with the former case, where different multipath components were mixed with different Doppler frequencies, in the latter case the Doppler frequency of the single multipath component can be easily compensated by standard timing, frequency, and phase synchronization techniques at the receiver.\n\n\\section{Conclusion}\n\nIn this paper, we proposed a novel time-domain {\\em Beam Alignment} (BA) scheme for \nmmWave MIMO systems with an HDA architecture. 
The proposed scheme is particularly suited for single-carrier multiuser mmWave communication, \nwhere each user has access to the whole bandwidth, and where all the users within the BS coverage can be trained simultaneously. \nWe focused on the channel second-order statistics, incorporating both the random channel gains and the Doppler shifts into the channel matrix to capture the realistic features of mmWave channels. We applied the recently developed {\\em Non-Negative Least Squares} (NNLS) technique to efficiently find the strongest path for each user. Simulation results showed that the proposed scheme incurs a moderately low training overhead and is robust both to fast time-varying channels and to large Doppler shifts\namong different multipath components. Furthermore, we have shown that the multipath channel after BA reduces essentially to a single dominant tap\nthat collects almost all the signal energy. Hence, single-carrier signaling performs very efficiently, requiring just standard timing and frequency synchronization (which works well at the high post-beamforming SNR)\nand no time-domain equalization. 
This makes the proposed BA scheme, together with single-carrier signaling, a strong contender for future mmWave systems, especially in outdoor mobile scenarios.\n\n\\balance\n{\\footnotesize\n\\bibliographystyle{IEEEtran}\n}
\\section{Data sample}\n\nIn this investigation, we capitalize upon the large observational efforts of the past decades to spectroscopically observe and monitor massive O-type stars and to characterize their multiplicity properties. We use information from over 1800 spectra of 71 O-type objects in six nearby young open clusters -- IC 1805 \\citep{DBRM06,HGB06}, IC 1848 \\citep{HGB06}, NGC 6231 \\citep{SGN08}, NGC 6611 \\citep{SGE09}, Tr 16 \\citep[][and references therein]{RNF09} and IC 2944 \\citep{SJG11}. The O stars in our sample have spectral types from O9.7 to O3, corresponding to a mass range extending from 15 to 60 solar masses. \n\nWith 40 identified spectroscopic systems, the observed binary fraction in our sample is $f_\\mathrm{obs} = 0.56$. Combining new VLT-UVES observations of long-period systems with published results from detailed analyses of the detected systems in the individual clusters, we consider the properties of these systems as a population. In total, 85\\%\\ and 78\\%\\ of our systems have constrained orbital periods and mass ratios, respectively. 
This allows us to derive the observed period and mass-ratio distributions of a statistically significant and homogeneous sample of massive stars.\n\nThe comparison between the present sample and the complete sample of Galactic O stars with multi-epoch spectroscopy is discussed in \\citet{SaE11}. Specifically, the observed binary fractions from the two samples are identical, and the observed distributions of periods, mass-ratios and eccentricities are compatible, as revealed by a Kolmogorov-Smirnov (KS) test. We can thus assume that the multiplicity properties of our sample are representative of those of the Galactic O stars in general.\n\n\\begin{figure}[!t]\n\\plotone{hsana_fig1.pdf}\n\\caption{Cumulative number distributions of orbital periods (left panel) and mass-ratios (right panel). Crosses give the observed distributions. Horizontal dashed and dashed-dotted lines indicate the observed and intrinsic number of binaries in our sample, respectively. Dashed and plain lines show, respectively, the best-fit simulated distributions to the data points and the corresponding intrinsic distributions. }\\label{fig1}\n\\end{figure}\n\n\\section{Intrinsic multiplicity properties}\nObserved distributions result from the convolution of the intrinsic distributions with the observational biases. We simulate observational biases using a Monte Carlo approach that incorporates the observational time series of each object in our sample, allowing us to estimate the fraction of undetected binaries and\/or unconstrained orbital parameters. We adopt power laws for the probability density functions of orbital periods (in $\\log_{10}$ space), mass-ratios and eccentricities. These power-law exponents and the intrinsic binary fraction are then simultaneously determined by a comparison of simulated populations of stars with our sample, taking into account the observational biases. 
A more extensive discussion of similarities and differences between our approach and the methods of \\citet{KoF07} and \\citet{KiK12} is given in \\citet{SdKdM12}.\n\nWe find an intrinsic binary fraction of $0.69 \\pm 0.09$, a preference for close pairs ($f_\\mathrm{\\log P}\\propto(\\log P\/\\mathrm{d})^{-0.55}$) and an approximately uniform distribution of the mass ratio ($f_\\mathrm{q}\\propto q^{-0.1}$) for binaries with periods up to about 3000 days. Figure 1 compares our intrinsic, simulated and observed cumulative distributions and shows that observational biases are mostly restricted to the longest periods and most extreme mass-ratios. We find no evidence of a preference for equal-mass binaries. Compared to previous studies we obtain a steeper period distribution, thus a larger fraction of short-period systems than previously thought. \n\n\\begin{figure}[!t]\n\\plotone[width=4cm]{hsana_fig2.pdf}\n\\caption{Pie chart illustrating the rates of the various interaction scenarios. All numbers are expressed relative to the total number of stars born as O-type, including primaries, secondaries and single stars. }\n\\label{fig2}\n\\end{figure}\n\n\n\n\\section{Evolutionary implication}\n We determine the relative frequencies of binary interaction scenarios by integration of our distribution functions, under assumptions based on detailed binary evolutionary calculations \\citep{PJH92,Pol94,WLB01,dMPH07}. These assumptions are discussed in more detail in \\citet{SdMdK12}.\n\nIn short, the adopted upper limit for the orbital period at which mass-transfer interaction plays a significant role is 1500 days. This approximately corresponds to the maximum separation at which the initially most massive star will lose nearly its entire hydrogen-rich envelope before the supernova explosion. The adopted orbital period upper limit for Case A Roche lobe overflow is 6 days. Systems with periods below 2 days are all expected to merge. 
Longer-period Case A systems are assumed to merge if the mass-ratio is less than 0.65. A small fraction of mergers between an evolved star and a main-sequence star (cases B and C mergers) are also expected but only affect 4\\%\\ of all O stars.\n\n Interactions that do not lead to coalescence are assumed to strip the primary stars. The secondary stars accrete material and angular momentum and are spun up to critical rotation velocities. Secondary stars that are O stars have been taken into account in our computation of the number of O stars affected by binary interaction, but lower-mass (B-type) companions have been ignored. We further ignore lower-mass stars that may become O stars during their life by gaining mass via mass accretion or via a binary merger. We also ignore triple systems. \n\nThe fractions of the various binary interaction channels are illustrated in Figure 2.\nWe find that $71 \\pm 8$\\%\\ of all stars born as O stars are members of a binary system that will interact by Roche lobe overflow. About 40\\% of all O stars will be affected during their main-sequence lifetime, strongly impacting subsequent evolution. We predict that 33\\%\\ of O stars are stripped of their envelope before they explode as hydrogen-deficient CCSNe (Types Ib, Ic and IIb). This fraction is remarkably close to the observed fraction of hydrogen-poor supernovae, i.e., 37\\%\\ of all CCSNe \\citep{SLF11}. We also find that 20-30\\%\\ of all O stars will merge with a nearby companion.\n\n\n The interaction and merger rates that we compute are respectively two and three times larger than previous estimates and result in a corresponding increase in the number of progenitors of key astrophysical objects produced by binary evolution, such as double compact objects, hydrogen-deficient CCSNe and gamma-ray bursts. 
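The channel fractions reported above follow from integrating the intrinsic period distribution between the adopted period cuts, which is easy to sketch with a small Monte Carlo (Python; the exponent and the 2, 6 and 1500 day thresholds are from the text, while the $\log_{10}P$ sampling range and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Power-law pdf f(x) ∝ x^(-0.55) for x = log10(P/day); the exponent and the
# 2 d / 6 d / 1500 d thresholds are from the text, the sampling range [a, b]
# is an illustrative assumption.
a, b, exponent = 0.15, 3.5, -0.55
k = exponent + 1.0                      # = 0.45

# Inverse-transform sampling: F(x) = (x^k - a^k) / (b^k - a^k).
u = rng.uniform(size=200_000)
logP = (u * (b**k - a**k) + a**k) ** (1.0 / k)
P = 10.0**logP

frac_interacting = np.mean(P < 1500.0)  # binaries interacting by RLOF (≈ 0.94)
frac_caseA = np.mean(P < 6.0)           # Case A systems (≈ 0.35)
frac_merge = np.mean(P < 2.0)           # certain mergers (≈ 0.12)
print(frac_interacting, frac_caseA, frac_merge)
```

These are fractions of the binaries themselves; the rates quoted in the text are expressed relative to all stars born as O-type and additionally fold in the intrinsic binary fraction of $0.69 \pm 0.09$, the mass-ratio-dependent merger criteria and the accounting of secondaries.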
\n\n\n\n\\acknowledgements\nSupport for this work was provided by NASA through Hubble Fellowship grant HST-HF-51270.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. SdM is Hubble fellow. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLearning mixture models is a fundamental problem in statistics and machine learning having numerous applications such as density estimation and clustering. In this work, we consider the special case of mixture models whose component distributions factor into the product of the associated marginals. An example is a mixture of axis-aligned Gaussian distributions, an important class of Gaussian Mixture Models (GMMs). Consider a scenario where different diagnostic tests are applied to patients, and test results are assumed to be independent conditioned on the binary disease status of the patient which is the latent variable. The joint Probability Density Function (PDF) of the tests can be expressed as a weighted sum of two components, and each component factors into the product of univariate marginals. Fitting a mixture model to an unlabeled dataset allows us to cluster the patients into two groups by determining the value of the latent variable using the Maximum a Posteriori (MAP) principle.\n\nMost of the existing literature in this area has focused on the fully-parametric setting, where the mixture components are members of a parametric family, such as Gaussian distributions. The most popular algorithm for learning a parametric mixture model is Expectation Maximization (EM)~\\citep{Dempster1977}. Recently, methods based on tensor decomposition and particularly the Canonical Polyadic Decomposition (CPD) have gained popularity as an alternative to EM for learning various latent variable models~\\citep{AnGeHsu2014b}. 
What makes the CPD a powerful tool for data analysis is its identifiability properties, as the CPD of a tensor is unique under relatively mild rank conditions~\\citep{SiDeFu2017}.\n\n\nIn this work we propose a two-stage approach based on tensor decomposition for recovering the conditional densities of mixtures of smooth product distributions. We show that when the unknown conditional densities are approximately band-limited it is possible to uniquely identify and recover them from partially observed data. The key idea is to jointly factorize histogram estimates of lower-dimensional PDFs that can be easily and reliably estimated from observed samples. The conditional densities can then be recovered using an interpolation procedure. We formulate the problem as a coupled tensor factorization and propose an alternating-optimization algorithm. We demonstrate the effectiveness of the approach on both synthetic and real data.\n\n\\textbf{Notation}: Bold, lowercase, $\\mathbf{x}$, and uppercase letters, $\\mathbf{X}$, denote vectors and matrices respectively. Bold, underlined, uppercase letters, $\\underline{\\mathbf{X}}$, denote $N$-way (${N \\geq 3}$) tensors. We use the notation $\\mathbf{x}[i]$, $\\mathbf{X}[i,j]$, $\\underline{\\mathbf{X}}[i,j,k]$ to refer to specific elements of a vector, matrix and tensor respectively. \nWe denote the vector obtained by vertically stacking the columns of the tensor $\\underline{\\mathbf{X}}$ into a vector by $\\text{vec}(\\underline{\\mathbf{X}})$. Additionally, $\\textrm{diag}(\\mathbf{x}) \\in \\mathbb{R}^{M \\times M}$ denotes the diagonal matrix with the elements of vector $\\mathbf{x} \\in \\mathbb{R}^{M}$ on its diagonal. The set of integers $\\{1,\\ldots,N\\}$ is denoted as $[N]$. 
Uppercase, $X$, and lowercase letters, $x$, denote scalar random variables and realizations thereof, respectively.\n\n\\section{Background}\n\n\\subsection{Canonical Polyadic Decomposition}\nIn this section, we briefly introduce basic concepts related to tensor decomposition. An $N$-way tensor ${\\underline{\\mathbf{X}} \\in \\mathbb{R}^{I_1 \\times I_2 \\times \\cdots \\times I_N}}$ \nis a multidimensional array whose elements are indexed by $N$ indices. A polyadic decomposition expresses $\\underline{\\mathbf{X}}$ as a sum of rank-$1$ terms\n\\begin{equation}\n\\underline{\\mathbf{X}} = \\sum_{r=1}^R\\mathbf{A}_1[:,r] \\circ \\mathbf{A}_2[:,r] \\circ \\cdots \\circ \\mathbf{A}_N[:,r],\n\\label{eq:cpd}\n\\end{equation}\nwhere $\\mathbf{A}_n \\in \\mathbb{R}^{I_n \\times R}$, $ 1 \\leq r\\leq R$, $\\mathbf{A}_n[:,r]$ denotes the $r$-th column of matrix $\\mathbf{A}_n$ and $\\circ$ denotes the outer product. \nIf the number of rank-$1$ terms is minimal, then Equation~\\eqref{eq:cpd} is called the CPD of $\\underline{\\mathbf{X}}$ and $R$ is called the rank of $\\underline{\\mathbf{X}}$. \nWithout loss of generality, we can restrict the columns of $\\{\\mathbf{A}_n\\}_{n=1}^N$ to have unit norm and have the following equivalent expression\n\\begin{equation}\n\\underline{\\mathbf{X}} = \\sum_{r=1}^R{ \\boldsymbol{\\lambda}}[r]\\mathbf{A}_1[:,r]\\circ \\mathbf{A}_2[:,r] \\circ \\cdots \\circ \\mathbf{A}_N[:,r],\n\\label{eq:CPD}\n\\end{equation}\nwhere $\\|{\\bf A}_n[:,r]\\|_p=1$ for {a certain} $p\\geq 1$, $\\forall \\; n,r$, and ${ \\boldsymbol{\\lambda}}= \\left [{ \\boldsymbol{\\lambda}}[1],\\ldots,{ \\boldsymbol{\\lambda}}[R] \\right]^T$ `absorbs' the norms of columns. For\nconvenience, we use the shorthand notation ${\\underline{\\mathbf{X}} = [\\![ { \\boldsymbol{\\lambda}}, \\mathbf{A}_1,\\ldots,\\mathbf{A}_N ]\\!]}_R$.\nWe can express the CPD of a tensor in a matricized form. 
With $\\odot$ denoting the Khatri-Rao (columnwise Kronecker) matrix product, it can be shown that the mode-$n$ matrix unfolding of $\\underline{\\mathbf{X}}$ is given by\n\\begin{equation}\n{\\mathbf{X}}^{(n)} = \\left( \\underset{k \\neq n}{ \\underset{k=1}{ \\overset{N}{\\odot}}} \\mathbf{A}_k \\right) \\textrm{diag}(\\boldsymbol{\\lambda}) \\mathbf{A}_n^T,\n\\end{equation}\nwhere\n$\\underset{k\\neq n}{ \\underset{k=1}{ \\overset{N}{\\odot}}} \\mathbf{A}_k = \\mathbf{A}_N \\odot \\cdots \\odot \\mathbf{A}_{n+1} \\odot \\mathbf{A}_{n-1} \\odot \\cdots \\odot \\mathbf{A}_1.$\nThe CPD can be expressed in a vectorized form as\n\\begin{equation}\n\\text{vec}(\\underline{\\mathbf{X}})= \\left( { \\underset{n=1}{ \\overset{N}{\\odot}}} \\mathbf{A}_n \\right) \\boldsymbol{\\lambda}.\n\\end{equation}\n\nIt is clear that the rank-$1$ terms can be arbitrarily permuted without affecting the decomposition. We say that a CPD of a tensor is unique when it is only subject to this trivial indeterminacy. \n\n\\subsection{Learning Problem}\n\nLet $\\mathcal{X} = \\{X_n\\}_{n=1}^N$ denote a set of $N$ random variables. We say that a PDF $f_{\\mathcal{X}}$ is a mixture of $R$ component distributions if it can be expressed as a weighted sum of $R$ multivariate distributions\n\\begin{equation}\nf_{\\mathcal{X}}(x_1,\\ldots,x_N) = \\sum_{r=1}^R w_r f_{\\mathcal{X}|H}(x_1,\\ldots,x_N |r),\n\\end{equation}\nwhere $f_{\\mathcal{X}|H}$ are conditional PDFs and $\\{w_r\\}_{r=1}^R$ are non-negative numbers such that $\\sum_{r=1}^R w_r = 1$, called mixing weights. When each conditional PDF factors into the product of its marginal densities we have that\n\\begin{equation}\nf_{\\mathcal{X}}(x_1,\\ldots,x_N) = \\sum_{r=1}^R w_r \\prod_{n=1}^N f_{X_n|H}(x_n | r),\n\\label{eq:mixture_of_product}\n\\end{equation}\nwhich can be seen as a continuous extension of the CPD model of Equation~\\eqref{eq:CPD}. 
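Both the mode-$n$ unfolding and the vectorized form can be checked numerically against the rank-1 construction. A minimal NumPy sketch (dimensions, factors and weights are arbitrary; column-major vectorization is assumed so that the stated Khatri-Rao ordering applies):

```python
import numpy as np

rng = np.random.default_rng(0)
I1, I2, I3, R = 4, 5, 6, 3
A1, A2, A3 = rng.random((I1, R)), rng.random((I2, R)), rng.random((I3, R))
lam = rng.random(R)

# Build X = sum_r lam[r] * A1[:, r] ∘ A2[:, r] ∘ A3[:, r].
X = np.einsum('r,ir,jr,kr->ijk', lam, A1, A2, A3)

def khatri_rao(B, C):
    """Columnwise Kronecker product: column r is kron(B[:, r], C[:, r])."""
    return np.einsum('jr,kr->jkr', B, C).reshape(B.shape[0] * C.shape[0], B.shape[1])

# Mode-1 unfolding: X^(1) = (A3 ⊙ A2) diag(lam) A1^T, rows indexed by (k, j).
X1 = X.transpose(2, 1, 0).reshape(I3 * I2, I1)
print(np.allclose(X1, khatri_rao(A3, A2) @ np.diag(lam) @ A1.T))        # True

# Vectorization: vec(X) = (A3 ⊙ A2 ⊙ A1) lam (column-major vec).
vecX = X.ravel(order='F')
print(np.allclose(vecX, khatri_rao(A3, khatri_rao(A2, A1)) @ lam))      # True
```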
A sample from the mixture model is generated by first drawing a component $r$ according to the mixing weights and then independently drawing samples for every variable $\\{X_n\\}_{n=1}^N$ from the conditional PDFs $ f_{X_n|H}(\\cdot |r)$. The problem of learning the mixture is that of finding the conditional PDFs as well as the mixing weights given observed samples.\n\n\\subsection{Related Work}\nMixture models have numerous applications in statistics and machine learning, including clustering and density estimation~\\citep{McLachlan2000}. A common assumption made in multivariate mixture models is a parametric form of the conditional PDFs. For example, when the conditional PDFs are assumed to be Gaussian, the goal is to recover the mean vectors and covariance matrices defining each multivariate Gaussian component and the mixing weights. Other common choices include categorical, exponential, Laplace or Poisson distributions. The most popular algorithm for learning the parameters of the mixture is the EM algorithm~\\citep{Dempster1977}, which maximizes the likelihood function with respect to the parameters. EM-based methods have also been considered for learning mixture models of non-parametric distributions\\footnote{The term non-parametric is used to describe the case in which no assumptions are made about the form of the conditional densities.} by parameterizing the unknown conditional PDFs using kernel density estimators~\\citep{Benaglia2009,levine2011}, which, however, lack theoretical guarantees. \n\nTensor decomposition methods can be used as an alternative to EM for learning various latent variable models~\\citep{AnGeHsu2014b}. High-order moments of several probabilistic models can be expressed using low-rank CPDs. Decomposing these tensors reveals the true parameters of the probabilistic models. 
In the absence of noise and model mismatch, algebraic algorithms can be applied to compute the CPD under certain conditions, see~\\citep{SiDeFu2017} and references therein, and \\citep{hsu2013} for the application to GMMs. Tensor decomposition approaches have been proposed for learning mixture models but are mostly restricted to Gaussian or categorical distributions~\\citep{hsu2013,jain2014,gottesman2018}. In practice, mainly due to sampling noise, the results of these algorithms may not be satisfactory, and EM can be used for refinement~\\citep{zhang2014,ruffini2017}. In the case of non-parametric mixtures of product distributions, identifiability of the components has been established in~\\citep{allman2009}. The authors have shown that it is possible to identify the conditional PDFs given the true joint PDF, if the conditional PDFs of each $X_n$ across different mixture components are linearly independent, i.e., the continuous factor ``matrices'' have linearly independent columns. However, the exact true joint PDF is never given -- only samples drawn from it are available in practice, and elements may be missing from any given sample. Furthermore, \\citep{allman2009} did not provide an estimation procedure, which limits the practical appeal of an interesting theoretical contribution. \n\nIn this work, we focus on mixtures of product distributions of continuous variables and do not specify a parametric form of the conditional density functions. We show that it is possible to recover mixtures of \\textit{smooth} product distributions from observed samples. The key idea is to first transform the problem to that of learning a mixture of categorical distributions by decomposing lower-dimensional and (possibly coarsely) discretized joint PDFs. Given that the conditional PDFs are (approximately) band-limited (smooth), they can be recovered from the discretized PDFs under certain conditions.\n\n\\section{Approach}\nOur approach consists of two stages. 
We express the problem as a tensor factorization problem and show that if $N\\geq 3$, we can recover points of the unknown conditional CDFs. Under a smoothness condition, these points can be used to recover the true conditional PDFs using an interpolation procedure.\n\\subsection{Problem Formulation}\nWe assume that we are given $M$ $N$-dimensional samples $ \\{ \\mathbf{x}_m\\}_{m=1}^M$ that have been generated from a mixture of product distributions as in Equation~\\eqref{eq:mixture_of_product}.\nWe discretize each random variable $X_n$ by partitioning its support into $I$ uniform intervals $\\{ \\Delta_n^{i} = \\bigl (d_n^{i-1},d_n^{i} \\bigr) \\}_{ 1\\leq i \\leq I}$. Specifically, we consider a discretization of the PDF and define the probability tensor (histogram) $\\underline{\\mathbf{X}}[i_1,\\ldots,i_N] \\triangleq {\\sf Pr} \\left( X_1 \\in \\Delta_1^{i_1} ,\\ldots,X_N \\in \\Delta_N^{i_N} \\right)$ given by\n\\begin{multline}\n \\underline{\\mathbf{X}}[i_1,\\ldots,i_N] =\n\\sum_{r=1}^R w_r \\prod_{n=1}^N \\int_{ \\Delta_n^{i_n} } f_{X_n|H}(x_n | r) dx_n \\\\\n= \\sum_{r=1}^R w_r \\prod_{n=1}^N {\\sf Pr} \\left( X_n \\in \\Delta_n^{i_n} \\,\\middle|\\, H = r \\right).\n\\label{eq:discrete}\n\\end{multline}\nLet $\\mathbf{A}_n[i_n,r] \\triangleq {\\sf{Pr}}\\left( X_n \\in \\Delta_n^{i_n} \\,\\middle|\\, H = r \\right)$, ${\\boldsymbol{\\lambda}[r] \\triangleq w_r}$.\nNote that $\\underline{\\mathbf{X}}$ is an $N$-way tensor and admits a CPD with non-negative factor matrices $\\{\\mathbf{A}_n\\}_{n=1}^N$ and rank $R$, i.e., ${\\underline{\\mathbf{X}} = [\\![ {\\boldsymbol{ \\lambda}}, \\mathbf{A}_1,\\ldots,\\mathbf{A}_N ]\\!]}_R$. From Equation~\\eqref{eq:discrete} it is clear that the discretized conditional PDFs are identifiable and can be recovered by decomposing the true joint discretized probability tensor, if $N \\geq 3$ and $R$ is small enough, by virtue of the uniqueness properties of the CPD~\\citep{SiDeFu2017}. 
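The discretized model above is easy to simulate: a normalized multidimensional histogram of samples drawn from a product mixture matches the CPD assembled from CDF differences, up to sampling error. A sketch with $N=3$ and $R=2$ Gaussian conditionals (all means, $\sigma$, weights, bin edges and sample sizes are illustrative choices):

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Illustrative 3-variable, 2-component mixture of product Gaussians.
w = np.array([0.4, 0.6])
mu = np.array([[-5.0, 0.0, 5.0],       # per-variable means, component 1
               [5.0, -5.0, 0.0]])      # per-variable means, component 2
sigma, M, I = 2.0, 400_000, 12
edges = np.linspace(-15.0, 15.0, I + 1)

# Draw samples: pick a component, then draw every X_n independently.
comp = rng.choice(2, size=M, p=w)
samples = rng.normal(loc=mu[comp], scale=sigma)            # shape (M, 3)

# Empirical probability tensor: normalized 3-D histogram of the samples.
X_hat, _ = np.histogramdd(samples, bins=[edges] * 3)
X_hat /= M

# Model tensor from the CPD: A_n[i, r] = Pr(X_n in bin i | H = r).
Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
cdf = np.array([[[Phi((e - mu[r, n]) / sigma) for e in edges]
                 for n in range(3)] for r in range(2)])    # (R, N, I+1)
A = np.diff(cdf, axis=2)                                   # (R, N, I)
X_model = np.einsum('r,ri,rj,rk->ijk', w, A[:, 0], A[:, 1], A[:, 2])

print(np.abs(X_hat - X_model).max())   # small; only sampling error remains
```

Every entry of the model tensor is $\prod_n \Pr(X_n \in \Delta^{i_n} \mid H=r)$ weighted by $w_r$, exactly the low-rank structure that the joint decomposition exploits.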
\n\nIn practice, we do not observe ${\\underline{\\mathbf{X}}}$ but have to deal with perturbed versions. Based on the observed samples, we can compute an approximation of the probability tensor $\\underline{\\mathbf{X}}$ by counting how many samples fall into each bin and normalizing the tensor by dividing by the total number of samples.\nThe size of the probability tensor grows exponentially with the number of variables and therefore the estimate will be highly inaccurate even when the number of discretization intervals is small. More importantly, datasets often contain missing data and therefore it is impossible to construct such a tensor. On the other hand, it may be possible to estimate low-dimensional discretized joint PDFs of subsets of the random variables, which correspond to low-order tensors. For example, in the clustering example given in the introduction, some patients may be tested on only a subset of the available tests. Finally, the model of Equation~\\eqref{eq:discrete} is just an approximation of our original model, as our ultimate goal is to recover the true conditional PDFs. To address the aforementioned challenges we have to answer the following two questions:\n\\begin{enumerate}\n\\item Is it possible to learn the mixing weights and discretized conditional PDFs from missing\/limited data?\n\\item Is it possible to recover non-parametric conditional PDFs from their discretized counterparts?\n\\end{enumerate}\nRegarding the first question, it has been recently shown that a joint Probability Mass Function (PMF) of a set of random variables can be recovered from lower-dimensional joint PMFs if the joint PMF has low enough rank~\\citep{KaSiFu2018}. This result allows us to recover the discretized conditional PDFs from low-dimensional histograms but cannot be extended to the continuous setting in general because of the loss of information induced by the discretization step. 
We further discuss and provide conditions under which we can overcome these issues.\n\n\\subsection{Identifiability using Lower-dimensional Statistics}\n\\label{sec:ident}\nIn this section we provide insights regarding the first issue. It turns out that realizations of subsets of only three random variables are sufficient to recover ${\\sf{Pr}}\\left( X_n \\in \\Delta_n^{i_n} \\,\\middle|\\, H = r \\right)$ and $\\{w_r\\}_{r=1}^R$. Under the mixture model~\\eqref{eq:mixture_of_product}, a histogram of any subset of three random variables $X_{j},X_{k},X_{\\ell}$, denoted as $\\underline{\\mathbf{X}}_{jk\\ell}$, with $\\underline{\\mathbf{X}}_{jk\\ell}[i_j,i_k,i_\\ell] = {\\sf Pr}( X_j \\in \\Delta_j^{i_j}, X_k \\in \\Delta_k^{i_k}, X_\\ell \\in \\Delta_\\ell^{i_\\ell})$, can be written as $\\underline{\\mathbf{X}}_{jk\\ell}[i_j,i_k,i_{\\ell}] = \\sum_{r=1}^R \\boldsymbol{\\lambda} [r] \\mathbf{A}_j[i_j,r] \\mathbf{A}_k[i_k,r] \\mathbf{A}_{\\ell}[i_{\\ell},r],$\nwhich is a third-order tensor of rank $R$. A fundamental result on the uniqueness of the decomposition of third-order tensors was given in~\\citep{Kru1977}. The result states that if ${\\underline{\\mathbf{X}}}$ admits a decomposition ${\\underline{\\mathbf{X}} = [\\![{\\boldsymbol \\lambda}, \\mathbf{A}_1,\\mathbf{A}_2,\\mathbf{A}_3 ]\\!]_R}$ with ${k_{\\mathbf{A}_1} + k_{\\mathbf{A}_2} + k_{\\mathbf{A}_3} \\geq 2R + 2}$, then $\\textrm{rank}(\\underline{\\mathbf{X}}) = R $ and the decomposition of $\\underline{\\mathbf{X}}$ is unique. \nHere, $k_{\\mathbf{A}}$ denotes the Kruskal rank of the matrix $\\mathbf{A}$, which is equal to the largest integer such that every subset of $k_{\\mathbf{A}}$ columns of $\\mathbf{A}$ is linearly independent. 
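For small matrices, the Kruskal rank and Kruskal's condition can be checked by brute force. An illustrative NumPy sketch (dimensions are arbitrary):

```python
import itertools
import numpy as np

def kruskal_rank(A, tol=1e-9):
    """Largest k such that every subset of k columns is linearly independent."""
    R = A.shape[1]
    for k in range(1, R + 1):
        for cols in itertools.combinations(range(R), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k - 1
    return R

rng = np.random.default_rng(3)
I, R = 10, 4
A1, A2, A3 = (rng.random((I, R)) for _ in range(3))

ks = [kruskal_rank(A) for A in (A1, A2, A3)]
print(ks, sum(ks) >= 2 * R + 2)    # generic factors: k-rank = min(I, R) = 4

A_bad = A1.copy()
A_bad[:, 1] = A_bad[:, 0]          # a duplicated column drops the k-rank to 1
print(kruskal_rank(A_bad))
```

For generic (e.g., randomly drawn) factors the k-rank equals $\min(I, R)$ almost surely, so Kruskal's condition holds comfortably here; the duplicated-column case shows how degeneracy destroys it.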
\nWhen the rank is small and the decomposition is exact, the parameters of the CPD model can be computed exactly via Generalized Eigenvalue Decomposition (GEVD) and related algebraic algorithms~\\citep{leurgans1993,domanov2014,SiDeFu2017}.\n\\begin{Theorem}\\label{thm:leurgans}\n\\citep{leurgans1993}\nLet ${\\underline{\\mathbf{X}}}$ be a tensor that admits a polyadic decomposition ${\\underline{\\mathbf{X}} = [\\![{\\boldsymbol \\lambda}, \\mathbf{A}_1,\\mathbf{A}_2,\\mathbf{A}_3 ]\\!]_R}$, $\\mathbf{A}_1\\in \\mathbb{R}^{I_1 \\times R}$, $\\mathbf{A}_2\\in \\mathbb{R}^{I_2 \\times R}$, $\\mathbf{A}_3\\in \\mathbb{R}^{I_3 \\times R}$, $\\boldsymbol{\\lambda} \\in \\mathbb{R}^R$ and suppose that $\\mathbf{A}_1$, $\\mathbf{A}_2$ are full column rank and $k_{\\mathbf{A}_3} \\geq 2$. Then $\\textrm{rank}(\\underline{\\mathbf{X}}) = R $, the decomposition of $\\underline{\\mathbf{X}}$ is unique and can be found algebraically. \n\\end{Theorem}\n\nMore relaxed uniqueness conditions from the field of algebraic geometry have been proven in recent years. \n\\begin{Theorem}\\label{thm:generic}\n\\citep{ChiOtta2012} Let ${\\underline{\\mathbf{X}}}$ be a tensor that admits a polyadic decomposition $\\underline{\\mathbf{X}} = [\\![{\\boldsymbol \\lambda}, \\mathbf{A}_1,\\mathbf{A}_2,\\mathbf{A}_3 ]\\!]$, where $\\mathbf{A}_1 \\in \\mathbb{R}^{I_1 \\times R}$, $\\mathbf{A}_2 \\in \\mathbb{R}^{I_2 \\times R}$, $\\mathbf{A}_3 \\in \\mathbb{R}^{I_3 \\times R}$, $ I_1 \\leq I_2 \\leq I_3$. Let $\\alpha,\\beta$ be the largest integers such that $2^\\alpha \\leq I_1$ and $2^\\beta \\leq I_2$. If $R \\leq 2^{\\alpha + \\beta -2}$ then the decomposition of $\\underline{\\mathbf{X}}$ is essentially unique almost surely.\n\\end{Theorem}\nTheorem~\\ref{thm:generic} is a generic uniqueness result, i.e., all non-identifiable parameters form a set of Lebesgue measure zero. 
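A minimal sketch of such an algebraic computation for an exact third-order tensor, in the spirit of Theorem 1 (the random slice mixtures and the SVD-based recovery of the remaining factors are implustration choices for illustration, not the exact algorithm of the cited works):

```python
import numpy as np

def cpd_gevd(X, R, seed=0):
    """Sketch of a Leurgans-type algebraic CPD for an exact rank-R tensor.
    Assumes A1, A2 have full column rank and k-rank(A3) >= 2 (cf. Theorem 1)."""
    I1, I2, I3 = X.shape
    rng = np.random.default_rng(seed)
    T1 = X @ rng.standard_normal(I3)    # random mixtures of frontal slices:
    T2 = X @ rng.standard_normal(I3)    # T_i = A1 diag(d_i) A2^T
    # Eigenvectors of T1 pinv(T2) = A1 diag(d1/d2) pinv(A1) give A1's columns.
    w, V = np.linalg.eig(T1 @ np.linalg.pinv(T2))
    A1_est = np.real(V[:, np.argsort(-np.abs(w))[:R]])
    # Remaining rank-1 structure from the mode-1 unfolding: each row of
    # pinv(A1_est) X^(1) reshapes to a rank-1 matrix b_r c_r^T (up to scale).
    M = np.linalg.pinv(A1_est) @ X.reshape(I1, I2 * I3)
    X_rec = np.zeros_like(X)
    for r in range(R):
        U, s, Vt = np.linalg.svd(M[r].reshape(I2, I3))
        X_rec += np.einsum('i,j,k->ijk', A1_est[:, r], U[:, 0], s[0] * Vt[0])
    return X_rec

rng = np.random.default_rng(1)
R = 3
A1, A2, A3 = rng.random((6, R)), rng.random((7, R)), rng.random((5, R))
lam = rng.random(R)
X = np.einsum('r,ir,jr,kr->ijk', lam, A1, A2, A3)

X_rec = cpd_gevd(X, R)
print(np.abs(X_rec - X).max())          # ~0: exact recovery from exact data
```

With noisy histograms the eigen-step is only a starting point; this is why the optimization-based formulation of the next sections is used in practice.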
To see how the above theorems can be applied in our setup, consider the joint decomposition of the probability tensors $\\underline{\\mathbf{X}}_{jk\\ell}$. Let $\\mathcal{S}_1$, $\\mathcal{S}_2$, and $\\mathcal{S}_3$ denote disjoint ordered subsets of $[N]$, with cardinality $c_1 = |\\mathcal{S}_1|$, $c_2 = |\\mathcal{S}_2|$, and $c_3 = |\\mathcal{S}_3|$, respectively. Let $\\underline{\\mathbf{Y}}$ be the $c_1\\times c_2 \\times c_3$ block tensor whose $(j,k,\\ell)$-th block is the tensor $\\underline{\\mathbf{X}}_{jk\\ell}$, $j \\in \\mathcal{S}_1$, $k \\in \\mathcal{S}_2$, $\\ell \\in \\mathcal{S}_{3}$. It is clear that the tensor $\\underline{\\mathbf{Y}}$ admits a CPD $\\underline{\\mathbf{Y}} = [\\![ {\\boldsymbol{ \\lambda}}, \\widehat{\\mathbf{A}}_1, \\widehat{\\mathbf{A}}_2,\\widehat{\\mathbf{A}}_3 ]\\!]_R $ where $\\widehat{\\mathbf{A}}_1 = [\\mathbf{A}_{\\mathcal{S}_1(1)}^T, \\cdots, \\mathbf{A}_{\\mathcal{S}_1(c_1)}^T ]^T $, $\\widehat{\\mathbf{A}}_2 = [\\mathbf{A}_{\\mathcal{S}_2(1)}^T, \\cdots, \\mathbf{A}_{\\mathcal{S}_2(c_2)}^T ]^T $, $\\widehat{\\mathbf{A}}_3 = [\\mathbf{A}_{\\mathcal{S}_3(1)}^T, \\cdots, \\mathbf{A}_{\\mathcal{S}_3(c_3)}^T ]^T $. By considering the joint decomposition of lower-dimensional discretized PDFs, we have constructed a single virtual non-negative CPD model and therefore uniqueness properties hold. 
For example, by setting ${\\mathcal{S}_1 = \\{1,\\dots, \\lfloor\\frac{N-1}{2}\\rfloor -1 \\}}$, ${\\mathcal{S}_2 = \\{ \\lfloor \\frac{N-1}{2} \\rfloor,\\dots,N-1 \\}}$, $\\mathcal{S}_3 = \\{N \\}$ we have that\n\\begin{align*}\n&\\underline{\\mathbf{Y}}^{(1)}\n =\n \\left(\n\\begin{bmatrix}\n\\mathbf{A}_{ \\lfloor \\frac{N-1}{2} \\rfloor } \\\\\n\\vdots \\\\\n\\mathbf{A}_{N-1}\n\\end{bmatrix} \\odot\n\\begin{bmatrix}\n\\mathbf{A}_{1} \\\\\n\\vdots \\\\\n\\mathbf{A}_{ \\lfloor \\frac{N-1}{2} \\rfloor -1}\n\\end{bmatrix}\n\\right) {\\rm{diag}(\\boldsymbol{\\lambda})}\\mathbf{A}_N^T.\n\\end{align*}\nAccording to Theorem~\\ref{thm:leurgans}, the CPD can be computed exactly if $R \\leq (\\lfloor \\frac{N-1}{2} \\rfloor -1) I$. Similarly, it is easy to verify that by setting $c_1 = c_2 = \\lfloor \\frac{N}{3} \\rfloor$, so that the stacked factors have $\\lfloor \\frac{N}{3} \\rfloor I$ rows, i.e., $\\alpha=\\beta=\\lfloor \\log_2( \\lfloor \\frac{N}{3} \\rfloor I )\\rfloor$, the CPD of $\\underline{\\mathbf{Y}}$ is generically unique for $R \\leq 2 ^{2(\\alpha-1)}$ according to Theorem~\\ref{thm:generic}. The latter inequality is implied by ${R \\leq \\frac{(\\lfloor{\\frac{N}{3}}\\rfloor I+1)^2}{16}}$, which shows that the bound is quadratic in $N$ and $I$.\n\n\n\n\\textbf{Remark 1}: The previous discussion suggests that finer discretization can lead to improved identifiability results. The number of hidden components may be arbitrarily large and we may still be able to identify the discretized conditional PDFs by increasing the dimensions of the sub-tensors, i.e., the discretization intervals of the random variables. The caveat is that one will need many more samples to reliably estimate these histograms. Ideally, one would like to have the minimum number of intervals that can guarantee identifiability of the conditional PDFs.\n\n\\textbf{Remark 2}: The factor matrices can be recovered by decomposing the lower-order probability tensors of dimension $N\\geq 3$. 
It is important to note that histograms of subsets of two variables correspond to Non-negative Matrix Factorization (NMF), which is not identifiable unless additional conditions such as sparsity are assumed for the latent factors~\\citep{fu2018}. Therefore, second-order distributions are not sufficient for recovering dense latent factor matrices.\n\n\\subsection{Recovery of the Conditional PDFs}\nIn the previous section we have shown that, given lower-dimensional discretized PDFs, we can uniquely identify and recover discretized versions of the conditional PDFs via joint tensor decomposition. Recovering the true conditional PDFs from their discretized counterparts can be viewed as a signal reconstruction problem. We know that this is not possible unless the signals have some smoothness properties. We will use the following result.\n\\begin{Prop}\nA PDF that is (approximately) band-limited with cutoff frequency $\\omega_c$ can be recovered from uniform samples of the associated CDF taken $\\frac{\\pi}{\\omega_c}$ apart.\n\\end{Prop}\n\\textit{Proof}: Assume that the PDF $f_X$ is band-limited with cutoff frequency $\\omega_c$, i.e., its Fourier transform \n$\\mathcal{F}(\\omega) = 0$, $\\forall \\; |\\omega| \\geq \\omega_c$. Let $F_X$ denote the CDF of $f_X$, $F_X(x) = \\int_{-\\infty}^{x} f_X(t) dt$.\nWe can express the integration as a convolution of the PDF with a unit step function, i.e., $F_X(x) = \\int_{-\\infty}^{\\infty} f_X(t) u(x - t) dt$. The Fourier transform of a convolution is the point-wise product of Fourier transforms. Therefore, we can express the Fourier transform $\\mathcal{G}(\\omega)$ of the CDF as\n\\begin{equation}\n\\mathcal{G}(\\omega) = \\pi \\delta(\\omega) \\mathcal{F}(0) + \\frac{\\mathcal{F}(\\omega)}{j\\omega},\n\\label{eq:fourier}\n\\end{equation}\nwhere $\\delta(\\cdot)$ is the Dirac delta. \nFrom Equation~\\eqref{eq:fourier}, it is clear that, apart from the impulse at $\\omega = 0$, the CDF obeys the same band-limit as the PDF. 
From Shannon's sampling theorem we have that\n\\begin{equation}\nF_{X}(x) = \\sum_{n=-\\infty}^{ \\infty} F_X(n T) \\; {\\rm sinc}\\left( \\frac{x - nT}{T} \\right),\n\\end{equation}\nwhere $T = \\frac{\\pi}{\\omega_c}$. The PDF can then be determined by differentiation, which amounts to linear interpolation of the CDF samples using the derivative of the sinc kernel. Note that for exact reconstruction of $f_X$ an infinite number of data points are needed. In signal processing practice we always deal with finite-support signals which are only approximately band-limited; the point is that the band-limited assumption is accurate enough to afford high-quality signal reconstruction. In our present context, a good example is the Gaussian distribution: even though it is of infinite extent, it is not strictly band-limited (as its Fourier transform is another Gaussian); but it is approximately band-limited, and that is good enough for our purposes, as we will see shortly.\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=1\\linewidth]{motivation1.eps}\n\\caption{Illustration of the key idea on a univariate Gaussian mixture. The CDF can be recovered from its samples if $T_s \\leq \\frac{\\pi}{0.8}$.}\n\\label{fig:Gaussian}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width= 0.8 \\linewidth]{motivation3.eps}\n\\caption{KL divergence between the true mixture of Gaussians and different approximations.}\n\\label{fig:Gaussian_samples}\n\\end{figure}\n\nIn Section~\\ref{sec:ident}, we saw how lower-dimensional histograms can be used to obtain estimates of the discretized conditional PDFs. Now, consider the conditional PDF of the $n$-th variable given the $r$-th component. 
The corresponding column of factor matrix $\\mathbf{A}_n$ is \n\\begin{multline*}\n\\mathbf{A}_n[:,r] = [F_{X_n|H}(d_n^{1}|r) - F_{X_n|H}(d_n^{0}|r),\\ldots, \\\\ 1 - F_{X_n|H}(d_n^{I-1}|r) ]^T.\n\\end{multline*}\nSince $F_{X_n|H}(d_n^{0}|r)=0$, we can compute\n$F_{X_n|H}(d_n^{i}|r)$, $\\forall i \\in[I-1], n \\in [N]$. We also know that $F_{X_n|H}(x_n|r)=1$, $ \\forall x_n \\geq d_n^I$. Therefore, we can recover the conditional CDFs using the interpolation formula\n\\begin{equation}\n\\!\\!\\!\\!\\!\\! F_{X_n|H}(x_n|r) = \\sum_{k=-L}^{L} F_{X_n|H}(kT|r) \\; {\\rm sinc}\\left( \\frac{x_n - kT}{T} \\right),\n\\label{eq:cdf}\n\\end{equation}\nwhere $T = d_n^i - d_n^{i-1}$ and $L$ is a large integer. The conditional PDF $f_{X_n|H}$ can then be recovered via differentiation.\n\\subsection{Toy example}\nAn example to illustrate the idea is shown in Figure~\\ref{fig:Gaussian}. Assume that the PDF of a random variable is a mixture of two Gaussian distributions with means $\\mu_1=-6$, $\\mu_2=10$ and standard deviations $\\sigma_1 = \\sigma_2 = 5$. \nIt is clear from Figure~\\ref{fig:Gaussian} that $\\mathcal{F}(\\omega) \\approx 0$ for $|\\omega| \\geq \\omega_c = 0.8$ and therefore the PDF is approximately band-limited. The CDF has the same band-limit and thus it can be recovered from points spaced $T = \\frac{\\pi}{\\omega_c} \\approx 4$ apart. In this example we have used only $10$ discretization intervals, as they suffice to capture $99\\%$ of the data. We use the finite sum formula of Equation~\\eqref{eq:cdf} to recover the CDF and then recover the PDF by differentiating the CDF. The recovered PDF essentially coincides with the true PDF given a few exact estimates of the CDF, as shown in Figure~\\ref{fig:Gaussian}.\n\nFigure~\\ref{fig:Gaussian_samples} shows the approximation error for different methods when we do not have exact points of the CDF but estimate them from randomly drawn samples. 
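The toy example can be reproduced numerically by differentiating the truncated sinc series of the interpolation formula above (the equal mixing weights and the truncation length are assumptions; the means, standard deviation and cutoff frequency are those of the text):

```python
import math
import numpy as np

# Mixture of two Gaussians from the toy example: mu = -6, 10, sigma = 5,
# omega_c = 0.8; equal mixing weights and the truncation L are assumptions.
mu, sigma, w = np.array([-6.0, 10.0]), 5.0, np.array([0.5, 0.5])
Phi = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
F = lambda t: w[0] * Phi((t - mu[0]) / sigma) + w[1] * Phi((t - mu[1]) / sigma)
f = lambda t: sum(w[r] * np.exp(-0.5 * ((t - mu[r]) / sigma) ** 2)
                  / (sigma * math.sqrt(2.0 * math.pi)) for r in range(2))

T = math.pi / 0.8                       # CDF sampling interval, ~3.93
L = 200                                 # truncation of the interpolation series
kk = np.arange(-L, L + 1)
Fk = F(kk * T)                          # exact CDF samples F(kT)

def dsinc(u):
    """Derivative of sinc(u) = sin(pi u)/(pi u); equals 0 at u = 0."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    nz = np.abs(u) > 1e-8
    un = u[nz]
    out[nz] = (np.pi * un * np.cos(np.pi * un) - np.sin(np.pi * un)) / (np.pi * un**2)
    return out

# f(x) = d/dx sum_k F(kT) sinc((x - kT)/T) = sum_k F(kT) sinc'((x - kT)/T)/T.
x = np.linspace(-20.0, 25.0, 500)
f_rec = (Fk[None, :] * dsinc((x[:, None] - kk * T) / T) / T).sum(axis=1)
print(np.abs(f_rec - f(x)).max())       # small: the PDF is ~band-limited
```

The residual error combines the (tiny) non-band-limitedness of the Gaussian mixture with the truncation of the infinite sum.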
We know that a histogram converges to the true PDF as the number of samples grows and the bin width goes to $0$ at an appropriate rate. However, when the conditional PDF is smooth, the interpolation procedure using a few discretization intervals leads to a lower approximation error compared to plain histogram estimates, as illustrated in the figure. \n\\section{Algorithm}\nIn this section we develop an algorithm for recovering the latent factors of the CPD model given the histogram estimates of lower-dimensional PDFs (Alg.~\\ref{Alg:prop}). We define the following optimization problem\n\\begin{equation}\n\\begin{aligned}\n\\!\\!\\!\\!\\! \\underset{{\\{ \\mathbf{A}_n\\}_{n=1}^{N},\\boldsymbol{\\lambda} }}{\\text{min.}} \\; & \\sum_{j=1}^N \\sum_{k>j}^N \\sum_{\\ell>k}^N {\\rm{D}} \\left( \\widehat{\\underline{\\mathbf{X}}}_{jk\\ell},[\\![ \\boldsymbol{\\lambda},\\mathbf{A}_{j},\\mathbf{A}_k,\\mathbf{A}_{\\ell} ]\\!]_R \\right ) \\\\\n\\text{s.t. \\quad} & \\boldsymbol{\\lambda}\\geq \\mathbf{0}, {\\mathbf{1}}^T\\boldsymbol{\\lambda}=1 \\\\\n&\\mathbf{A}_n \\geq \\mathbf{0}, \\; n=1\\ldots N \\\\\n& {\\mathbf{1}}^T \\mathbf{A}_n = \\mathbf{1}^T, \\; n=1\\ldots N\n\\end{aligned}\n\\label{opt:coupled2}\n\\end{equation}\nwhere $\\rm{D}(\\cdot,\\cdot)$ is a suitable discrepancy measure. The Frobenius norm and Kullback-Leibler (KL) divergence are considered in this work. 
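The inputs $\widehat{\underline{\mathbf{X}}}_{jk\ell}$ to this problem are empirical three-dimensional histograms. A minimal sketch of their construction (the dataset, grid, and sizes below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, I = 5000, 4, 10                     # samples, variables, bins (assumed)
D = rng.normal(size=(M, N))               # stand-in dataset

# one discretization grid per variable
edges = [np.linspace(-4.0, 4.0, I + 1) for _ in range(N)]

def triple_hist(j, k, l):
    # empirical joint PMF of (X_j, X_k, X_l) on the I x I x I grid
    H, _ = np.histogramdd(D[:, [j, k, l]], bins=[edges[j], edges[k], edges[l]])
    return H / M

X_012 = triple_hist(0, 1, 2)
```

Samples falling outside the grid are dropped by `histogramdd`, so the tensor sums to slightly less than one when the grid does not cover all of the data.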
For probability tensors $\\underline{\\mathbf{X}},\\underline{\\mathbf{Y}}$ we define\n\\begin{equation*}\n{\\rm{D}}_{ \\rm{KL} }(\\underline{\\mathbf{X}},\\underline{\\mathbf{Y}}) \\triangleq \\sum_{i_1,i_2,i_3} \\underline{\\mathbf{X}}[i_1,i_2,i_3] \\log \\frac{\\underline{\\mathbf{X}}[i_1,i_2,i_3]}{\\underline{\\mathbf{Y}}[i_1,i_2,i_3]}\n\\end{equation*}\n\\begin{equation*}\n{\\rm{D}}_{ \\rm{FRO}}(\\underline{\\mathbf{X}}, \\underline{\\mathbf{Y}}) \\triangleq \\sum_{i_1,i_2,i_3} \\bigl( \\underline{\\mathbf{X}} [i_1,i_2,i_3] -\\underline{\\mathbf{Y}} [i_1,i_2,i_3] \\bigr)^2.\n\\end{equation*}\nOptimization problem~\\eqref{opt:coupled2} is non-convex and NP-hard in its general form. Nevertheless, sensible approximation algorithms can be derived, based on well-appreciated nonconvex optimization tools. The idea is to cyclically update the variables while keeping all but one fixed. By fixing all other variables and optimizing with respect to $\\mathbf{A}_{j}$ we have\n\\begin{equation}\n\\underset{{\\mathbf{A}_j \\in \\mathcal{C}}}{\\text{min.}} \\; \\sum_{k \\neq j} \\sum_{\\substack{l\\neq j \\\\ l > k}} {\\rm{D}} \\left( \\underline{\\mathbf{X}}^{(1)}_{jk\\ell},\n(\\mathbf{A}_{\\ell} \\odot \\mathbf{A}_{k}) \\textrm{diag}(\\boldsymbol{\\lambda}) \\mathbf{A}_{j}^T \\right ),\n\\label{opt:subproblem1}\n\\end{equation}\nwhere $\\mathcal{C} = \\{ \\mathbf{A} \\mid \\mathbf{A}\\geq 0, \\mathbf{1}^T\\mathbf{A} = \\mathbf{1}^T\\}$.\nProblem~\\eqref{opt:subproblem1} is convex and can be solved efficiently using Exponentiated Gradient (EG)~\\citep{kivinen1997} -- which is a special case of mirror descent~\\citep{beck2003}. 
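The two discrepancy measures just defined are straightforward to implement; a sketch (the `eps` floor guarding against division by zero is an implementation assumption):

```python
import numpy as np

def d_kl(X, Y, eps=1e-12):
    # KL divergence between probability tensors (flattened), with 0 log 0 := 0
    X, Y = np.ravel(X), np.ravel(Y)
    mask = X > 0
    return float(np.sum(X[mask] * np.log(X[mask] / np.maximum(Y[mask], eps))))

def d_fro(X, Y):
    # squared Frobenius distance
    return float(np.sum((np.asarray(X) - np.asarray(Y)) ** 2))

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
```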
At each iteration $\\tau$ of mirror descent we update $\\mathbf{A}_{j}^{\\tau}$ by solving\n\\begin{equation*}\n\\mathbf{A}_{j}^{\\tau} = \\argmin_{\\mathbf{A}_{j} \\in \\mathcal{C}} \\langle \\nabla f \\bigl(\\mathbf{A}_{j}^{\\tau-1} \\bigr), \\mathbf{A}_j \\rangle + \\frac{1}{\\eta_{\\tau}} B_{\\Phi} \\bigl(\\mathbf{A}_{j},\\mathbf{A}_{j}^{\\tau-1} \\bigr),\n\\end{equation*}\nwhere $B_{\\Phi}(\\mathbf{A},\\widehat{\\mathbf{A}}) = \\Phi (\\mathbf{A}) - \\Phi(\\widehat{\\mathbf{A}}) - \\langle \\mathbf{A}- \\widehat{\\mathbf{A}}, \\nabla \\Phi(\\widehat{\\mathbf{A}}) \\rangle$ is a Bregman divergence. Setting $\\Phi$ to be the negative entropy $\\Phi(\\mathbf{A}) = \\sum_{i,j} \\mathbf{A}(i,j) \\log \\mathbf{A}(i,j)$, the update becomes\n\\begin{equation}\n\\mathbf{A}_{j}^{\\tau} = \\mathbf{A}_{j}^{\\tau-1} \\circledast \\exp \\left( - \\eta_{\\tau} \\nabla f \\left (\\mathbf{A}_{j}^{\\tau-1} \\right) \\right),\n\\end{equation}\nwhere $\\circledast$ is the Hadamard (element-wise) product, followed by the column normalization $\\mathbf{A}_{j}^{\\tau}[:,r] \\leftarrow \\frac{\\mathbf{A}_{j}^{\\tau}[:,r]}{ \\mathbf{1}^T \\mathbf{A}_{j}^{\\tau}[:,r]}$.\nThe optimization problem with respect to $\\boldsymbol{\\lambda}$ is the following \n\\begin{equation}\n\\!\\!\\! 
\\underset{{ \\boldsymbol{\\lambda} \\in \\mathcal{C}}}{\\text{min.}} \\; \\sum_{j,k,\\ell} {\\rm{D}} \\left( \n{\\rm{vec}} (\\underline{\\mathbf{X}}_{jk\\ell} ) , (\\mathbf{A}_{\\ell} \\odot \\mathbf{A}_{k} \\odot \n\\mathbf{A}_j ) \\boldsymbol{\\lambda} \\right ).\n\\label{opt:subproblem2}\n\\end{equation}\nThe update rules for $\\boldsymbol{\\lambda}$ are similar\n\\begin{equation}\n\\boldsymbol{\\lambda}^{\\tau} = \\boldsymbol{\\lambda}^{\\tau-1} \\circledast \\exp \\left( - \\eta_{\\tau} \\nabla f \\left( \\boldsymbol{\\lambda}^{\\tau-1} \\right) \\right).\n\\end{equation}\nThe step $\\eta_\\tau$ can be computed by the Armijo rule~\\citep{bertsekas1999}.\n\n\\begin{algorithm}[t]\n\\caption{Proposed Algorithm}\n\\label{Alg:prop}\n\\begin{algorithmic}[1]\n\\STATEx \\hspace*{-.4cm}\n \\textbf{Input}: A dataset $\\mathbf{D}\\in \\mathbb{R}^{M \\times N}$\n\\STATE Estimate $\\underline{\\mathbf{X}}_{jk\\ell} \\; \\forall j,k,\\ell \\in [N],\\; \\ell>k>j$ from data.\n\\STATE Initialize $\\{\\mathbf{A}_n\\}_{n=1}^N$ and $\\boldsymbol{\\lambda}$.\n\\REPEAT \n \\FORALL{$n \\in [N]$} \n \\STATE{Solve opt. problem~\\eqref{opt:subproblem1} via mirror descent.}\n \\ENDFOR\n \\STATE{Solve opt. problem~\\eqref{opt:subproblem2} via mirror descent.} \n\\UNTIL{convergence criterion is satisfied}\n \\FORALL{$n \\in [N]$} \n \\STATE{Recover $f_{X_n|H}$ by differentiation using Eq.~\\eqref{eq:cdf}}\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experiments}\n\n\\subsection{Synthetic Data}\n\nIn this section, we employ synthetic data simulations to showcase the effectiveness of the proposed algorithm. Experiments are conducted on synthetic datasets $\\{\\mathbf{x}_m \\}^M_{m=1}$ of varying sample sizes, generated from $R$ component distributions. We set the number of variables to $N = 10$, and vary the number of components $R \\in \\{5,10\\}$. 
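The multiplicative mirror-descent update described above, with its simplex re-normalization, can be sketched in a few lines. The quadratic objective below is an illustrative stand-in for the actual factorization subproblems (an assumption for the sake of a self-contained example):

```python
import numpy as np

def eg_step(A, grad, eta):
    # exponentiated-gradient step: multiplicative update, then re-normalize
    # every column so that it stays on the probability simplex
    A_new = A * np.exp(-eta * grad)
    return A_new / A_new.sum(axis=0, keepdims=True)

# Toy convex surrogate: min ||A - B||_F^2 over column-stochastic A.
# When B is itself column-stochastic, the minimizer is A = B.
B = np.array([[0.6, 0.2],
              [0.3, 0.5],
              [0.1, 0.3]])
A = np.full((3, 2), 1.0 / 3.0)          # uniform initialization
for _ in range(2000):
    A = eg_step(A, 2.0 * (A - B), eta=0.3)
```

The iterates stay strictly positive and column-stochastic by construction, which is exactly why this update suits the constraint set $\mathcal{C}$.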
We run the algorithms using $5$ different random initializations and for each algorithm keep the model that yields the lowest cost. We evaluate the performance of the algorithms by calculating the KL divergence between the true and learned model, which is approximated using Monte Carlo integration. Specifically, we generate $M'=1000$ test points $\\{\\mathbf{x}_{m'} \\}_{m'=1}^{M'}$ drawn from the mixture and approximate the KL divergence between the true and learned model by \\[{\\rm{D}}_{ \\rm{KL} } \\left(f_{\\mathcal{X}}, \\widehat{f}_{\\mathcal{X}} \\right) \\approx \\frac{1}{M'} \\sum_{m'=1}^{M'} \\log {f_{\\mathcal{X}}(\\mathbf{x}_{m'})}\/{\\widehat{f}_{\\mathcal{X}}(\\mathbf{x}_{m'})}.\\] We also compute the clustering accuracy on the test points as follows. Each data point $\\mathbf{x}_{m'}$ is first assigned to the component yielding the highest posterior probability ${ \\widehat{c}_{m'} = \\argmax_c f_{H|\\mathcal{X}}( c | \\mathbf{x}_{m'})}$. Due to the label permutation ambiguity, the obtained components are aligned with the true components using the Hungarian algorithm~\\citep{kuhn1955}. The clustering accuracy is then calculated as the ratio of correctly labeled data points over the total number of data points.\n For each scenario, we repeat $10$ Monte Carlo simulations and report the average results. We explore the following settings for the conditional PDFs: (1) Gaussian, (2) GMM with two components, (3) Gamma, and (4) Laplace. The mixing weights are drawn from a Dirichlet distribution $\\omega \\sim {\\rm{Dir}}(\\alpha_1,\\ldots,\\alpha_R)$ with $\\alpha_r = 10 \\; \\forall r$. We emphasize that our approach does not use any knowledge of the parametric form of the conditional PDFs; it only assumes smoothness. 
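The accuracy computation described above can be sketched as follows. For compactness, a brute-force search over label permutations stands in for the Hungarian algorithm used in the paper; this is feasible only for small $R$, which is why the Hungarian algorithm is preferred in practice:

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(true_labels, pred_labels, R):
    # resolve the label permutation ambiguity by brute force over all
    # R! relabelings, then score the best match against the ground truth
    true_arr = np.asarray(true_labels)
    best = 0.0
    for perm in permutations(range(R)):
        mapped = np.array([perm[p] for p in pred_labels])
        best = max(best, float(np.mean(mapped == true_arr)))
    return best

true_y = [0, 0, 1, 1, 2, 2]
pred_y = [2, 2, 0, 0, 1, 1]            # a pure relabeling of the truth
acc = clustering_accuracy(true_y, pred_y, 3)
```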
\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{gaussian_5.eps}\n\\includegraphics[width=0.49 \\linewidth]{gaussian_10.eps}\n\\caption{KL divergence (Gaussian).}\n\\label{fig:kl_gaussian}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{gaussian_5_ac.eps}\n\\includegraphics[width=0.49 \\linewidth]{gaussian_10_ac.eps}\n\\caption{Clustering accuracy (Gaussian).}\n\\label{fig:ac_gaussian}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{NP_5.eps}\n\\includegraphics[width=0.49 \\linewidth]{NP_10.eps}\n\\caption{KL divergence (GMM).}\n\\label{fig:kl_np}\n\\end{figure}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{NP_5_ac.eps}\n\\includegraphics[width=0.47 \\linewidth]{NP_10_a.eps}\n\\caption{Clustering Accuracy (GMM).}\n\\label{fig:ac_np}\n\\end{figure}\n\n\\textbf{Gaussian Conditional Densities}: In the first experiment we assume that each conditional PDF is a Gaussian. For cluster $r$ and random variable $X_n$ we set $f_{X_n|H}(x_n |r) = \\mathcal{N}(\\mu_{nr}, \\sigma_{nr}^2)$. Mean and variance are drawn from uniform distributions, $\\mu_{nr} \\sim \\mathcal{U}(-5,5) $, $\\sigma_{nr}^2 \\sim \\mathcal{U}(1,2)$. We compare the performance of our algorithms to that of EM (EM GMM). Figure~\\ref{fig:kl_gaussian} shows the KL divergence between the true and the learned model for various dataset sizes and different number of components. We see that the performance of our methods converges to that of EM despite the fact that we do not assume a particular model for the conditional densities. Interestingly, our approach performs better in terms of clustering accuracy as shown in Figure~\\ref{fig:ac_gaussian}. 
We can see that although the joint distribution learned by EM is closer to the true one in terms of the KL divergence, EM may fail to identify the true parameters of every component.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{gamma_kl.eps}\n\\includegraphics[width=0.49 \\linewidth]{gamma_kl_10.eps}\n\\caption{KL divergence (Gamma).}\n\\label{fig:kl_gamma}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{gamma_ac.eps}\n\\includegraphics[width=0.49 \\linewidth]{gamma_ac_10.eps}\n\\caption{Clustering accuracy (Gamma).}\n\\label{fig:ac_gamma}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{laplace_5.eps}\n\\includegraphics[width=0.49 \\linewidth]{laplace_10.eps}\n\\caption{KL divergence (Laplace).}\n\\label{fig:kl_laplace}\n\\end{figure}\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.49 \\linewidth]{laplace_5_ac.eps}\n\\includegraphics[width=0.49 \\linewidth]{laplace_10_ac.eps}\n\\caption{Clustering accuracy (Laplace).}\n\\label{fig:ac_laplace}\n\\end{figure}\n\n\\textbf{GMM Conditional Densities}: In the second experiment we assume that each conditional PDF is itself a mixture of two univariate Gaussian distributions. More specifically, we set ${f_{X_n|H}(x_n | r) = \\frac{1}{2} \\mathcal{N} \\left (\\mu^{(1)}_{nr}, {\\sigma^{(1)2}_{nr}} \\right) + \\frac{1}{2} \\mathcal{N} \\left (\\mu^{(2)}_{nr}, {\\sigma^{(2)2}_{nr}} \\right)}$. Means and variances are drawn from uniform distributions $\\mu_{nr}^{(1)} \\sim \\mathcal{U}(0,7) $, $\\sigma_{nr}^{(1)2} \\sim \\mathcal{U}(1,4)$, $\\mu_{nr}^{(2)} \\sim \\mathcal{U}(-7,0) $, $\\sigma_{nr}^{(2)2} \\sim \\mathcal{U}(1,4)$. Our method is able to learn the mixture model, in contrast to the EM GMM, which exhibits poor performance due to the model mismatch, as shown in Figures~\\ref{fig:kl_np},~\\ref{fig:ac_np}. 
\n\n\\textbf{Gamma Conditional Densities}: Another example of a smooth distribution is the shifted Gamma distribution. We set $f_{X_n|H}(x_n| r) = \\frac{1}{\\beta^\\alpha \\Gamma(\\alpha) } (x-\\mu_{nr})^{\\alpha-1} \\exp( -\\frac{x - \\mu_{nr}}{\\beta}) $ with $\\alpha=5$, $\\mu_{nr} \\sim \\mathcal{U}(-5,0) $, $\\beta_{nr} \\sim \\mathcal{U}(0.1,0.5)$. As the number of samples grows our method exhibits better performance, significantly outperforming EM GMM as shown in Figures~\\ref{fig:kl_gamma},~\\ref{fig:ac_gamma}.\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=0.9 \\linewidth]{real_data_new_2.eps}\n\\caption{Clustering accuracy on real datasets.}\n\\end{figure}\n\n\\textbf{Laplace Conditional Densities}: In the last simulated experiment we assume that each conditional PDF is a Laplace distribution with mean $\\mu_{nr}$ and standard deviation $\\sigma_{nr}$ i.e., $f_{X_n|H}(x_n | r) = \\frac{1}{\\sqrt{2}\\sigma_{nr}} \\exp \\left( \\frac{\\sqrt{2}| x_n - \\mu_{nr}|}{\\sigma_{nr}} \\right)$. A Laplace distribution in contrast to the previous cases is not smooth (at its mean). Parameters are drawn from uniform distributions, $\\mu_{nf} \\sim \\mathcal{U}(-5,5) $, $\\sigma_{nf}^2 \\sim \\mathcal{U}(5,10)$. We compare the performance of our methods to that of the EM GMM and an EM algorithm for a Laplace mixture model (EM LMM). The proposed method approaches the performance of EM LMM and exhibits better performance in terms of KL and clustering accuracy compared to the EM GMM for higher number of data samples, as shown in Figures~\\ref{fig:kl_laplace},~\\ref{fig:ac_laplace}.\n\n\\subsection{Real Data}\n\nFinally, we conduct several real-data experiments to test the ability of the algorithms to cluster data. We selected $7$ datasets with continuous variables suitable for classification or regression tasks from the UCI repository. For each labeled dataset we hide the label and treat it as the latent component. 
For datasets that contained a continuous variable as a response, we discretized the response into $R$ uniform intervals and treated it as the latent component. For each dataset we repeated $10$ Monte Carlo simulations by randomly splitting the dataset into three sets; $70 \\%$ was used as a training set, $10\\%$ as a validation set and $20\\%$ as a test set. The validation set was used to select the number of discretization intervals, which was either $5$ or $10$. We compare our methods against the EM GMM with diagonal covariance, the EM GMM with full covariance, and the K-means algorithm in terms of clustering accuracy. Note that although the conditional independence assumption may not actually hold in practice, almost all the algorithms give satisfactory results on the tested datasets. The proposed algorithms perform well, outperforming the baselines on $5$ out of $7$ datasets while performing reasonably well on the remaining two.\n\n\\section{Discussion and Conclusion}\nWe have proposed a two-stage approach based on tensor decomposition and signal processing tools for recovering the conditional densities of mixtures of smooth product distributions. Our method does not assume a parametric form for the unknown conditional PDFs. We have formulated the problem as a coupled tensor factorization and proposed an alternating-optimization algorithm. Experiments on synthetic data have shown that when the underlying conditional PDFs are indeed smooth our method can recover them with high accuracy. Results on real data have shown satisfactory performance on data clustering tasks.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{\\label{sec:level1}Introduction}\n\n\nIn 1873, J. C. Maxwell described the physical features of the electromagnetic field with vector analysis as well as quaternion analysis. W. R. Hamilton invented the quaternion in 1843. The octonion, an ordered couple of quaternions, was invented by J. T. Graves and A. 
Cayley independently and successively. Later, scientists and engineers separated the quaternion into its scalar part and vector part to facilitate engineering applications. In his works on electromagnetic theory, J. C. Maxwell naturally mingled quaternion analysis and vector terminology to depict electromagnetic features. Similarly, scholars have begun to study the physical features of the gravitational field with the algebra of quaternions.\n\nIn recent years, applying the quaternion to the study of electromagnetic phenomena has become a significant research direction, one that continues to deepen and expand. Some scholars have been studying the electromagnetic and gravitational theories with the quaternion, trying to promote the further progress of these two field theories. H. T. Anastassiu et al. \\cite{anastassiu} applied the quaternion to describe electromagnetic features. S. M. Grusky et al. \\cite{grusky} adopted the quaternion to study the time-dependent electromagnetic field in chiral media within the classical field theory. K. Morita \\cite{morita} developed the study of quaternion field theory. S. Demir \\cite{demir1}, M. Tanisli \\cite{tanisli}, and others applied the biquaternion and the hyperbolic quaternion to formulate directly the field equations of the classical field theory. Similarly, J. G. Winans \\cite{winans} described physical quantities with the quaternion. J. Edmonds \\cite{edmonds} utilized the quaternion to depict the wave equation and gravitational theory in curved space-time. F. A. Doria \\cite{doria} adopted the quaternion to study the gravitational theory. A. S. Rawat et al. \\cite{rawat} discussed the gravitational field equation with a quaternion treatment. V. Majernik \\cite{majernik} studied extended Maxwell-like gravitational field equations with the quaternion. Moreover, several scientists have applied octonion analysis to the electromagnetic theory. 
M. Gogberashvili \\cite{gogberashvili} used the octonion to discuss the electromagnetic theory. V. L. Mironov et al. \\cite{mironov} described the electromagnetic equations and related features with the algebra of octonions. O. P. S. Negi et al. \\cite{negi2} depicted Maxwell's equations by means of the octonion. S. Demir \\cite{demir2} expressed the gravitational field equations with the octonion. O. P. S. Negi \\cite{negi3}, P. S. Bisht \\cite{bisht}, B. S. Rajput \\cite{rajput}, H. Dehnen \\cite{dehnen}, S. Dangwal \\cite{dangwal}, and others adopted the quaternion and octonion to explore the dyon in the gravitational and electromagnetic fields. This paper focuses on the application of the complex quaternion to the study of the angular momentum, torque, and force in the electromagnetic and gravitational fields.\n\nIn the existing studies of field theory described with the quaternion, most researchers regard the quaternion as a simple substitute for the complex number or the vector in theoretical applications. Obviously, the existing quaternion studies have not yet achieved the expected outcome. This is a far cry from the expectation that an entirely new method should bring new conclusions. The application of the quaternion should enlarge the range of definition of some physical concepts and bring in new perspectives and inferences.\n\nAn ordered couple of quaternions composes the octonion (Table I). Conversely, the octonion can be separated into two parts, the quaternion and the $S$-quaternion (short for the second quaternion), and their coordinates can be chosen as complex numbers. In this paper, the quaternion space and the $S$-quaternion space are introduced into the field theory in order to describe the physical features of the gravitational and electromagnetic fields. 
One concludes that the quaternion space is suitable for describing gravitational features, while the $S$-quaternion space is proper for depicting electromagnetic features. This method can reproduce most conclusions about the electromagnetic and gravitational fields described with vectors.\n\nIn the electromagnetic theory described with the complex quaternion, one can deduce directly the Maxwell's equations of the classical electromagnetic theory \\cite{honig}, without the help of the current continuity equation. In this approach, substituting the $S$-quaternion for the quaternion, one still obtains the same conclusions. On the basis of this approach, the paper deduces directly the force, the current continuity equation, and related results, turning them into indispensable parts of the electromagnetic theory described with the $S$-quaternion. Meanwhile the paper applies the quaternion to the gravitational theory, deducing directly the force, torque, energy, and the mass continuity equation.\n\nBetween the field theory described by the quaternion or $S$-quaternion \\cite{weng} and that described by vector terminology, there are quite a number of inferences in common. However, artificially splitting the quaternion into two components, the vector part and the scalar part, must cause a few differences between the inferences of the two approaches, quaternion analysis and vector terminology. The focus is placed on the discrepancies between the electromagnetic features described by the quaternions and those described by the vector, including the continuity equation, force, torque, and energy.\n\nBy comparison with the classical field theory, the paper makes the following improvements. (1) It applies one single octonion space to simultaneously describe the physical quantities of the electromagnetic and gravitational fields. 
The electromagnetic and gravitational fields are no longer considered as two isolated parts. The electromagnetic field and the gravitational field can be combined into one united field in the theoretical description, depicting simultaneously the physical features of the two fields. (2) It combines like terms of the physical quantities. In existing research, the force (or torque, or energy) of the electromagnetic and gravitational fields possesses several different terms. The paper is able to unify the different terms of the force of the electromagnetic and gravitational fields into a single formula, and then to analyze the related physical features. (3) It deduces the continuity equations. Like the force, the continuity equation is also a direct deduction of the field theory described with the quaternions. The mass continuity equation and the current continuity equation are both vital deductions and inevitable components of the field theory described with the quaternions.\n\n\n\n\n\\section{\\label{sec:level1}Field Equations}\n\n\n\nAs stated above, the quaternion space $\\mathbb{H}_g$ is independent of the $S$-quaternion space $\\mathbb{H}_e$, but these two quaternion spaces, $\\mathbb{H}_g$ and $\\mathbb{H}_e$, can combine together to become one octonion space $\\mathbb{O}$. The quaternion space $\\mathbb{H}_g$ is suitable for describing the features of the gravitational field, while the $S$-quaternion space $\\mathbb{H}_e$ is proper for depicting the properties of the electromagnetic field. Further, the octonion space $\\mathbb{O}$ is able to represent simultaneously the physical features of these two fields.\n\nIn the quaternion space $\\mathbb{H}_g$ for the gravitational field, the basis vector is $\\mathbb{H}_g = ( \\emph{\\textbf{i}}_0, \\emph{\\textbf{i}}_1, \\emph{\\textbf{i}}_2, \\emph{\\textbf{i}}_3 )$, the radius vector is $\\mathbb{R}_g = i r_0 \\emph{\\textbf{i}}_0 + \\Sigma r_k \\emph{\\textbf{i}}_k$, and $\\textbf{r} = \\Sigma r_k \\emph{\\textbf{i}}_k$. 
The velocity is $\\mathbb{V}_g = i v_0 \\emph{\\textbf{i}}_0 + \\Sigma v_k \\emph{\\textbf{i}}_k$, and $\\textbf{v} = \\Sigma v_k \\emph{\\textbf{i}}_k $. The gravitational potential is $\\mathbb{A}_g = i a_0 \\emph{\\textbf{i}}_0 + \\Sigma a_k \\emph{\\textbf{i}}_k$, and $\\textbf{a} = \\Sigma a_k \\emph{\\textbf{i}}_k $. The gravitational strength is $\\mathbb{F}_g = f_0 \\emph{\\textbf{i}}_0 + \\Sigma f_k \\emph{\\textbf{i}}_k$, and $\\textbf{f} = \\Sigma f_k \\emph{\\textbf{i}}_k$. The gravitational source is $\\mathbb{S}_g = i s_0 \\emph{\\textbf{i}}_0 + \\Sigma s_k \\emph{\\textbf{i}}_k$, and $\\textbf{s} = \\Sigma s_k \\emph{\\textbf{i}}_k$.\nHerein the symbol $\\circ$ denotes the octonion multiplication. $r_j, ~v_j, ~a_j, ~s_j$, and $f_0$ are all real. $f_k$ is the complex number. $i$ is the imaginary unit. $\\emph{\\textbf{i}}_0 = 1$. $j = 0, 1, 2, 3$. $k = 1, 2, 3$.\n\nIn the $S$-quaternion space $\\mathbb{H}_e$ for the electromagnetic field, the basis vector is $\\mathbb{H}_e = ( \\emph{\\textbf{I}}_0, \\emph{\\textbf{I}}_1, \\emph{\\textbf{I}}_2, \\emph{\\textbf{I}}_3 )$, the radius vector is $\\mathbb{R}_e = i R_0 \\emph{\\textbf{I}}_0 + \\Sigma R_k \\emph{\\textbf{I}}_k$, with $\\textbf{R} = \\Sigma R_k \\emph{\\textbf{I}}_k$, and $\\textbf{R}_0 = R_0 \\emph{\\textbf{I}}_0$. The velocity is $\\mathbb{V}_e = i V_0 \\emph{\\textbf{I}}_0 + \\Sigma V_k \\emph{\\textbf{I}}_k $, with $\\textbf{V} = \\Sigma V_k \\emph{\\textbf{I}}_k$, and $\\textbf{V}_0 = V_0 \\emph{\\textbf{I}}_0$. The electromagnetic potential is $\\mathbb{A}_e = i A_0 \\emph{\\textbf{I}}_0 + \\Sigma A_k \\emph{\\textbf{I}}_k $, with $\\textbf{A} = \\Sigma A_k \\emph{\\textbf{I}}_k $, and $\\textbf{A}_0 = A_0 \\emph{\\textbf{I}}_0$. The electromagnetic source is $\\mathbb{S}_e = i S_0 \\emph{\\textbf{I}}_0 + \\Sigma S_k \\emph{\\textbf{I}}_k$, with $\\textbf{S} = \\Sigma S_k \\emph{\\textbf{I}}_k$, and $\\textbf{S}_0 = S_0 \\emph{\\textbf{I}}_0$. 
The electromagnetic strength is $\\mathbb{F}_e = F_0 \\emph{\\textbf{I}}_0 + \\Sigma F_k \\emph{\\textbf{I}}_k$, with $\\textbf{F}_0 = F_0 \\emph{\\textbf{I}}_0 $, and $\\textbf{F} = \\Sigma F_k \\emph{\\textbf{I}}_k$. Herein $\\mathbb{H}_e = \\mathbb{H}_g \\circ \\emph{\\textbf{I}}_0$. $R_j, ~V_j, ~A_j, ~S_j$, and $F_0$ are all real. $F_k$ is the complex number.\n\nThese two quaternion spaces, $\\mathbb{H}_g$ and $\\mathbb{H}_e$, may compose one octonion space, $\\mathbb{O} = \\mathbb{H}_g + \\mathbb{H}_e$. In the octonion space $\\mathbb{O}$ for the electromagnetic and gravitational fields, the octonion radius vector is $\\mathbb{R} = \\mathbb{R}_g + k_{eg} \\mathbb{R}_e$, the octonion velocity is $\\mathbb{V} = \\mathbb{V}_g + k_{eg} \\mathbb{V}_e$, with $k_{eg}$ being one coefficient. The octonion field potential is $\\mathbb{A} = \\mathbb{A}_g + k_{eg} \\mathbb{A}_e$, the octonion field strength is $\\mathbb{F} = \\mathbb{F}_g + k_{eg} \\mathbb{F}_e$. From here on out, some physics quantities are extended to the octonion functions, according to the characteristics of octonion. Apparently $\\mathbb{V}$, $\\mathbb{A}$, and their differential coefficients are all octonion functions of $\\mathbb{R}$.\n\nThe octonion definition of field strength, $\\mathbb{F} = \\square \\circ \\mathbb{A} $ , can be rewritten as,\n\\begin{equation}\n\\mathbb{F}_g + k_{eg} \\mathbb{F}_e = \\square \\circ ( \\mathbb{A}_g + k_{eg} \\mathbb{A}_e ) ~,\n\\end{equation}\nwhere there are $\\mathbb{F}_g = \\square \\circ \\mathbb{A}_g$, and $\\mathbb{F}_e = \\square \\circ \\mathbb{A}_e$, according to the coefficient $k_{eg}$ and basis vectors, $\\mathbb{H}_g$ and $\\mathbb{H}_e$. The quaternion operator is $\\square = \\emph{i} \\emph{\\textbf{i}}_0 \\partial_0 + \\Sigma \\emph{\\textbf{i}}_k \\partial_k$. $ \\nabla = \\Sigma \\emph{\\textbf{i}}_k \\partial_k$, $ \\partial_j = \\partial \/ \\partial r_j$. $v_0 = \\partial r_0 \/ \\partial t$. 
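The basis products underlying the operator $\square$ and the multiplication $\circ$ follow the standard Hamilton rules for the quaternion basis $(\emph{\textbf{i}}_0, \emph{\textbf{i}}_1, \emph{\textbf{i}}_2, \emph{\textbf{i}}_3)$. A minimal numeric sketch (the coordinate-tuple representation is an illustrative assumption; the $S$-quaternion basis obeys analogous rules):

```python
def qmul(p, q):
    # Hamilton product of quaternions written as coordinate tuples
    # (p0, p1, p2, p3) on the basis (i0, i1, i2, i3), with i0 = 1
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

i1, i2, i3 = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

For example, $\emph{\textbf{i}}_1 \circ \emph{\textbf{i}}_2 = \emph{\textbf{i}}_3$ while $\emph{\textbf{i}}_2 \circ \emph{\textbf{i}}_1 = -\emph{\textbf{i}}_3$, which is the noncommutativity that distinguishes these products from the ordinary vector dot and cross products.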
A comparison with the Special Theory of Relativity shows that $v_0$ is equal to the speed of light $c$, and $t$ is the time.\n\nFor the sake of convenience, the paper adopts the gauge condition (Table II), $- f_0 = \\partial_0 a_0 - \\nabla \\cdot \\textbf{a} = 0$. The gravitational strength can then be written as $\\textbf{f} = \\emph{i} \\textbf{g} \/ v_0 + \\textbf{b}$. One component of the gravitational strength is the gravitational acceleration, $\\textbf{g} \/ v_0 = \\partial_0 \\textbf{a} + \\nabla a_0$. The other is $\\textbf{b} = \\nabla \\times \\textbf{a}$, which is similar to the magnetic flux density. Similarly, the paper adopts the gauge condition, $- \\textbf{F}_0 = \\partial_0 \\textbf{A}_0 - \\nabla \\cdot \\textbf{A} = 0$. Therefore the electromagnetic strength is $\\textbf{F} = \\emph{i} \\textbf{E} \/ v_0 + \\textbf{B}$. Herein the electric field intensity is $\\textbf{E} \/ v_0 = \\partial_0 \\textbf{A} + \\nabla \\circ \\textbf{A}_0$, and the magnetic flux density is $\\textbf{B} = \\nabla \\times \\textbf{A}$.\n\nThe octonion field source $\\mathbb{S}$ of the electromagnetic and gravitational fields is defined as\n\\begin{eqnarray}\n\\mu \\mathbb{S} && = - ( \\emph{i} \\mathbb{F} \/ v_0 + \\square )^* \\circ \\mathbb{F}\n\\nonumber \\\\\n&& = \\mu_g \\mathbb{S}_g + k_{eg} \\mu_e \\mathbb{S}_e - ( \\emph{i} \\mathbb{F} \/ v_0 )^* \\circ \\mathbb{F} ~,\n\\end{eqnarray}\nwhere $\\mu$, $\\mu_g$, and $\\mu_e$ are coefficients, with $\\mu_g < 0$ and $\\mu_e > 0$; $*$ denotes the octonion conjugation.\nIn the case of one single particle, a comparison with the classical field theory reveals that $\\mathbb{S}_g = m \\mathbb{V}_g$ and $\\mathbb{S}_e = q \\mathbb{V}_e$. 
$m$ is the mass density, while $q$ is the density of electric charge.\n\nAccording to the coefficient $k_{eg}$ and the basis vectors, $\\mathbb{H}_g$ and $\\mathbb{H}_e$, the octonion definition of the field source can be separated into two parts,\n\\begin{eqnarray}\n&& \\mu_g \\mathbb{S}_g = - \\square^* \\circ \\mathbb{F}_g ~,\n\\\\\n&& \\mu_e \\mathbb{S}_e = - \\square^* \\circ \\mathbb{F}_e ~.\n\\end{eqnarray}\n\nObviously Eq.(3) is the definition of the gravitational source in the quaternion space $\\mathbb{H}_g$. Expanding Eq.(3) yields the gravitational field equations. When $\\textbf{b} = 0$ and $\\textbf{a} = 0$, one of the gravitational field equations degenerates into Newton's law of universal gravitation in the classical gravitational theory. Meanwhile Eq.(4) is the definition of the electromagnetic source in the $S$-quaternion space $\\mathbb{H}_e$. Expanding Eq.(4) yields the Maxwell's equations in the classical electromagnetic theory (Appendix A).\n\nBy analogy with the definition of the complex coordinate system, one can define the octonion coordinate, which involves the quaternion and the $S$-quaternion simultaneously. In the octonion coordinate system, an octonion physical quantity can be defined as $ \\{ ( \\emph{i} c_0 + \\emph{i} d_0 \\textbf{\\emph{I}}_0 ) \\circ \\emph{\\textbf{i}}_0 + \\Sigma ( c_k + d_k \\textbf{\\emph{I}}_0^* ) \\circ \\emph{\\textbf{i}}_k \\} $. It means that there are the quaternion coordinate $c_k$ and the $S$-quaternion coordinate $d_k \\emph{\\textbf{I}}_0^*$ for the basis vector $\\emph{\\textbf{i}}_k$, while the quaternion coordinate $c_0$ and the $S$-quaternion coordinate $d_0 \\emph{\\textbf{I}}_0$ for the basis vector $\\emph{\\textbf{i}}_0$. Herein $c_j$ and $d_j$ are all real.\n\n\n\n\n\n\n\n\\section{\\label{sec:level1}Angular momentum}\n\nIn the octonion space, one can define the linear momentum from the field source, and then define the angular momentum and the torque accordingly. 
The octonion linear momentum density, $\\mathbb{P} = \\mathbb{P}_g + k_{eg} \\mathbb{P}_e$, is defined from the octonion field source $\\mathbb{S}$ , and is written as,\n\\begin{equation}\n\\mathbb{P} = \\mu \\mathbb{S} \/ \\mu_g ~,\n\\end{equation}\nwhere $\\mathbb{P}_g = \\{ \\mu_g \\mathbb{S}_g - ( \\emph{i} \\mathbb{F} \/ v_0 )^* \\circ \\mathbb{F} \\} \/ \\mu_g$, and $\\mathbb{P}_e = \\mu_e \\mathbb{S}_e \/ \\mu_g$. $\\mathbb{P}_g = i p_0 + \\textbf{p}$. $\\textbf{p} = \\Sigma p_k \\emph{\\textbf{i}}_k$. $\\mathbb{P}_e = i \\textbf{P}_0 + \\textbf{P}$. $\\textbf{P} = \\Sigma P_k \\emph{\\textbf{I}}_k$. $\\textbf{P}_0 = P_0 \\emph{\\textbf{I}}_0$. In general, the contribution of the tiny term, $\\{ \\emph{i} \\mathbb{F}^* \\circ \\mathbb{F} \/ (\\mu_g v_0 )\\} $, in the above could be neglected.\n\nIn the octonion space, the compounding radius vector is $\\mathbb{R}^+ = \\mathbb{R} + k_{rx} \\mathbb{X}$, with $k_{rx}$ being the coefficient. The physics quantity $\\mathbb{X}$ is the integral function of field potential $\\mathbb{A}$, that is, $\\mathbb{A} = i \\square^\\times \\circ \\mathbb{X}$. From the compounding radius vector $\\mathbb{R}^+$ and the octonion linear momentum density $\\mathbb{P}$, the octonion angular momentum density, $\\mathbb{L} = (\\mathbb{R}^+ )^\\times \\circ \\mathbb{P}$, is defined as\n\\begin{eqnarray}\n\\mathbb{L} = (\\mathbb{R}_g^+ + k_{eg} \\mathbb{R}_e^+ )^\\times \\circ (\\mathbb{P}_g + k_{eg} \\mathbb{P}_e )~,\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\mathbb{R}^+ = \\emph{i} r_0^+ + \\textbf{r}^+ + k_{eg} (\\emph{i} \\textbf{R}_0^+ + \\textbf{R}^+)~,\n\\end{eqnarray}\nwhere $\\mathbb{R}_g^+ = \\mathbb{R}_g + k_{rx} \\mathbb{X}_g$. $\\mathbb{R}_e^+ = \\mathbb{R}_e + k_{rx} \\mathbb{X}_e$. $r_j^+ = r_j + k_{rx} x_j$. $R_j^+ = R_j + k_{rx} X_j$. $\\textbf{r}^+ = \\Sigma r_k^+ \\emph{\\textbf{i}}_k$. $\\textbf{R}_0^+ = R_0^+ \\emph{\\textbf{I}}_0$, $\\textbf{R}^+ = \\Sigma R_k^+ \\emph{\\textbf{I}}_k$. 
$\\mathbb{X} = \\mathbb{X}_g + k_{eg} \\mathbb{X}_e$. $\\mathbb{X}_g = \\emph{i} x_0 + \\Sigma x_k \\emph{\\textbf{i}}_k$, $\\mathbb{X}_e = \\emph{i} X_0 \\emph{\\textbf{I}}_0 + \\Sigma X_k \\emph{\\textbf{I}}_k$. $x_j$ and $X_j$ are all real. The superscript $\\times$ denotes the complex conjugation.\n\nThe above can be expanded further into\n\\begin{eqnarray}\n\\mathbb{L} = && \\{ (\\mathbb{R}_g^+)^\\times \\circ \\mathbb{P}_g + k_{eg}^2 (\\mathbb{R}_e^+)^\\times \\circ \\mathbb{P}_e \\}\n\\nonumber\n\\\\\n&& + k_{eg} \\{ (\\mathbb{R}_g^+)^\\times \\circ \\mathbb{P}_e + (\\mathbb{R}_e^+)^\\times \\circ \\mathbb{P}_g \\} ~,\n\\end{eqnarray}\nwhere the part, $\\mathbb{L}_g = (\\mathbb{R}_g^+)^\\times \\circ \\mathbb{P}_g + k_{eg}^2 (\\mathbb{R}_e^+)^\\times \\circ \\mathbb{P}_e $, situates on the quaternion space $\\mathbb{H}_g$, while the part, $\\mathbb{L}_e = (\\mathbb{R}_g^+)^\\times \\circ \\mathbb{P}_e + (\\mathbb{R}_e^+)^\\times \\circ \\mathbb{P}_g $, stays on the $S$-quaternion space $\\mathbb{H}_e$.\n\nIn the quaternion space $\\mathbb{H}_g$, the component $\\mathbb{L}_g$ can be expressed as (Appendix B),\n\\begin{eqnarray}\n\\mathbb{L}_g = && r_0^+ p_0 + \\textbf{r}^+ \\cdot \\textbf{p} + k_{eg}^2 ( \\textbf{R}_0^+ \\circ \\textbf{P}_0 + \\textbf{R}^+ \\cdot \\textbf{P} )\n\\nonumber\n\\\\\n&&\n+ \\emph{i} \\{ p_0 \\textbf{r}^+ - r_0^+ \\textbf{p} + k_{eg}^2 ( \\textbf{R}^+ \\circ \\textbf{P}_0 - \\textbf{R}_0^+ \\circ \\textbf{P} ) \\}\n\\nonumber \\\\\n&& + \\textbf{r}^+ \\times \\textbf{p} + k_{eg}^2 \\textbf{R}^+ \\times \\textbf{P} ~,\n\\end{eqnarray}\nwhere $\\mathbb{L}_g = L_{10} + \\emph{i} \\textbf{L}_1^i + \\textbf{L}_1$. $\\textbf{L}_1 = \\textbf{r}^+ \\times \\textbf{p} + k_{eg}^2 \\textbf{R}^+ \\times \\textbf{P} $ includes the angular momentum density. $L_{10}$ covers the dot product of the radius vector and the linear momentum density. 
$\\textbf{L}_1 = \\Sigma L_{1k} \\emph{\\textbf{i}}_k$, $\\textbf{L}_1^i = \\Sigma L_{1k}^i \\emph{\\textbf{i}}_k$. $L_{1j}$ and $L_{1j}^i$ are all real.\n\nIn the $S$-quaternion space $\\mathbb{H}_e$, the component $\\mathbb{L}_e$ can be written as,\n\\begin{eqnarray}\n\\mathbb{L}_e = && r_0^+ \\textbf{P}_0 + \\textbf{r}^+ \\cdot \\textbf{P} + p_0 \\textbf{R}_0^+ + \\textbf{R}^+ \\cdot \\textbf{p}\n\\nonumber\n\\\\\n&&\n+ \\emph{i} ( - r_0^+ \\textbf{P} + \\textbf{r}^+ \\circ \\textbf{P}_0 - \\textbf{R}_0^+ \\circ \\textbf{p} + p_0 \\textbf{R}^+ )\n\\nonumber\n\\\\\n&&\n+ \\textbf{r}^+ \\times \\textbf{P} + \\textbf{R}^+ \\times \\textbf{p} ~,\n\\end{eqnarray}\nwhere $\\mathbb{L}_e = \\textbf{L}_{20} + \\emph{i} \\textbf{L}_2^i + \\textbf{L}_2$. $\\textbf{L}_2^i$ covers the electric dipole moment etc. $\\textbf{L}_2 = \\textbf{r}^+ \\times \\textbf{P} + \\textbf{R}^+ \\times \\textbf{p}$ includes the magnetic dipole moment etc. $\\textbf{L}_{20}$ is similar to $L_{10}$. $\\textbf{L}_{20} = L_{20} \\emph{\\textbf{I}}_0$, $\\textbf{L}_2 = \\Sigma L_{2k} \\emph{\\textbf{I}}_k$, $\\textbf{L}_2^i = \\Sigma L_{2k}^i \\emph{\\textbf{I}}_k$. $L_{2j}$ and $L_{2j}^i$ are all real.\n\nIt means that the definition of the octonion angular momentum is able to contain more physics contents, which were considered to be independent of each other in the past, such as the angular momentum and the dipole moment etc. The angular momentum includes the orbital angular momentum etc, while the dipole moment covers the magnetic dipole moment and the electric dipole moment etc.\n\n\n\n\n\n\n\n\n\n\n\\section{\\label{sec:level1}Octonion Torque}\n\nBy making use of the octonion angular momentum, some existing energy terms can be combined into a single definition of energy in the octonion space. 
From the octonion angular momentum density $\\mathbb{L}$, the octonion torque density $\\mathbb{W}$ can be defined as follows,\n\\begin{equation}\n\\mathbb{W} = - v_0 ( \\emph{i} \\mathbb{F} \/ v_0 + \\square ) \\circ \\mathbb{L} ~,\n\\end{equation}\nwhere $\\mathbb{L} = \\mathbb{L}_g + k_{eg} \\mathbb{L}_e$. $\\mathbb{W} = \\mathbb{W}_g + k_{eg} \\mathbb{W}_e$.\n\nThe above can be expanded into (Appendix C),\n\\begin{eqnarray}\n\\mathbb{W} = && - ( \\emph{i} \\mathbb{F}_g \\circ \\mathbb{L}_g + \\emph{i} k_{eg}^2 \\mathbb{F}_e \\circ \\mathbb{L}_e + v_0 \\square \\circ \\mathbb{L}_g )\n\\nonumber\n\\\\\n&&\n- k_{eg} ( \\emph{i} \\mathbb{F}_g \\circ \\mathbb{L}_e + \\emph{i} \\mathbb{F}_e \\circ \\mathbb{L}_g + v_0 \\square \\circ \\mathbb{L}_e )~,\n\\end{eqnarray}\nwhere the component, $\\mathbb{W}_g = - ( \\emph{i} \\mathbb{F}_g \\circ \\mathbb{L}_g + \\emph{i} k_{eg}^2 \\mathbb{F}_e \\circ \\mathbb{L}_e + v_0 \\square \\circ \\mathbb{L}_g ) $, situates on the quaternion space $\\mathbb{H}_g$, while the component, $\\mathbb{W}_e = - ( \\emph{i} \\mathbb{F}_g \\circ \\mathbb{L}_e + \\emph{i} \\mathbb{F}_e \\circ \\mathbb{L}_g + v_0 \\square \\circ \\mathbb{L}_e ) $, stays on the $S$-quaternion space $\\mathbb{H}_e$.\n\n\n\n\n\n\\subsection{\\label{sec:level2}Component $\\mathbb{W}_g$}\n\nIn the quaternion space $\\mathbb{H}_g$, the component $\\mathbb{W}_g$ can be expressed as,\n\\begin{eqnarray}\n\\mathbb{W}_g = &&\n\\{\n( \\textbf{g} L_{10} \/ v_0 + \\textbf{g} \\times \\textbf{L}_1 \/ v_0 + \\textbf{b} \\times \\textbf{L}_1^i )\n\\nonumber\n\\\\\n&&\n- v_0 ( - \\partial_0 \\textbf{L}_1^i + \\nabla L_{10} + \\nabla \\times \\textbf{L}_1 )\n\\nonumber \\\\\n&& + k_{eg}^2 ( \\textbf{E} \\circ \\textbf{L}_{20} \/ v_0 + \\textbf{E} \\times \\textbf{L}_2 \/ v_0 + \\textbf{B} \\times \\textbf{L}_2^i )\n\\}\n\\nonumber\n\\\\\n&& + \\emph{i} \\{ ( \\textbf{g} \\cdot \\textbf{L}_1^i \/ v_0 - \\textbf{b} \\cdot \\textbf{L}_1 )\n- v_0 ( \\partial_0 L_{10} + \\nabla \\cdot 
\\textbf{L}_1^i)\n\\nonumber\n\\\\\n&&\n+ k_{eg}^2 ( \\textbf{E} \\cdot \\textbf{L}_2^i \/ v_0 - \\textbf{B} \\cdot \\textbf{L}_2 )\n\\}\n\\nonumber\n\\\\\n&& + \\emph{i} \\{ ( \\textbf{g} \\times \\textbf{L}_1^i \/ v_0 - L_{10} \\textbf{b} - \\textbf{b} \\times \\textbf{L}_1 )\n\\nonumber \\\\\n&&\n- v_0 ( \\partial_0 \\textbf{L}_1 + \\nabla \\times \\textbf{L}_1^i )\n\\nonumber \\\\\n&& + k_{eg}^2 ( \\textbf{E} \\times \\textbf{L}_2^i \/ v_0 - \\textbf{B} \\circ \\textbf{L}_{20} - \\textbf{B} \\times \\textbf{L}_2 )\n\\}\n\\nonumber\n\\\\\n&& + \\{ ( \\textbf{b} \\cdot \\textbf{L}_1^i + \\textbf{g} \\cdot \\textbf{L}_1 \/ v_0 )\n- v_0 ( \\nabla \\cdot \\textbf{L}_1 )\n\\nonumber\n\\\\\n&&\n+ k_{eg}^2 ( \\textbf{B} \\cdot \\textbf{L}_2^i + \\textbf{E} \\cdot \\textbf{L}_2 \/ v_0 ) \\}~,\n\\end{eqnarray}\nwhere $\\mathbb{W}_g = \\emph{i} W_{10}^i + W_{10} + \\emph{i} \\textbf{W}_1^i + \\textbf{W}_1$. $W_{10}^i$ is the energy density. The energy includes the proper energy, kinetic energy, potential energy, work, and the interacting energy between the electric field intensity and the electric dipole moment, and the interacting energy between the magnetic flux density and the magnetic dipole moment. $-\\textbf{W}_1^i$ is the torque density, covering the torque density produced by the applied force etc. $\\textbf{W}_1$ is the curl of the angular momentum density, while $W_{10}$ is the divergence of the angular momentum density. $\\textbf{W}_1 = \\Sigma W_{1k} \\emph{\\textbf{i}}_k$, $\\textbf{W}_1^i = \\Sigma W_{1k}^i \\emph{\\textbf{i}}_k$. $W_{1j}$ and $W_{1j}^i$ are all real.\n\nThe expression of the energy $W_{10}^i$ is associated with the definition of the field potential. The gravitational potential, $\\mathbb{A}_g = \\emph{i} a_0 + \\textbf{a}$, is defined as, $\\mathbb{A}_g = \\emph{i} \\square^\\times \\circ \\mathbb{X}_g$, with $a_0 = \\partial_0 x_0 + \\nabla \\cdot \\textbf{x}$, and $\\textbf{a} = \\partial_0 \\textbf{x} - \\nabla x_0$. 
The electromagnetic potential, $\\mathbb{A}_e = \\emph{i} \\textbf{A}_0 + \\textbf{A}$, is defined as, $\\mathbb{A}_e = \\emph{i} \\square^\\times \\circ \\mathbb{X}_e$, with $\\textbf{A}_0 = \\partial_0 \\textbf{X}_0 + \\nabla \\cdot \\textbf{X}$, and $\\textbf{A} = \\partial_0 \\textbf{X} - \\nabla \\circ \\textbf{X}_0$. The gauge conditions are chosen as, $\\nabla \\times \\textbf{x} = 0$, and $\\nabla \\times \\textbf{X} = 0$.\n\nThe energy density $W_{10}^i$ is written as\n\\begin{eqnarray}\nW_{10}^i = && ( \\textbf{g} \\cdot \\textbf{L}_1^i \/ v_0 - \\textbf{b} \\cdot \\textbf{L}_1 )\n\\nonumber\n\\\\\n&&\n- v_0 ( \\partial_0 L_{10} + \\nabla \\cdot \\textbf{L}_1^i)\n\\nonumber\n\\\\\n&&\n+ k_{eg}^2 ( \\textbf{E} \\cdot \\textbf{L}_2^i \/ v_0 - \\textbf{B} \\cdot \\textbf{L}_2 )\n\\nonumber \\\\\n\\approx\n&& - \\{ v_0 p_0 + v_0 p_0 ( \\nabla \\cdot \\textbf{r} ) \\}\n\\nonumber \\\\\n&&\n- \\{ v_0 ( \\partial_0 \\textbf{r}) \\cdot \\textbf{p} + v_0 \\textbf{r} \\cdot \\partial_0 \\textbf{p} \\}\n\\nonumber \\\\\n&&\n- \\{ ( p_0 a_0 + \\textbf{a} \\cdot \\textbf{p})\n\\nonumber \\\\\n&&\n+ k_{eg}^2 ( \\textbf{A}_0 \\circ \\textbf{P}_0 + \\textbf{A} \\cdot \\textbf{P} ) \\}\n\\nonumber\n\\\\\n&&\n+ k_{eg}^2 \\{ (\\textbf{E} \/ v_0 ) \\cdot ( \\textbf{r} \\circ \\textbf{P}_0 )\n\\nonumber \\\\\n&&\n- \\textbf{B} \\cdot ( \\textbf{r} \\times \\textbf{P} ) \\}\n\\nonumber \\\\\n&&\n+ \\{ ( \\textbf{g} \/ v_0 ) \\cdot ( p_0 \\textbf{r} ) - \\textbf{b} \\cdot \\textbf{L}_1 \\}\n~,\n\\end{eqnarray}\nwhere $- \\{ v_0 p_0 + v_0 p_0 ( \\nabla \\cdot \\textbf{r} ) \\} = k_p v_0 p_0$, covers the proper energy $v_0 s_0$ etc. $- \\{ v_0 ( \\partial_0 \\textbf{r}) \\cdot \\textbf{p} + v_0 \\textbf{r} \\cdot \\partial_0 \\textbf{p} \\}$ is the sum of the kinetic energy and the work produced by the applied force. 
$- \\{ ( p_0 a_0 + \\textbf{a} \\cdot \\textbf{p}) + k_{eg}^2 ( \\textbf{A}_0 \\circ \\textbf{P}_0 + \\textbf{A} \\cdot \\textbf{P} ) \\}$ is the potential energy of the gravitational and electromagnetic fields. $\\{ k_{eg}^2 (\\textbf{E} \/ v_0 ) \\cdot ( \\textbf{r} \\circ \\textbf{P}_0 ) \\}$ is the interacting energy between the electric field intensity and the electric dipole moment. $\\{ - k_{eg}^2 \\textbf{B} \\cdot ( \\textbf{r} \\times \\textbf{P} ) \\}$ is the interacting energy between the magnetic flux density and the magnetic dipole moment. $( - \\textbf{b} \\cdot \\textbf{L}_1 )$ can be considered as the interacting energy between the gravitational field and the angular momentum.\n$k_p = (k - 1)$ is the coefficient, with $k$ being the dimension of the vector $\\textbf{r}$. For the case of $k = 3$, $( W_{10}^i \/ 2 )$ is the conventional energy in the classical field theory. And $k_{rx} = 1 \/ v_0$ is chosen so as to be consistent with the potential energy in the classical gravitational and electromagnetic fields.\n\nThe above states that, in case the compounding radius vector $\\mathbb{R}^+$ neglects the integral function $\\mathbb{X}$, the energy definition in the quaternion space $\\mathbb{H}_g$ will be incomplete. 
Further, it will cause the energy definition to exclude the potential energy of the gravitational and electromagnetic fields etc.\n\nSimilarly, the torque density $-\\textbf{W}_1^i$ is written as,\n\\begin{eqnarray}\n\\textbf{W}_1^i = && ( \\textbf{g} \\times \\textbf{L}_1^i \/ v_0 - L_{10} \\textbf{b} - \\textbf{b} \\times \\textbf{L}_1 )\n\\nonumber \\\\\n&&\n- v_0 ( \\partial_0 \\textbf{L}_1 + \\nabla \\times \\textbf{L}_1^i )\n\\nonumber \\\\\n&& + k_{eg}^2 ( \\textbf{E} \\times \\textbf{L}_2^i \/ v_0 - \\textbf{B} \\circ \\textbf{L}_{20} - \\textbf{B} \\times \\textbf{L}_2 )\n\\nonumber \\\\\n\\approx && p_0 \\textbf{g} \\times \\textbf{r} \/ v_0 - v_0 \\textbf{r} \\times \\partial_0 \\textbf{p}\n\\nonumber \\\\\n&& + k_{eg}^2 \\{ \\textbf{E} \\times ( \\textbf{r} \\circ \\textbf{P}_0 ) \/ v_0 - \\textbf{B} \\times ( \\textbf{r} \\times \\textbf{P} ) \\}\n\\nonumber \\\\\n&&\n- ( \\textbf{r} \\cdot \\textbf{p} ) \\textbf{b} - \\textbf{b} \\times ( \\textbf{r} \\times \\textbf{p} ) + \\textbf{a} \\times \\textbf{p} ~,\n\\end{eqnarray}\nwhere $( - p_0 \\textbf{g} \\times \\textbf{r} \/ v_0 )$ is the torque term produced by gravity. And $( - v_0 \\textbf{r} \\times \\partial_0 \\textbf{p} )$ is the torque term produced by the inertial force. $\\{ - k_{eg}^2 \\textbf{E} \\times ( \\textbf{r} \\circ \\textbf{P}_0 ) \/ v_0 \\}$ is the torque term caused by the electric field intensity and the electric dipole moment. 
$\\{ k_{eg}^2 \\textbf{B} \\times ( \\textbf{r} \\times \\textbf{P} ) \\}$ is the torque term caused by the magnetic flux density and the magnetic dipole moment.\n\n\n\n\n\n\n\n\\begin{table}[h]\n\\caption{The multiplication table of octonion.}\n\\label{tab:table3}\n\\centering\n\\begin{ruledtabular}\n\\begin{tabular}{ccccccccc}\n$ $ & $1$ & $\\emph{\\textbf{i}}_1$ &\n$\\emph{\\textbf{i}}_2$ & $\\emph{\\textbf{i}}_3$ &\n$\\emph{\\textbf{I}}_0$ & $\\emph{\\textbf{I}}_1$\n& $\\emph{\\textbf{I}}_2$ & $\\emph{\\textbf{I}}_3$ \\\\\n\\hline $1$ & $1$ & $\\emph{\\textbf{i}}_1$ & $\\emph{\\textbf{i}}_2$ &\n$\\emph{\\textbf{i}}_3$ & $\\emph{\\textbf{I}}_0$ &\n$\\emph{\\textbf{I}}_1$\n& $\\emph{\\textbf{I}}_2$ & $\\emph{\\textbf{I}}_3$ \\\\\n$\\emph{\\textbf{i}}_1$ & $\\emph{\\textbf{i}}_1$ & $-1$ &\n$\\emph{\\textbf{i}}_3$ & $-\\emph{\\textbf{i}}_2$ &\n$\\emph{\\textbf{I}}_1$\n& $-\\emph{\\textbf{I}}_0$ & $-\\emph{\\textbf{I}}_3$ & $\\emph{\\textbf{I}}_2$ \\\\\n$\\emph{\\textbf{i}}_2$ & $\\emph{\\textbf{i}}_2$ &\n$-\\emph{\\textbf{i}}_3$ & $-1$ & $\\emph{\\textbf{i}}_1$ &\n$\\emph{\\textbf{I}}_2$ & $\\emph{\\textbf{I}}_3$\n& $-\\emph{\\textbf{I}}_0$ & $-\\emph{\\textbf{I}}_1$ \\\\\n$\\emph{\\textbf{i}}_3$ & $\\emph{\\textbf{i}}_3$ &\n$\\emph{\\textbf{i}}_2$ & $-\\emph{\\textbf{i}}_1$ & $-1$ &\n$\\emph{\\textbf{I}}_3$ & $-\\emph{\\textbf{I}}_2$\n& $\\emph{\\textbf{I}}_1$ & $-\\emph{\\textbf{I}}_0$ \\\\\n\n$\\emph{\\textbf{I}}_0$ & $\\emph{\\textbf{I}}_0$ &\n$-\\emph{\\textbf{I}}_1$ & $-\\emph{\\textbf{I}}_2$ &\n$-\\emph{\\textbf{I}}_3$ & $-1$ & $\\emph{\\textbf{i}}_1$\n& $\\emph{\\textbf{i}}_2$ & $\\emph{\\textbf{i}}_3$ \\\\\n$\\emph{\\textbf{I}}_1$ & $\\emph{\\textbf{I}}_1$ &\n$\\emph{\\textbf{I}}_0$ & $-\\emph{\\textbf{I}}_3$ &\n$\\emph{\\textbf{I}}_2$ & $-\\emph{\\textbf{i}}_1$\n& $-1$ & $-\\emph{\\textbf{i}}_3$ & $\\emph{\\textbf{i}}_2$ \\\\\n$\\emph{\\textbf{I}}_2$ & $\\emph{\\textbf{I}}_2$ &\n$\\emph{\\textbf{I}}_3$ & $\\emph{\\textbf{I}}_0$ &\n$-\\emph{\\textbf{I}}_1$ 
& $-\\emph{\\textbf{i}}_2$\n& $\\emph{\\textbf{i}}_3$ & $-1$ & $-\\emph{\\textbf{i}}_1$ \\\\\n$\\emph{\\textbf{I}}_3$ & $\\emph{\\textbf{I}}_3$ &\n$-\\emph{\\textbf{I}}_2$ & $\\emph{\\textbf{I}}_1$ &\n$\\emph{\\textbf{I}}_0$ & $-\\emph{\\textbf{i}}_3$\n& $-\\emph{\\textbf{i}}_2$ & $\\emph{\\textbf{i}}_1$ & $-1$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n\n\n\n\n\n\\subsection{\\label{sec:level2}Component $\\mathbb{W}_e$}\n\nIn the $S$-quaternion space $\\mathbb{H}_e$, the component $\\mathbb{W}_e$ can be written as,\n\\begin{eqnarray}\n\\mathbb{W}_e = \\emph{i} \\textbf{W}_{20}^i + \\textbf{W}_{20} + \\emph{i} \\textbf{W}_2^i + \\textbf{W}_2~,\n\\end{eqnarray}\nwhere $\\textbf{W}_{20}$ covers the divergence of magnetic dipole moment, while $\\textbf{W}_2$ includes the curl of magnetic dipole moment and the derivative of electric dipole moment. $\\textbf{W}_{20} = W_{20} \\emph{\\textbf{I}}_0$, $\\textbf{W}_{20}^i = W_{20}^i \\emph{\\textbf{I}}_0$. $\\textbf{W}_2 = \\Sigma W_{2k} \\emph{\\textbf{I}}_k$, $\\textbf{W}_2^i = \\Sigma W_{2k}^i \\emph{\\textbf{I}}_k$. $W_{2j}$ and $W_{2j}^i$ are all real. 
That is,\n\\begin{eqnarray}\n\\textbf{W}_{20}^i = && ( \\textbf{g} \\cdot \\textbf{L}_2^i \/ v_0 - \\textbf{b} \\cdot \\textbf{L}_2 )\n \\nonumber \\\\\n &&\n + ( \\textbf{E} \\cdot \\textbf{L}_1^i \/ v_0 - \\textbf{B} \\cdot \\textbf{L}_1 )\n \\nonumber \\\\\n && - v_0 ( \\partial_0 \\textbf{L}_{20} + \\nabla \\cdot \\textbf{L}_2^i ) ~,\n\\\\\n\\textbf{W}_{20} = && ( \\textbf{b} \\cdot \\textbf{L}_2^i + \\textbf{g} \\cdot \\textbf{L}_2 \/ v_0 ) - v_0 ( \\nabla \\cdot \\textbf{L}_2 )\n \\nonumber \\\\\n && + ( \\textbf{B} \\cdot \\textbf{L}_1^i + \\textbf{E} \\cdot \\textbf{L}_1 \/ v_0 ) ~,\n\\\\\n\\textbf{W}_2^i = && ( \\textbf{g} \\times \\textbf{L}_2^i \/ v_0 - \\textbf{b} \\circ \\textbf{L}_{20} - \\textbf{b} \\times \\textbf{L}_2 )\n \\nonumber \\\\\n && + ( \\textbf{E} \\times \\textbf{L}_1^i \/ v_0 - L_{10} \\textbf{B} - \\textbf{B} \\times \\textbf{L}_1 )\n \\nonumber \\\\\n && - v_0 ( \\partial_0 \\textbf{L}_2 + \\nabla \\times \\textbf{L}_2^i ) ~,\n\\\\\n\\textbf{W}_2 = && ( \\textbf{g} \\circ \\textbf{L}_{20} \/ v_0 + \\textbf{g} \\times \\textbf{L}_2 \/ v_0 + \\textbf{b} \\times \\textbf{L}_2^i )\n \\nonumber \\\\\n && + ( L_{10} \\textbf{E} \/ v_0 + \\textbf{E} \\times \\textbf{L}_1 \/ v_0 + \\textbf{B} \\times \\textbf{L}_1^i )\n \\nonumber \\\\\n && - v_0 ( - \\partial_0 \\textbf{L}_2^i + \\nabla \\circ \\textbf{L}_{20} + \\nabla \\times \\textbf{L}_2 ) ~.\n\\end{eqnarray}\n\nThe above means that it is comparatively easy to distinguish the physics quantity in the gravitational field from that in the electromagnetic field, in the definitions of the field potential, field strength, and field source. However, in the definitions of the angular momentum, energy, and torque, the physics quantities of those two fields are mixed together more and more intricately. 
Those mixed physics quantities have a direct influence on some physics quantities of each field (Table III).\n\nThe paper deals with not only the physics quantity in the subspace $\\mathbb{H}_g$ but also that in the subspace $\\mathbb{H}_e$. It is noticeable that the physics quantity in the subspace $\\mathbb{H}_e$ directly impacts the multiplication and the related derivatives in the subsequent deductions, and therefore should not be neglected artificially. For example, the terms $\\textbf{W}_{20}^i$ and $\\textbf{W}_2^i$ etc of the physics quantity $\\mathbb{W}_e$ in the subspace $\\mathbb{H}_e$ contribute directly to the power and force of the physics quantity $\\mathbb{N}$ in the following section, and the contribution of those terms can be measured directly.\n\nIt states that the definition of the octonion torque is able to combine some physics contents, which were considered to be independent of each other before, such as the energy, the torque, the divergence and curl of the magnetic dipole moment, and the derivative of the electric dipole moment etc. The energy covers the proper energy, kinetic energy, work, and potential energy etc. 
And the torque includes the torque between the electric field intensity and the electric dipole moment, and the torque between the magnetic flux density and the magnetic dipole moment (or the spin magnetic moment) etc.\n\n\n\n\n\n\\begin{table}[h]\n\\caption{The multiplication of the operator and octonion physics quantity.}\n\\label{tab:table2}\n\\centering\n\\begin{ruledtabular}\n\\begin{tabular}{ll}\ndefinition & expression \\\\\n\\hline\n$\\nabla \\cdot \\textbf{a}$ & $-(\\partial_1 a_1 + \\partial_2 a_2 + \\partial_3 a_3)$ \\\\\n$\\nabla \\times \\textbf{a}$ & $\\emph{\\textbf{i}}_1 ( \\partial_2 a_3\n - \\partial_3 a_2 ) + \\emph{\\textbf{i}}_2 ( \\partial_3 a_1\n - \\partial_1 a_3 )$ \\\\\n$$ & ~~~$+ \\emph{\\textbf{i}}_3 ( \\partial_1 a_2\n - \\partial_2 a_1 )$ \\\\\n$\\nabla a_0$ & $\\emph{\\textbf{i}}_1 \\partial_1 a_0\n + \\emph{\\textbf{i}}_2 \\partial_2 a_0\n + \\emph{\\textbf{i}}_3 \\partial_3 a_0 $ \\\\\n$\\partial_0 \\textbf{a}$ & $\\emph{\\textbf{i}}_1 \\partial_0 a_1\n + \\emph{\\textbf{i}}_2 \\partial_0 a_2\n + \\emph{\\textbf{i}}_3 \\partial_0 a_3 $ \\\\\n\n$\\nabla \\cdot \\textbf{A}$ & $-(\\partial_1 A_1 + \\partial_2 A_2 + \\partial_3 A_3) \\emph{\\textbf{I}}_0 $ \\\\\n$\\nabla \\times \\textbf{A}$ & $-\\emph{\\textbf{I}}_1 ( \\partial_2\n A_3 - \\partial_3 A_2 ) - \\emph{\\textbf{I}}_2 ( \\partial_3 A_1\n - \\partial_1 A_3 )$ \\\\\n$$ & ~~~$- \\emph{\\textbf{I}}_3 ( \\partial_1 A_2\n - \\partial_2 A_1 )$ \\\\\n$\\nabla \\circ \\textbf{A}_0$ & $\\emph{\\textbf{I}}_1 \\partial_1 A_0\n + \\emph{\\textbf{I}}_2 \\partial_2 A_0\n + \\emph{\\textbf{I}}_3 \\partial_3 A_0 $ \\\\\n$\\partial_0 \\textbf{A}$ & $\\emph{\\textbf{I}}_1 \\partial_0 A_1\n + \\emph{\\textbf{I}}_2 \\partial_0 A_2\n + \\emph{\\textbf{I}}_3 \\partial_0 A_3 $ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n\n\n\n\n\\section{\\label{sec:level1}Octonion Force}\n\nExpanding the defining range of angular momentum and of torque will result in the expansion of 
the defining range of force correspondingly. One of the consequences is to enable a single definition of force to contain as many different force terms as possible. In other words, some existing force terms can be united into one and the same definition in the octonion space.\n\nFrom the torque density $\\mathbb{W}$, the octonion force density $\\mathbb{N}$ can be defined as follows,\n\\begin{equation}\n\\mathbb{N} = - ( \\emph{i} \\mathbb{F} \/ v_0 + \\square ) \\circ \\mathbb{W},\n\\end{equation}\nwhere $\\mathbb{N} = \\mathbb{N}_g + k_{eg} \\mathbb{N}_e$.\n\nFurther, the above can be expanded into (Appendix D),\n\\begin{eqnarray}\n\\mathbb{N} = && - ( \\square \\circ \\mathbb{W}_g + \\emph{i} \\mathbb{F}_g \\circ \\mathbb{W}_g \/ v_0\n\\nonumber \\\\\n&&\n+ \\emph{i} k_{eg}^2 \\mathbb{F}_e \\circ \\mathbb{W}_e \/ v_0 )\n\\nonumber \\\\\n&& - k_{eg} ( \\square \\circ \\mathbb{W}_e + \\emph{i} \\mathbb{F}_g \\circ \\mathbb{W}_e \/ v_0\n\\nonumber \\\\\n&&\n+ \\emph{i} \\mathbb{F}_e \\circ \\mathbb{W}_g \/ v_0 ),\n\\end{eqnarray}\nwhere the component $\\mathbb{N}_g = - ( \\emph{i} \\mathbb{F}_g \\circ \\mathbb{W}_g \/ v_0 + \\emph{i} k_{eg}^2 \\mathbb{F}_e \\circ \\mathbb{W}_e \/ v_0 + \\square \\circ \\mathbb{W}_g ) $ situates on the quaternion space $\\mathbb{H}_g$, while the component $\\mathbb{N}_e = - ( \\emph{i} \\mathbb{F}_g \\circ \\mathbb{W}_e \/ v_0 + \\emph{i} \\mathbb{F}_e \\circ \\mathbb{W}_g \/ v_0 + \\square \\circ \\mathbb{W}_e ) $ stays on the $S$-quaternion space $\\mathbb{H}_e$.\n\n\n\n\n\\subsection{\\label{sec:level2}Component $\\mathbb{N}_g$}\n\nIn the quaternion space $\\mathbb{H}_g$, the component $\\mathbb{N}_g$ can be expressed as,\n\\begin{eqnarray}\n\\mathbb{N}_g = &&\n\\emph{i}\n\\{\n( W_{10}^i \\textbf{g} \/ v_0 + \\textbf{g} \\times \\textbf{W}_1^i \/ v_0\n\\nonumber \\\\\n&&\n- W_{10} \\textbf{b} - \\textbf{b} \\times \\textbf{W}_1 ) \/ v_0\n\\nonumber \\\\\n&& + k_{eg}^2 ( \\textbf{E} \\circ \\textbf{W}_{20}^i \/ v_0 + \\textbf{E} \\times 
\\textbf{W}_2^i \/ v_0\n\\nonumber \\\\\n&&\n- \\textbf{B} \\circ \\textbf{W}_{20} - \\textbf{B} \\times \\textbf{W}_2 ) \/ v_0\n\\nonumber \\\\\n&& - ( \\partial_0 \\textbf{W}_1 + \\nabla W_{10}^i + \\nabla \\times \\textbf{W}_1^i )\n\\}\n\\nonumber\n\\\\\n&& + \\{ ( W_{10} \\textbf{g} \/ v_0 + \\textbf{g} \\times \\textbf{W}_1 \/ v_0\n\\nonumber \\\\\n&&\n+ W_{10}^i \\textbf{b} + \\textbf{b} \\times \\textbf{W}_1^i ) \/ v_0\n\\nonumber \\\\\n&& + k_{eg}^2 ( \\textbf{E} \\circ \\textbf{W}_{20} \/ v_0 + \\textbf{E} \\times \\textbf{W}_2\/ v_0\n\\nonumber \\\\\n&&\n+ \\textbf{B} \\circ \\textbf{W}_{20}^i + \\textbf{B} \\times \\textbf{W}_2^i ) \/ v_0\n\\nonumber \\\\\n&& + ( \\partial_0 \\textbf{W}_1^i - \\nabla W_{10} - \\nabla \\times \\textbf{W}_1 )\n\\}\n\\nonumber\n\\\\\n&& + \\emph{i} \\{ ( \\textbf{g} \\cdot \\textbf{W}_1^i \/ v_0 - \\textbf{b} \\cdot \\textbf{W}_1 ) \/ v_0\n\\nonumber \\\\\n&&\n+ k_{eg}^2 ( \\textbf{E} \\cdot \\textbf{W}_2^i \/ v_0 - \\textbf{B} \\cdot \\textbf{W}_2 ) \/ v_0\n\\nonumber \\\\\n&&\n- ( \\partial_0 W_{10} + \\nabla \\cdot \\textbf{W}_1^i )\n\\}\n\\nonumber\n\\\\\n&& + \\{ ( \\textbf{g} \\cdot \\textbf{W}_1 \/ v_0 + \\textbf{b} \\cdot \\textbf{W}_1^i ) \/ v_0\n\\nonumber \\\\\n&&\n+ k_{eg}^2 ( \\textbf{E} \\cdot \\textbf{W}_2\/ v_0 + \\textbf{B} \\cdot \\textbf{W}_2^i ) \/ v_0\n\\nonumber \\\\\n&&\n+ ( \\partial_0 W_{10}^i - \\nabla \\cdot \\textbf{W}_1 )\n\\}~ ,\n\\end{eqnarray}\nwhere $\\mathbb{N}_g = \\emph{i} N_{10}^i + N_{10} + \\emph{i} \\textbf{N}_1^i + \\textbf{N}_1 $. $N_{10}$ is the power density, from which the mass continuity equation can be deduced. $N_{10}^i$ covers the torque divergence. $\\textbf{N}_1^i$ is the force density. $\\textbf{N}_1$ is the torque derivative. $\\textbf{N}_1 = \\Sigma N_{1k} \\emph{\\textbf{i}}_k$, $\\textbf{N}_1^i = \\Sigma N_{1k}^i \\emph{\\textbf{i}}_k$. $N_{1j}$ and $N_{1j}^i$ are all real.\n\nWhen $\\mathbb{N}_g = 0$, the mass continuity equation can be derived from $N_{10} = 0$. 
From $\\textbf{N}_1 = 0$, one can deduce the velocity curl of the particle. Especially, the angular velocity of the Larmor precession for a charged particle can be derived from $\\textbf{N}_1 = 0$. The force equilibrium equation can be deduced from $\\textbf{N}_1^i = 0$. Further, $\\textbf{N}_1^i = 0$ implies that the gravitational acceleration $\\textbf{g}$ corresponds to the linear acceleration. And $\\textbf{N}_1 = 0$ implies that the component $\\textbf{b}$ of the gravitational strength corresponds to the velocity curl, which is twice the precessional angular velocity when $k = 2$.\n\nIn case the field strength is relatively weak, the definition of the power density,\n\\begin{eqnarray}\nN_{10} = && ( \\textbf{g} \\cdot \\textbf{W}_1 \/ v_0 + \\textbf{b} \\cdot \\textbf{W}_1^i ) \/ v_0\n\\nonumber \\\\\n&&\n+ ( \\partial_0 W_{10}^i - \\nabla \\cdot \\textbf{W}_1 )\n\\nonumber \\\\\n&&\n+ k_{eg}^2 ( \\textbf{E} \\cdot \\textbf{W}_2 \/ v_0 + \\textbf{B} \\cdot \\textbf{W}_2^i ) \/ v_0\n~,\n\\end{eqnarray}\ncan be written approximately as,\n\\begin{eqnarray}\nN_{10} \/ k_p \\approx && \\partial_0 ( p_0 v_0 ) - \\nabla \\cdot ( \\textbf{p} v_0 )\n\\nonumber \\\\\n&&\n+ L_{10} ( \\textbf{b} \\cdot \\textbf{b} - \\textbf{g} \\cdot \\textbf{g} \/ v_0^2 ) \/ ( v_0 k_p )\n\\nonumber \\\\\n&&\n+ k_{eg}^2 L_{10} ( \\textbf{B} \\cdot \\textbf{B} - \\textbf{E} \\cdot \\textbf{E} \/ v_0^2 ) \/ ( v_0 k_p )\n\\nonumber \\\\\n&&\n+ k_{eg}^2 \\textbf{E} \\cdot \\textbf{P} \/ v_0 + \\textbf{g} \\cdot \\textbf{p} \/ v_0 ~,\n\\end{eqnarray}\nwhere $W_{10}^i \\approx k_p p_0 v_0$, $\\textbf{W}_1 \\approx k_p \\textbf{p} v_0$, $\\textbf{W}_2 \\approx k_p \\textbf{P} v_0$. $\\textbf{f}^* \\circ \\textbf{f} = - ( \\textbf{b} \\cdot \\textbf{b} - \\textbf{g} \\cdot \\textbf{g} \/ v_0^2 )$ is the quaternion norm of the gravitational strength $\\textbf{f}$. 
Meanwhile $\\textbf{F}^* \\circ \\textbf{F} = - ( \\textbf{B} \\cdot \\textbf{B} - \\textbf{E} \\cdot \\textbf{E} \/ v_0^2 )$ is the $S$-quaternion norm of the electromagnetic strength $\\textbf{F}$.\n\nWhen $N_{10} = 0$, the above will be reduced to the mass continuity equation, including the term, $( - k_{eg}^2 \\textbf{E} \\cdot \\textbf{P} - \\textbf{g} \\cdot \\textbf{p} )$, which is capable of being converted into the Joule heat. It means that the field is able to release the heat to the particle, or absorb the heat from the particle. The heat exchange between the field and the particle is associated with the field strength and the particle velocity, impacting the mass continuity equation directly.\n\nUnder the extreme condition that there is no field strength, the above can be simplified into the mass continuity equation in the classical field theory,\n\\begin{equation}\n\\partial_0 p_0 - \\nabla \\cdot \\textbf{p} = 0 ~.\n\\end{equation}\n\nSimilarly, in case the field strength is comparatively weak, the force density,\n\\begin{eqnarray}\n\\textbf{N}_1^i = && ( W_{10}^i \\textbf{g} \/ v_0 + \\textbf{g} \\times \\textbf{W}_1^i \/ v_0\n\\nonumber \\\\\n&&\n- W_{10} \\textbf{b} - \\textbf{b} \\times \\textbf{W}_1 ) \/ v_0\n\\nonumber \\\\\n&&\n+ k_{eg}^2 ( \\textbf{E} \\circ \\textbf{W}_{20}^i \/ v_0 + \\textbf{E} \\times \\textbf{W}_2^i \/ v_0\n\\nonumber \\\\\n&&\n- \\textbf{B} \\circ \\textbf{W}_{20} - \\textbf{B} \\times \\textbf{W}_2 ) \/ v_0\n\\nonumber \\\\\n&&\n- ( \\partial_0 \\textbf{W}_1 + \\nabla W_{10}^i + \\nabla \\times \\textbf{W}_1^i ) ~,\n\\end{eqnarray}\ncan be written approximately as,\n\\begin{eqnarray}\n\\textbf{N}_1^i \/ k_p \\approx && - \\partial_0 (\\textbf{p} v_0) + p_0 \\textbf{g} \/ v_0\n\\nonumber \\\\\n&&\n+ L_{10} ( \\textbf{g} \\times \\textbf{b} + k_{eg}^2 \\textbf{E} \\times \\textbf{B} ) \/ ( v_0^2 k_p )\n\\nonumber \\\\\n&&\n- \\textbf{b} \\times \\textbf{p} - \\nabla (p_0 v_0)\n\\nonumber \\\\\n&&\n+ k_{eg}^2 ( \\textbf{E} \\circ \\textbf{P}_0 \/ 
v_0 - \\textbf{B} \\times \\textbf{P} ) ~,\n\\end{eqnarray}\nwhere $\\textbf{W}_{20}^i \\approx k_p \\textbf{P}_0 v_0$. $\\textbf{W}_2^i \\approx \\textbf{v} \\times \\textbf{P} $. $( p_0 \\textbf{g} \/ v_0 )$ is the gravity. $ \\partial_0 ( - \\textbf{p} v_0)$ is the inertial force. $ \\{k_{eg}^2 ( \\textbf{E} \\circ \\textbf{P}_0 \/ v_0 - \\textbf{B} \\times \\textbf{P} ) \\}$ covers the Lorentz force and the Coulomb force. $\\nabla (p_0 v_0)$ is the energy gradient.\n\nWhen $\\textbf{N}_1^i = 0$, the above yields the force equilibrium equation, including the existing force terms in the gravitational and electromagnetic fields, and some new force terms.\n$\\{ L_{10} ( k_{eg}^2 \\textbf{E} \\times \\textbf{B} ) \/ ( v_0^2 k_p ) \\}$ is in direct proportion to the electromagnetic momentum. A comparison with the force in the classical electromagnetic theory shows that $k_{eg}^2 = \\mu_g \/ \\mu_e < 0$. The force $\\textbf{N}_1^i$ in the above requires that the gravitational field must be situated on the quaternion space $\\mathbb{H}_g$, while the electromagnetic field has to stay on the $S$-quaternion space $\\mathbb{H}_e$, but not vice versa. Obviously the inertial force term $( - v_0 \\partial_0 \\textbf{p} )$ plays a crucial role in the space discrimination.\n\n\n\n\n\\subsection{\\label{sec:level2}Component $\\mathbb{N}_e$}\n\nIn the $S$-quaternion space $\\mathbb{H}_e$, the component $\\mathbb{N}_e$ can be written as,\n\\begin{eqnarray}\n\\mathbb{N}_e = \\emph{i} \\textbf{N}_{20}^i + \\textbf{N}_{20} + \\emph{i} \\textbf{N}_2^i + \\textbf{N}_2~,\n\\end{eqnarray}\nwhere the current continuity equation can be inferred from $\\textbf{N}_{20}$. $\\textbf{N}_2 = \\Sigma N_{2k} \\emph{\\textbf{I}}_k$, $\\textbf{N}_{20} = N_{20} \\emph{\\textbf{I}}_0$. $\\textbf{N}_2^i = \\Sigma N_{2k}^i \\emph{\\textbf{I}}_k$, $\\textbf{N}_{20}^i = N_{20}^i \\emph{\\textbf{I}}_0$. $N_{2j}$ and $N_{2j}^i$ are all real numbers. 
That is,\n\\begin{eqnarray}\n\\textbf{N}_{20}^i = && ( \\textbf{g} \\cdot \\textbf{W}_2^i \/ v_0 - \\textbf{b} \\cdot \\textbf{W}_2 ) \/ v_0\n \\nonumber \\\\\n &&\n - ( \\partial_0 \\textbf{W}_{20} + \\nabla \\cdot \\textbf{W}_2^i )\n \\nonumber \\\\\n && + ( \\textbf{E} \\cdot \\textbf{W}_1^i \/ v_0 - \\textbf{B} \\cdot \\textbf{W}_1 ) \/ v_0 ~,\n\\\\\n\\textbf{N}_{20} = && ( \\textbf{g} \\cdot \\textbf{W}_2 \/ v_0 + \\textbf{b} \\cdot \\textbf{W}_2^i ) \/ v_0\n \\nonumber \\\\\n &&\n + ( \\partial_0 \\textbf{W}_{20}^i - \\nabla \\cdot \\textbf{W}_2 )\n \\nonumber \\\\\n && + ( \\textbf{E} \\cdot \\textbf{W}_1 \/ v_0 + \\textbf{B} \\cdot \\textbf{W}_1^i ) \/ v_0 ~,\n\\\\\n\\textbf{N}_2^i = && ( \\textbf{g} \\circ \\textbf{W}_{20}^i \/ v_0 + \\textbf{g} \\times \\textbf{W}_2^i \/ v_0\n \\nonumber \\\\\n &&\n - \\textbf{b} \\circ \\textbf{W}_{20} - \\textbf{b} \\times \\textbf{W}_2 ) \/ v_0\n \\nonumber \\\\\n && + ( W_{10}^i \\textbf{E} \/ v_0 + \\textbf{E} \\times \\textbf{W}_1^i \/ v_0\n \\nonumber \\\\\n &&\n - W_{10} \\textbf{B} - \\textbf{B} \\times \\textbf{W}_1 ) \/ v_0\n \\nonumber \\\\\n && - ( \\partial_0 \\textbf{W}_2 + \\nabla \\circ \\textbf{W}_{20}^i + \\nabla \\times \\textbf{W}_2^i ) ~,\n\\\\\n\\textbf{N}_2 = && ( \\textbf{g} \\circ \\textbf{W}_{20} \/ v_0 + \\textbf{g} \\times \\textbf{W}_2 \/ v_0\n \\nonumber \\\\\n &&\n + \\textbf{b} \\circ \\textbf{W}_{20}^i + \\textbf{b} \\times \\textbf{W}_2^i ) \/ v_0\n \\nonumber \\\\\n && + ( W_{10} \\textbf{E} \/ v_0 + \\textbf{E} \\times \\textbf{W}_1 \/ v_0\n \\nonumber \\\\\n &&\n + W_{10}^i \\textbf{B} + \\textbf{B} \\times \\textbf{W}_1^i ) \/ v_0\n \\nonumber \\\\\n && + ( \\partial_0 \\textbf{W}_2^i - \\nabla \\circ \\textbf{W}_{20} - \\nabla \\times \\textbf{W}_2 ) ~.\n\\end{eqnarray}\n\nIn case the field strength is comparatively weak, $\\textbf{N}_{20}$ is written approximately as,\n\\begin{eqnarray}\n\\textbf{N}_{20} \/ k_p \\approx && \\partial_0 ( \\textbf{P}_0 v_0 ) - \\nabla \\cdot ( 
\\textbf{P} v_0 )\n\\nonumber \\\\\n&&\n+ \\textbf{g} \\cdot \\textbf{P} \/ v_0 + \\textbf{E} \\cdot \\textbf{p} \/ v_0\n\\nonumber \\\\\n&&\n+ ( \\textbf{b} \\cdot \\textbf{b} + \\textbf{B} \\cdot \\textbf{B} ) \\textbf{L}_{20} \/ ( v_0 k_p ) ~,\n\\end{eqnarray}\nwhere the magnetic flux density $\\textbf{B}$ and the gravitational strength component $\\textbf{b}$ both contribute to the above.\n\nWhen $\\textbf{N}_{20} = 0$, the above yields the current continuity equation, including the interaction term $ \\{ (\\textbf{g} \\cdot \\textbf{P} + \\textbf{E} \\cdot \\textbf{p} ) \/ v_0 \\}$ between the gravitational and electromagnetic fields. This means that the current continuity equation may be disturbed by some influencing factors, such as the field potential.\n\nIn the extreme case where there is no field strength, the above reduces to the current continuity equation of the classical field theory,\n\\begin{equation}\n\\partial_0 \\textbf{P}_0 - \\nabla \\cdot \\textbf{P} = 0 ~.\n\\end{equation}\n\nIn the classical electromagnetic theory, the current continuity equation is required in the derivation of the Maxwell's equations. However, in the field theory described with the octonion, the current continuity equation and the mass continuity equation are both direct inferences of the field theory. Therefore the current continuity equation no longer needs to take part in the derivation of the Maxwell's equations. A charged particle and its electric current must satisfy the two continuity equations simultaneously, so that in this case the two continuity equations become correlated with each other.\n\nIn the octonion space, when $\\mathbb{N} = 0$, Eq.(21) can be separated into eight equations, which were not previously considered to be interconnected with each other. 
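For comparison, the reduced equation above matches the familiar classical statement; a brief check, assuming the usual identifications of $\textbf{P}_0$ with the charge density $\rho$ and $\textbf{P}$ with the current density $\textbf{j}$ (up to this paper's sign convention for $\textbf{P}$ and with $\partial_0 = \partial / \partial (v_0 t)$):

```latex
% Hedged sketch: in conventional notation the classical current
% continuity equation reads  d rho / d t + div j = 0 ; with
% \partial_0 = \partial / \partial (v_0 t) , the octonion result
% \partial_0 P_0 - \nabla . P = 0 takes the same form once the
% sign convention for P in this paper is absorbed.
\begin{equation*}
\frac{\partial \rho}{\partial t} + \nabla \cdot \textbf{j} = 0 ,
\qquad
\partial_0 \textbf{P}_0 - \nabla \cdot \textbf{P} = 0 .
\end{equation*}
```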
Any charged particle must obey all of these eight equations simultaneously when both the electromagnetic field and the gravitational field are present.\n\nIn the force equilibrium equation, the inertial force, gravity, electromagnetic force, and energy gradient, etc., all contribute to the equation. In particular, when the energy gradient is much stronger than the other force terms, it is able to attract or repel all kinds of particles of ordinary matter. The energy gradient caused by the gravitational strength component $\\textbf{b}$ may be applied to the puzzle of the dynamics of cosmic jets.\n\nIn the equation for precessional angular velocity, many torque terms arising from the electromagnetic and gravitational fields will affect the precessional angular frequency of a charged particle when both fields are present simultaneously.\n\nThis shows that the definition of the octonion force is able to encompass several physical contents that were previously considered to be independent of each other, such as the force equilibrium equation, the equation for precessional angular velocity, the mass continuity equation, and the current continuity equation, etc. (Table IV). The force equilibrium equation is able to yield the Newton's law of universal gravitation. 
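For reference, the two classical limits invoked here are standard results; a minimal recollection, where $G$ is the gravitational constant and the gyromagnetic ratio $q / (2m)$ of a classically orbiting charge is assumed (signs and directions depend on convention):

```latex
% Newton's law of universal gravitation between masses m_1 and m_2
% at separation r, and the magnitude of the Larmor precession
% angular velocity of a classically orbiting charge q of mass m
% in the magnetic flux density B :
\begin{equation*}
F = \frac{G \, m_1 m_2}{r^2} ,
\qquad
\omega_L = \frac{q B}{2 m} .
\end{equation*}
```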
The equation for precessional angular velocity is capable of deducing the angular velocity of the Larmor precession for a single charged particle.\n\n\n\n\n\n\n\n\n\n\n\\begin{table}[h]\n\\caption{Some definitions of the physics quantities in the gravitational and electromagnetic fields described with the complex octonion.}\n\\label{tab:table2}\n\\centering\n\\begin{ruledtabular}\n\\begin{tabular}{ll}\nphysics~quantity & definition \\\\\n\\hline\nquaternion~operator & $\\square = i \\partial_0 + \\Sigma \\emph{\\textbf{i}}_k \\partial_k$ \\\\\nradius~vector & $\\mathbb{R} = \\mathbb{R}_g + k_{eg} \\mathbb{R}_e $ \\\\\nintegral~function & $\\mathbb{X} = \\mathbb{X}_g + k_{eg} \\mathbb{X}_e $ \\\\\nfield~potential & $\\mathbb{A} = i \\square^\\times \\circ \\mathbb{X} $ \\\\\nfield~strength & $\\mathbb{F} = \\square \\circ \\mathbb{A} $ \\\\\nfield~source & $\\mu \\mathbb{S} = - ( i \\mathbb{F} \/ v_0 + \\square )^* \\circ \\mathbb{F} $ \\\\\n\nlinear~momentum & $\\mathbb{P} = \\mu \\mathbb{S} \/ \\mu_g $ \\\\\nangular~momentum & $\\mathbb{L} = ( \\mathbb{R} + k_{rx} \\mathbb{X} )^\\times \\circ \\mathbb{P} $ \\\\\noctonion~torque & $\\mathbb{W} = - v_0 ( i \\mathbb{F} \/ v_0 + \\square ) \\circ \\mathbb{L} $ \\\\\noctonion~force & $\\mathbb{N} = - ( i \\mathbb{F} \/ v_0 + \\square ) \\circ \\mathbb{W} $ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\\section{\\label{sec:level1}Conclusions and Discussions}\n\nThis paper introduced the quaternion into the field theory to describe simultaneously the features of the gravitational and electromagnetic fields. The space extended from the electromagnetic field is independent of that from the gravitational field. Meanwhile the spaces of the gravitational field and of the electromagnetic field can be chosen as the quaternion space and the $S$-quaternion space, respectively. 
Furthermore, the components of those two spaces and of the relevant physics quantities may be complex numbers.\n\nIn the gravitational and electromagnetic field theories, the scalar parts of the quaternion operator and of the field potential are all imaginary numbers. However, even this simple case results in many components of other physics quantities becoming imaginary or even complex numbers. In order to obtain a quaternion field theory that approximates the classical field theory, it is necessary to choose suitable gauge conditions. Therefore it can be said that the gauge condition is also an essential ingredient of the field theory.\n\nIn the quaternion space, one can deduce the angular momentum, torque, force, energy, and mass continuity equation, etc. The mass continuity equation is an indispensable part of the gravitational theory described with the quaternion. Under the condition of weak gravitational strength, the mass continuity equation described with the quaternion degenerates into that of the classical gravitational theory. The force includes the inertial force, gravity, electromagnetic force, and energy gradient, etc., of the classical field theory.\n\nAccording to the octonion multiplication, in the product of two physics quantities in the $S$-quaternion space, some terms remain in the $S$-quaternion space, while other terms, such as the electromagnetic force, are situated in the quaternion space. In the $S$-quaternion space, one can directly derive the dipole moment and the current continuity equation, etc. The current continuity equation is an intrinsic part of the electromagnetic theory described with the $S$-quaternion. 
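The statement that the product of two $S$-quaternion quantities falls back into the quaternion space can be illustrated with the standard Cayley--Dickson construction; a hedged sketch, assuming the convention $\mathbb{H}_e = \mathbb{H}_g \circ \emph{\textbf{I}}_0$ with $b$, $d$ quaternions (the paper's own multiplication table may differ in signs):

```latex
% Cayley-Dickson multiplication of octonions written as pairs:
%   (a + b I_0)(c + d I_0) = (a c - \bar{d} b) + (d a + b \bar{c}) I_0 ,
% so the product of two pure S-quaternion elements,
%   (b I_0)(d I_0) = - \bar{d} b ,
% has no I_0 component and lies in the quaternion space H_g.
\begin{equation*}
( b \circ \emph{\textbf{I}}_0 ) \circ ( d \circ \emph{\textbf{I}}_0 )
= - \bar{d} \circ b \in \mathbb{H}_g .
\end{equation*}
```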
In the case of weak electromagnetic strength, the current continuity equation described with the $S$-quaternion simplifies to that of the classical electromagnetic theory.\n\nIt should be noted that this paper discussed only some simple cases of the torque, force, energy, force equilibrium equation, and continuity equations, etc., in the field theory described with the quaternion\/$S$-quaternion. However, it clearly shows that the complex octonion is able to effectively describe the physical features of the electromagnetic and gravitational fields. This affords a theoretical basis for further analysis, and is helpful for studying the properties of the force and continuity equations in subsequent research.\n\n\n\n\n\n\n\n\n\n\\begin{table*}\n\\caption{The analogy between the physics quantities of the gravitational field and those of the electromagnetic field in the complex octonion space.}\n\\begin{ruledtabular}\n\\begin{tabular}{lll}\n\nphysics~quantity & gravitational~field & electromagnetic~field \\\\\n\\hline\nfield~potential & $a_0$, gravitational~scalar~potential & $\\textbf{A}_0$, electromagnetic~scalar~potential \\\\\n & $\\textbf{a}$, (similar to $\\textbf{A}$) & $\\textbf{A}$, electromagnetic~vector~potential \\\\\nfield~strength\t & $\\textbf{g}$, gravitational~acceleration \t & $\\textbf{E}$, electric~field~intensity \\\\\n \t & $\\textbf{b}$, (similar to $\\textbf{B}$) & $\\textbf{B}$, magnetic~flux~density \\\\\nfield~source\t & $s_0$, mass~density \t & $\\textbf{S}_0$, electric~charge~density \\\\\n \t & $\\textbf{s}$, linear~momentum~density \t & $\\textbf{S}$, electric~current~density \\\\\n\nangular~momentum\t & $L_{10}$, dot~product & $\\textbf{L}_{20}$, (similar to $L_{10}$) \\\\\n\t & $\\textbf{L}_1$, angular~momentum \t & $\\textbf{L}_2$, magnetic~dipole~moment \\\\\n\t & $\\textbf{L}_1^i$, (similar to $\\textbf{L}_2^i$) & $\\textbf{L}_2^i$, electric~dipole~moment \\\\\noctonion~torque\t & $W_{10}$, divergence~of \t & $\\textbf{W}_{20}$, 
divergence~of \\\\\n & ~~~~~~~angular~momentum & ~~~~~~~~magnetic~dipole~moment \\\\\n\t & $W_{10}^i$,\tenergy & $\\textbf{W}_{20}^i$, (similar to $W_{10}^i$) \\\\\n\t & $\\textbf{W}_1$,\tcurl~of~angular~momentum & $\\textbf{W}_2$, curl~of~magnetic~dipole \\\\\n & & ~~~~~~~~moment,~and~derivative \\\\\n & & ~~~~~~~~of~electric~dipole~moment \\\\\n\t & $\\textbf{W}_1^i$, torque & $\\textbf{W}_2^i$, (similar to $\\textbf{W}_1^i$) \\\\\noctonion~force \t & $N_{10}$, power \t & $\\textbf{N}_{20}$, (similar to $N_{10}$) \\\\\n\t & $N_{10}^i$, torque~divergence & $\\textbf{N}_{20}^i$, (similar to $N_{10}^i$) \\\\\n\t & $\\textbf{N}_1$,\ttorque~derivative & $\\textbf{N}_2$, (similar to $\\textbf{N}_1$) \\\\\n\t & $\\textbf{N}_1^i$, force & $\\textbf{N}_2^i$, (similar to $\\textbf{N}_1^i$) \\\\\n\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\n\n\n\n\n\n\\begin{acknowledgements}\nThe author is indebted to the anonymous referee for their valuable and constructive comments on the previous manuscript. This project was supported partially by the National Natural Science Foundation of China under grant number 60677039.\n\\end{acknowledgements}\n