diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzahs" "b/data_all_eng_slimpj/shuffled/split2/finalzahs" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzahs" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn engineering and management applications, one often has to collect data from unknown systems, learn their transition functions, and learn to make predictions and decisions. A critical precursor of decision making is to model the system from data. We study how to learn an unknown Markov model of the system from its state-transition trajectories. When the system admits a large number of states, recovering the full model becomes sample expensive.\n\nIn this paper, we focus on Markov processes where the transition matrix has a small rank.\nThe small rank implies that the observed process is governed by a low-dimensional latent process which we cannot see in a straightforward manner. It is a property that is (approximately) satisfied in a wide range of practical systems. Despite the large state space, the low-rank property unlocks potential of accurate learning of a full set of transition density functions based on short empirical trajectories. \n\n\\subsection{Motivating Examples}\n\nPractical state-transition processes with a large number of states often exhibit low-rank structures. For example, the sequence of stops made by a taxi turns out to follow a Markov model with approximately low rank structure \\citep{liu2012understanding, benson2017spacey}. \nFor another example, random walk on a lumpable network has a low-rank transition matrix \\citep{buchholz1994exact,e2008optimal}. The transition kernel with fast decaying eigenvalues has been also observed in molecular dynamics \\citep{rohrdanz2011determination}, which can be used to find metastable states, coresets and manifold structures of complicated dynamics \\citep{chodera2007automatic, coifman2008diffusion}.\n\n\nLow-rank Markov models are also related to dimension reduction for control systems and reinforcement learning. For example, the state aggregation approach for modeling a high-dimensional system can be viewed as a low-rank approximation approach \\citep{bertsekas1995dynamic, bertsekas1995neuro,singh1995reinforcement}. In state aggregation, one assumes that there exists a latent stochastic process $\\{z_t\\} \\subset [r]$ such that\n$\\mathbb{P}( s_{t+1} \\mid s_t ) = \\sum_z \\mathbb{P}( z_t =z \\mid s_t) \\mathbb{P}( s_{t+1} \\mid z_t=z),$\nwhich is equivalent to a factorization model of the transition kernel $\\P$.\nIn the context of reinforcement learning, the nonnegative factorization model was referred to as the generalization to the rich-observation model \\citep{azizzadenesheli2016reinforcement1}. The low-rank structure allows us to model and optimize the system using significantly fewer observations and less computation. Effective methods for estimating the low-rank Markov model would pave the way to better understanding of process data and more efficient decision making.\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Our approach}\nWe propose to estimate the low-rank Markov model based on an empirical trajectory of states, whose length is only proportional to the total number of states. \nWe propose two approaches based on the maximum likelihood principle and low-rank optimization. The first approach uses a convex nuclear-norm regularizer to enforce the low-rank structure and a polyhedral constraint to ensure that optimization is over all probabilistic matrices. 
The second approach is to solve a rank-constrained optimization problem using difference-of-convex (DC) programming. For both approaches, we provide statistical upper bounds for the Kullback-Leibler (KL) divergence between the estimator and the true transition matrix as well as the $\\ell_2$ risk. We also provide an information-theoretic lower bound to show that the proposed estimators are nearly rate-optimal. Note that low-rank estimation of Markov models was considered in \\cite{zhang2018optimal}, where a spectral method with a total variation bound is given. In comparison, the novelty of our methods lies in the use of the maximum likelihood principle and low-rank optimization, which allows us to obtain the first KL divergence bound for learning low-rank Markov models.\n\n\nOur second approach involves solving a rank-constrained optimization problem over probabilistic matrices, which is a refinement of the convex nuclear-norm approach. Due to the non-convex rank constraint, the optimization problem is difficult: to the best of our knowledge, there is no efficient approach that directly solves the rank-constrained problem. In this paper, we develop a penalty approach to relax the rank constraint and transform the original problem into a DC (difference of convex functions) programming one. Furthermore, we develop a particular DC algorithm to solve the problem by initializing at the solution to the convex problem and successively refining the solution through solving a sequence of inner subproblems. Each subroutine is based on the multi-block alternating direction method of multipliers (ADMM). Empirical experiments show that the successive refinements through DC programming do improve the learning quality. As a byproduct of this research, we develop a new class of DC algorithms and a unified convergence analysis for solving non-convex non-smooth problems, which, to the best of our knowledge, were not available in the literature. \n\n\\subsection{Contributions and paper outline}\n\nThe paper provides a full set of solutions for learning low-rank Markov models. The main contributions are: (1) We develop statistical methods for learning low-rank Markov models with rate-optimal Kullback-Leibler divergence guarantees for the first time; (2) We develop low-rank optimization methods that are tailored to the computational problems of nuclear-norm regularized and rank-constrained M-estimation; (3) A byproduct is a generalized DC algorithm that applies to nonsmooth nonconvex optimization with convergence guarantees. \n\nThe rest of the paper is organized as follows. Section 2 surveys related literature. Section 3 proposes two maximum likelihood estimators based on low-rank optimization and establishes their statistical properties. Section 4 develops computational methods and establishes their convergence. Section 5 presents the results of our numerical experiments.\n\n\n\n\\section{Related literature}\n\n\nModel reduction for complicated systems has a long history. It traces back to variable-resolution dynamic programming \\citep{moore1991variable} and state aggregation for decision processes \\citep{sutton1998reinforcement}. In the case of Markov processes, \\citet{deng2011optimal, deng2012model} considered low-rank reduction of Markov models with an explicitly known transition probability matrix, but not the estimation of the reduced models. 
Low-rank matrix approximation has proven powerful in the analysis of large-scale panel data, with numerous applications including network analysis \\citep{e2008optimal}, community detection \\citep{newman2013spectral}, ranking \\citep{negahban2016rank}, product recommendation \\citep{keshavan2010matrix} and many more. The main goal is to impute corrupted or missing entries of a large data matrix. Statistical theory and computational methods are well understood in settings where a low-rank signal matrix is corrupted by independent Gaussian noise or its entries are missing independently. \n\n\nIn contrast, our problem is to estimate the transition density functions from dependent state trajectories, where statistical theory and efficient methods are underdeveloped. When the Markov model has rank $1$, it becomes an independent process. In this case, our problem reduces to estimation of a discrete distribution from independent samples \\citep{steinhaus1957problem,lehmann2006theory,han2015minimax}. For a rank-$2$ transition matrix, \\cite{huang2016recovering} proposed an estimation method using a small number of independent samples. Very recently there have been some works on minimax learning of Markov chains. \\citet{HOP18} derived the minimax rates of estimating a Markov model in terms of a smooth class of $f$-divergences. They considered the family of $\\alpha$-minorated Markov chains, i.e., Markov chains in which all transition probabilities are greater than $\\alpha$. \\citet{WKo19} computed the finite-sample PAC-type minimax sample complexity of recovering the transition matrix from a state trajectory of a Markov chain, up to a tolerance in a total-variation-based (TV-based) metric. This TV-based metric does not belong to the family of the smooth $f$-divergences in \\citet{HOP18}, and their class of Markov models strictly contains the class of the $\\alpha$-minorated ones. Neither of these works, however, considered low-rank Markov models. \n\nThe closest work to ours is \\cite{zhang2018optimal}, in which a spectral method via truncated singular value decomposition was introduced and upper and lower error bounds in terms of total variation were established. \\cite{yang2017dynamic} developed an online stochastic gradient method for computing the leading singular space of a transition matrix from random walk data. To the best of our knowledge, none of the existing works has analyzed efficient recovery of a low-rank Markov model with a Kullback-Leibler divergence guarantee. \n\nHidden Markov Models (HMMs) are closely related to our low-rank Markov models. Note that the observation trajectory of an HMM is not necessarily Markovian. Therefore, an HMM can be regarded as a relaxed variant of low-rank Markov models. There have been many works on estimating HMMs, in particular through spectral approaches, e.g., \\citet{hsu2012spectral} and \\citet{anandkumar2014tensor}. A critical difference is that states are not fully observable in an HMM but are fully observable in a low-rank Markov model. Although HMMs are more general, the low-rank Markov model is more suitable for dynamical processes where the state space is large but fully observable, for which we will establish tighter error bounds. \n\n\nOn the optimization side, we adopt DC programming to handle the rank constraint and replace it with the difference of two convex functions. 
DC programming was first introduced by \\cite{tao1997convex} and has become a prominent tool for handling a class of nonconvex optimization problems (see also \\cite{tao2005dc,le2012exact,le2017stochastic,lethi2018}). \nIn particular, \\citet{van2015convergence} and \\cite{wen2017proximal} considered the {majorized DC algorithm}, which motivated the optimization method developed in this paper.\nHowever, both \\cite{van2015convergence} and \\citet{wen2017proximal} used the majorization technique with restricted choices of majorants, and neither considered the introduction of indefinite proximal terms. In addition, \\citet{wen2017proximal} further assumed the smooth part of the objective to be convex. \nIn comparison with the existing methods, our DC programming method applies to nonsmooth problems and is compatible with a more flexible and possibly indefinite proximal term.\n\nFinally, we would like to mention the probabilistic tools we used to derive the statistical results. Recent years have witnessed many works on measure concentration of dependent random variables, e.g., \\citet{Mar96, Kon07, KRa08, Pau15, JFS18}, etc. Nevertheless, these results do not suffice to establish the desired statistical guarantee, because exploiting the low-rank structure requires studying the concentration of a matrix martingale in terms of the spectral norm, as shown in Lemma \\ref{lem:gradient}. The matrix Freedman inequality \\citep[][Corollary~1.3]{tropp2011freedman} turns out to be the right tool for analyzing the concentration of the matrix martingale. We also used a variant of Bernstein's inequality for general Markov chains \\citep[][Theorem~1.2]{JFS18} to derive an exponential tail bound for the status counts of the Markov chain ${\\mathcal X}$. \n\n\n\n\n\n\n\n\\section{Minimax rate-optimal estimation of low-rank Markov chains}\n\\label{sec:StatProperty}\n\nConsider an ergodic Markov chain ${\\mathcal X} = \\{X_0, X_1, \\ldots, X_n\\}$ on $p$ states ${\\mathcal S}=\\{s_j\\}_{j=1}^p$ with the transition probability matrix $\\mathbf{P}\\in\\mathbb{R}^{p\\times p}$ and stationary distribution $\\pi$, where $P_{ij} = \\mathbb{P}(X_1 = s_j|X_0 = s_i)$ for any $i, j \\in [p]$. Let $\\pi_{\\min} := \\min_{j \\in [p]} \\pi_j$ and $\\pi_{\\max} := \\max_{j \\in [p]} \\pi_j$. We quantify the distance between two transition matrices $\\mathbf{P}$ and $\\widehat{\\mathbf{P}}$ in the Frobenius norm $\\|\\widehat{\\mathbf{P}} - \\mathbf{P}\\|_{\\rm F} = \\bigl\\{\\sum_{i,j = 1}^p (\\widehat P_{ij} - P_{ij})^2\\bigr\\}^{1\/2}$ and the Kullback--Leibler divergence $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat{\\mathbf{P}}) = \\sum_{i, j=1}^p \\pi_iP_{ij}\\log(P_{ij}\/ \\widehat P_{ij})1_{\\{P_{ij} \\neq 0\\}}$.\nSuppose that the unknown transition matrix $\\mathbf{P}$ has a small rank $r \\ll p$. Our goal is to estimate the transition matrix $\\P$ via a state trajectory of length $n$. \n\n\\subsection{Spectral gap of nonreversible Markov chains}\nWe first introduce the {\\it right ${\\mathcal L}_2$-spectral gap} of $\\mathbf{P}$ \\citep{Fil91, JFS18}, a quantity that determines the convergence speed of the Markov chain ${\\mathcal X}$ to its invariant distribution $\\pi$. \nLet ${\\mathcal L}_2(\\pi):= \\{h \\in \\Re^p: \\sum_{j \\in [p]} h_j^2 \\pi_j < \\infty\\}$ be a Hilbert space endowed with the following inner product: \n\\[\n\\inn{h_1, h_2}_{\\pi} := \\sum_{j \\in [p]} h_{1j} h_{2j}\\pi_j. 
\n\\]\nThe matrix $\\mathbf{P}$ induces a linear operator on ${\\mathcal L}_2(\\pi)$: $h \\mapsto \\mathbf{P} h$, which, with a slight abuse of notation, we also denote by $\\mathbf{P}$. Let $\\mathbf{P}^*$ be the adjoint operator of $\\mathbf{P}$ with respect to ${\\mathcal L}_2(\\pi)$:\n$$\\P^* = \\diag(\\pi)^{-1} \\P^\\top \\diag(\\pi).$$ \nNote that the following four statements are equivalent: (a) $\\P$ is self-adjoint; (b) $\\P^\\ast = \\P$; (c) the detailed balance condition holds: $\\pi_i P_{ij} = \\pi_j P_{ji}$; (d) the Markov chain is reversible. In our analysis, we do {\\it not} require the Markov chain to be reversible. We therefore introduce the {\\it additive reversiblization of $\\mathbf{P}$}: $(\\mathbf{P} + \\mathbf{P}^*) \/ 2$, which is a self-adjoint operator on ${\\mathcal L}_2(\\pi)$ whose largest eigenvalue is 1. The right spectral gap of $\\mathbf{P}$ is defined as follows: \n\\begin{definition}[Right ${\\mathcal L}_2$-spectral gap]\n\tWe say the right ${\\mathcal L}_2$-spectral gap of $\\mathbf{P}$ is $1 - \\rho_+ $ if \n\t\\[\n\t\\rho_+:= \\sup_{\\inn{h, 1}_{\\pi} = 0, \\inn{h, h}_{\\pi} = 1} \\frac{1}{2}\\inn{(\\mathbf{P} + \\mathbf{P}^*)h, h}_{\\pi} < 1, \n\t\\]\n\twhere $1$ in $\\inn{h, 1}_{\\pi}$ refers to the all-one $p$-dimensional vector.\n\\end{definition}\nDefine the $\\epsilon$-mixing time of the Markov chain ${\\mathcal X}$ as\n\\[\n\\tau(\\epsilon) := \\min\\{t: \\max_{j \\in [p]} \\|(\\mathbf{P}^t)_{j\\cdot} - \\pi\\|_{\\mathrm{TV}} \\le \\epsilon\\}, \n\\]\nwhere $\\|(\\mathbf{P}^t)_{j\\cdot} - \\pi\\|_{\\mathrm{TV}} := 2 ^ {-1} \\lonenorm{(\\mathbf{P}^t)_{j\\cdot} - \\pi}$ is the total variation distance between $\\mathbf{P} ^ t_{j \\cdot}$ and $\\pi$. For reversible and ergodic Markov chains, \\citet[][Theorem~12.3]{LPe17} show that \n\\be\n\t\\label{eq:mixing_gap}\n\t\\tau(\\epsilon) \\le \\frac{1}{1 - \\rho_+}\\log\\biggl(\\frac{1}{\\epsilon \\pi_{\\min}}\\biggr), \n\\ee\nwhich implies that the larger the spectral gap is, the faster the Markov chain converges to the stationary distribution.\n\n\\subsection{Estimation methods and statistical results}\n\\label{sec:main_results}\n\nNow we are in a position to present our methods and statistical results. Given the trajectory $\\{X_0, X_1,\\ldots,X_n\\}$, we count the number of times that the state $s_i$ transitions to $s_j$:\n\\[ n_{ij}: = \\left|\\{1\\le k\\le n \\mid \\, X_{k-1} = s_i, X_k = s_j\\}\\right|.\\]\nLet $n_i: = \\sum_{j=1}^{p} n_{ij}$ for $i=1,\\ldots, p$ and $n := \\sum_{i=1}^p n_i$. The averaged negative log-likelihood function of $\\P$ based on the state-transition trajectory $\\{x_0, \\ldots, x_n\\}$ is\n\\begin{equation}\\label{eq:log-likelihood}\n{\\ell_n}(\\P):= - \\frac{1}{n} \\sum_{k = 1}^n \\log (\\inn{\\mathbf{P}, \\mathbf{X}_k}) = -\\frac{1}{n} \\sum_{i=1}^{p}\\sum_{j=1}^{p} n_{ij}\\log(P_{ij}), \n\\end{equation}\nwhere $\\mathbf{X}_k := e_ie_j^\\top \\in \\Re^{p \\times p}$ if $x_{k-1} = s_i$ and $x_{k} = s_j$. We first impose the following assumptions on $\\mathbf{P}$ and $\\pi$. \n\\begin{assumption}\n\t\\label{asp:1}\n\t(i) ${\\rm rank}(\\mathbf{P}) = r$; (ii) There exist some positive constants $\\alpha, \\beta > 0$ such that for any $1 \\le j, k \\le p$, $P_{jk} \\in \\{0\\} \\cup [\\alpha \/ p, \\beta \/ p]$. \n\\end{assumption}\n\\begin{remark}\nThe entrywise constraints on $\\mathbf{P}$ are imposed by our theoretical analysis and may not be necessary in practice. 
\n\tSpecifically, the upper and lower bounds for the nonzero entries of $\\mathbf{P}$\n\tensure that (i) the gradient of the log-likelihood $\\nabla \\ell_n(\\mathbf{P})$ is well controlled and exhibits exponential concentration around its population mean (see \\eqref{eq:log-likelihood} for the reason we need $\\alpha$ there); \n\t(ii) the conversion between the $\\ell_2$-risk $\\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}}$ ($\\fnorm{\\widehat \\mathbf{P}^r - \\mathbf{P}}$ resp.) and the KL-divergence $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})$ ($D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}^r)$ resp.) depends on $\\alpha$ and $\\beta$, as per Lemma \\ref{lem:kl_to_l2}. The entry-wise upper and lower bounds are common in statistical analysis of count data, e.g., Poisson matrix completion \\citep[Equation (10)]{cao2016poisson}, Poisson sparse regression \\citep[Assumption 2.1]{jiang2015minimax}, point autoregressive model \\citep[Definition of $\\mathcal{A}_s$]{hall2016inference}, etc.\n\\end{remark}\n\n\\begin{remark}\n\n\tIf we remove $0$ in the feasible range of $P_{jk}$, we obtain the $(\\alpha\/p)$-minoration condition: $P_{jk} \\ge \\alpha \/ p$ for all $j, k \\in [p]$. The $(\\alpha\/p)$-minoration condition implies strong mixing since combining \\citet[][pp. 237-238]{Bre99} and \\citet[][Lemma~2.2.2]{Kon07} yields $1 - \\rho_+ \\ge \\alpha$, and we can then deduce from \\eqref{eq:mixing_gap} that $\\tau(\\epsilon) \\le \\alpha^{-1}\\log\\{(\\epsilon \\pi_{\\min})^{-1}\\}$. \n\\end{remark}\n\nNext we propose and analyze a nuclear-norm regularized maximum likelihood estimator (MLE) of $\\mathbf{P}$ defined as follows: \n\\begin{equation}\n\\label{prob:convex-nuclear}\n\\begin{array}{rllll}\n\\widehat{\\P} := & \\argmin ~\\ell_n({\\mathbf{Q}}) + \\lambda \\nnorm{{\\mathbf{Q}}}\\\\\n\\mbox{s.t.} & {\\mathbf{Q}} 1_p = 1_p,\\quad \\alpha \/ p \\le Q_{ij}\\le \\beta \/ p,\\quad \\forall\\, 1\\le i,j\\le p, \n\\end{array}\n\\end{equation}\nwhere $\\lambda>0$ is a tuning parameter. Note that we cannot allow $\\mathbf{Q}$ to have zero entries as in Assumption \\ref{asp:1}, because otherwise we may have that $\\widehat P_{ij} = 0$ and $P_{ij} > 0$ for some $(i, j)$, violating the requirement in the definition of $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P})$. Our first theorem shows that with an appropriate choice of $\\lambda$, $\\widehat \\mathbf{P}$ exhibits a sharp statistical rate. For simplicity, from now on we say $a \\gtrsim b$ ($a \\lesssim b$) if there exists a universal constant $c > 0$ ($C > 0$) such that $a \\ge cb$ ($a \\le Cb$). \n\n\\begin{theorem}[Statistical guarantee for the nuclear-norm regularized estimator]\n\t\\label{thm:nuclear}\n\tSuppose the initial state $X_0$ is drawn from the stationary distribution $\\pi$ and Assumption \\ref{asp:1} holds. There exists a universal constant $C_1 > 0$, such that for any $\\xi > 1$, if we choose \n\t\\[\n\t\t\\lambda = C_1 \\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{\\! 
1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\]\n\tthen whenever $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max\\{\\max(20, \\xi ^ 2) \\log p, \\log n\\}$, we have that \n\t\\[\n\t\\begin{aligned}\n\t\t\\mathbb{P} \\biggl( D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) \\gtrsim \\frac{\\xi r\\pi_{\\max}\\beta ^ 2 p \\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\frac{\\xi\\pi_{\\min}}{rp\\pi_{\\max}\\log p}\\biggr) \\lesssim e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10},\n\t\\end{aligned}\n\t\\]\n\tand that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\\mathbb{P} \\biggl( \\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}} ^ 2 \\gtrsim \\frac{\\xi r\\pi_{\\max}\\beta ^ 4\\log p}{\\pi_{\\min} ^ 2\\alpha ^ 4n} + \\frac{\\xi\\beta ^ 2}{\\alpha rp ^ 2 \\pi_{\\max} \\log p}\\biggr) \\lesssim e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10}. \n\t\t\\end{aligned}\n\t\\]\t\n\n\\end{theorem}\n\n\n\\begin{remark}\n\t\tWhen $n \\lesssim \\{rp \\pi_{\\max}(\\log p) \\beta\/ (\\pi_{\\min} \\alpha ^ {\\!3 \/ 2})\\} ^ 2$, the second terms of both the KL-Divergence and Frobenius-norm error bounds are dominated by the first terms respectively, so that \n\t\t\\be\n\t\t\t\\label{eq:hd_result}\n\t\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) = O_{\\mathbb{P}}\\biggl(\\frac{r\\pi_{\\max}\\beta ^ 2 p \\log p}{\\pi_{\\min}\\alpha ^ 3n}\\biggr)~\\text{and}~\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}} ^ 2 = O_{\\mathbb{P}}\\biggl(\\frac{r\\pi_{\\max}\\beta ^ 4\\log p}{\\pi_{\\min} ^ 2\\alpha ^ 4n}\\biggr).\n\t\t\\ee\n\t\tWhen $\\alpha \\asymp \\beta$ and $\\pi_{\\max} \\asymp \\pi_{\\min}$, we have that $\\alpha, \\beta \\asymp 1$ and that $\\pi_{\\max}, \\pi_{\\min} \\asymp 1 \/ p$. Therefore, $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}) = O_{\\mathbb{P}}(rp\\log p \/ n)$ and $\\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}} ^ 2 = O_{\\mathbb{P}}(rp\\log p \/ n)$. These rates are consistent with those derived in the literature of low-rank matrix estimation \\citep{NWa11, KLT11}. For a big $n$, the current error bounds are sub-optimal: the second terms of the bounds are independent of $n$ and thus do not converge to zero as $n$ goes to infinity. These terms are due to the requirement of the uniform concentration of $\\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P})$, the empirical counterpart of $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})$, to $D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})$ (see Lemma \\ref{lem:uniform_law}). We eliminate these trailing terms through an alternative proof strategy in Section \\ref{sec:alt}, though the resulting statistical rates have worse dependence on $\\alpha$ and $\\beta$ and are thus relegated to the appendix.\n\\end{remark}\n\n\\begin{remark}\n\tWhen $r = 1$, $\\mathbf{P}$ can be written as $1v^\\top$ for some vector $v\\in \\Re^{p}$, and then estimating $\\mathbf{P}$ essentially reduces to estimating a discrete distribution from multinomial count data. The first term of the upper bounds in Theorem \\ref{thm:nuclear} nearly matches (up to a log factor) the classical results of discrete distribution estimation $\\ell_2$ risks (see, e.g., \\citet[Pg. 349]{lehmann2006theory}). 
\n\n\\end{remark}\n\n\nNext we move on to the second approach -- using the rank-constrained MLE to estimate $\\mathbf{P}$: \n\\begin{equation}\\label{prob:nonconvex-lowrank}\n\\begin{array}{rllll}\n\\widehat{\\P}^r := & \\argmin {\\ell_n}({\\mathbf{Q}}) \\\\\n \\mbox{s.t.} & {\\mathbf{Q}} 1_p = 1_p,\\quad \\alpha \/ p \\le Q_{ij}\\le \\beta \/ p,\\quad \\forall\\, 1\\le i,j\\le p,\\quad \\text{rank}({\\mathbf{Q}}) \\le r. \n\\end{array}\n\\end{equation}\nSimilarly to \\eqref{prob:convex-nuclear}, we cannot allow $\\mathbf{Q}$ to have zero entries. In contrast to $\\widehat \\mathbf{P}$, the rank-constrained MLE $\\widehat \\mathbf{P}^r$ enforces the prior knowledge ``$\\mathbf{P}$ is low-rank\" exactly without inducing any additional bias. It requires solving a non-convex and non-smooth optimization problem, for which we will provide an algorithm based on DC programming in Section \\ref{sec:6}. Here we first present its statistical guarantee. \n\n\\begin{theorem}[Statistical guarantee for the rank-constrained estimator]\n\t\\label{thm:rank}\n\t\tSuppose that Assumption \\ref{asp:1} holds and that $n\\pi_{\\max} (1 - \\rho_+) > \\max(20 \\log p, \\log n)$. There exist universal constants $C_1, C_2 > 0$ such that for any $\\xi > 0$, \n\t\t\\[\n\t\t\t\\mathbb{P}\\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}^r) \\ge \\max\\biggl(\\frac{C_1 r\\pi_{\\max} \\beta^2 p \\log p }{\\pi_{\\min}\\alpha ^ 3n}, \\frac{\\xi \\pi_{\\min}}{rp \\pi_{\\max} \\log p}\\biggr)\\biggr\\} \\le C_2e^{- \\xi}, \n\t\t\\]\n\t\tand \n\t\t\\[\n\t\t\t\\mathbb{P}\\biggl\\{\\fnorm{\\widehat \\mathbf{P}^r - \\mathbf{P}} ^ 2 \\ge \\max\\biggl(\\frac{C_1 r\\pi_{\\max} \\beta^4 \\log p }{\\pi ^ 2_{\\min}\\alpha ^ 4n}, \\frac{\\xi \\beta ^ 2}{\\alpha rp ^ 2 \\pi_{\\max}\\log p}\\biggr)\\biggr\\} \\le C_2 e^{-\\xi}. \n\t\t\\]\n\\end{theorem}\n\n\\begin{remark}\n\tThe proof for the rank-constrained method requires fewer inequality steps and is more straightforward than that for the nuclear-norm method. Although our upper bounds for the nuclear-norm regularized method and the rank-constrained one have the same rate, the difference between their proofs may implicitly suggest an advantage of the rank-constrained method in the constant, as further illustrated by our numerical studies.\n\\end{remark}\n\nTo assess the quality of the established statistical guarantees, we further provide a lower bound result below. It shows that when $\\alpha, \\beta$ are constants, both estimators $\\widehat\\mathbf{P}$ and $\\widehat \\mathbf{P}^r$ are rate-optimal up to a logarithmic factor. Informally speaking, they are not improvable for estimating the class of rank-$r$ Markov chains. \n\\begin{theorem}[Minimax error lower bound for estimating low-rank Markov models]\n\t\\label{thm:lower_bound}\n\tConsider the following set of low-rank transition matrices\n\t$$\\Theta := \\bigl\\{\\mathbf{P}: \\forall j, k \\in [p], P_{jk} \\in \\{0\\} \\cup [\\alpha \/ p, +\\infty),~\\bP1_p = 1_p,~{\\rm rank}(\\mathbf{P}) \\le r\\bigr\\}.$$\n\tThere exists a universal constant $c > 0$ such that when $p(r -1) \\ge 192\\log 2$, we have\n\t\\[\n\t\t\\inf_{\\widehat{\\mathbf{P}}}\\sup_{\\mathbf{P}\\in \\Theta}\\mathbb{E}\\|\\widehat{\\mathbf{P}} - \\mathbf{P}\\|_F^2 \\geq \\frac{cp(r - 1)}{n\\alpha}. \n\t\\]\n\\end{theorem}\n\\begin{remark}\t\n\tTheorem \\ref{thm:lower_bound} shows that a smaller $\\alpha$ makes the estimation problem harder. It remains an open problem whether $\\beta$ in Assumption \\ref{asp:1} should appear in this minimax risk. 
\n\\end{remark}\n\nBesides the full transition matrix $\\mathbf{P}$, the leading left and right singular vectors of $\\mathbf{P}$, denoted by $\\bU, \\bV \\in \\mathbb{O}^{p\\times r}$, also play important roles in Markov chain analysis. For example, performing $k$-means on reliable estimate of $\\bU$ or $\\bV$ can give rise to state aggregation of the Markov chain \\citep{zhang2018optimal}. In the following, we further establish the statistical rate of estimating the singular subspace of the Markov transition matrix, based on the previous results. \n\n\\begin{theorem}\\label{thm:uv}\n\tUnder the setting of Theorem \\ref{thm:nuclear}, let $\\widehat{\\bU}, \\widehat \\bV \\in \\mathbb{O}^{p\\times r}$ be the left and right singular vectors of $\\widehat{\\mathbf{P}}$ respectively. Then there exist universal constants $C_1, C_2$, such that for any $\\xi > 0$, we have that \n\t\\begin{equation*}\n\t\\max\\left\\{\\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F^2, \\|\\sin\\Theta(\\widehat{\\bV}, \\bV)\\|_F^2\\right\\} \n\t\\le \\min\\biggl\\{\\max\\biggl(\\frac{C r\\pi_{\\max} \\beta^4\\log p }{\\pi ^ 2_{\\min}\\alpha ^ 4n\\sigma_r^2(\\mathbf{P})}, \\frac{\\xi \\beta ^ 2}{\\alpha rp ^ 2 \\pi_{\\max}(\\log p)\\sigma_r^2(\\mathbf{P}) }\\biggr), r\\biggr\\}\n\t\\end{equation*}\n\twith probability at least $1 - C_2(e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10})$. Here, $\\sigma_r(\\mathbf{P})$ is the $r$-th largest singular value of $\\mathbf{P}$ and $\\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F := (r - \\|\\widehat{\\bU}^\\top\\bU\\|_F^2)^{1\/2}$ is the Frobenius norm $\\sin \\Theta$ distance between $\\widehat{\\bU}$ and $\\bU$.\n\\end{theorem}\n\n\n\n\\subsection{Proof outline of Theorems \\ref{thm:nuclear}, \\ref{thm:rank}}\n\\label{sec:statistical_analysis}\nIn this section, we elucidate the roadmap to proving Theorems \\ref{thm:nuclear} and \\ref{thm:rank}. Complete proofs are deferred to the supplementary materials. We mainly focus on Theorem \\ref{thm:nuclear} for the nuclear-norm penalized MLE $\\widehat \\mathbf{P}$, as we use similar strategies to prove Theorem \\ref{thm:rank}. \n\nWe first show in the forthcoming Lemma \\ref{lem:large_lambda} that when the regularization parameter $\\lambda$ is sufficiently large, the statistical error $\\widehat \\boldsymbol{\\Delta} := \\widehat \\mathbf{P} - \\mathbf{P}$ falls in a restricted nuclear-norm cone. This cone structure is crucial to establishing strong statistical guarantee for estimation of low-rank matrices with high-dimensional scaling \\citep{NWa11}. Define a linear subspace ${\\mathcal N} := \\{ \\mathbf{Q}: \\mathbf{Q} 1_p = 1_p \\}$ \nand denote the corresponding projection operator by $\\Pi_{{\\mathcal N}}$. In other words, for any $\\mathbf{Q} \\in {\\mathcal N}$ and any $j = 1, \\ldots, p$, the summation of all the entries in the $j$th row of $\\mathbf{Q}$ equals one. One can verify that for any $\\mathbf{Q} \\in \\Re^{p \\times p}$, $\\Pi_{{\\mathcal N}}(\\mathbf{Q}) = \\mathbf{Q} - \\mathbf{Q} 11^\\top \/ p$. Let $\\mathbf{P}=\\bU\\mathbf{D}\\bV^\\top$ be an SVD of $\\mathbf{P}$, where $\\bU, \\bV\\in \\Re^{p \\times r}$ are orthonormal and the diagonals of $\\mathbf{D}$ are in the non-increasing order. 
Define\n\\[\n\\begin{aligned}\n\t& {\\mathcal M}:=\\{\\mathbf{Q} \\in \\Re^{p \\times p}\\ |\\ \\text{row}(\\mathbf{Q})\\subseteq \\text{col}(\\bV), \\text{col}(\\mathbf{Q})\\subseteq \\text{col}(\\bU)\\}, \\\\\n\t& \\overline{{\\mathcal M}}^{\\perp}:=\\{\\mathbf{Q} \\in \\Re^{p \\times p}\\ |\\ \\text{row}(\\mathbf{Q})\\perp \\text{col}(\\bV), \\text{col}(\\mathbf{Q})\\perp \\text{col}(\\bU)\\},\n\\end{aligned}\n\\]\nwhere col$(\\cdot)$ and row$(\\cdot)$ denote the column space and row space respectively. We can write any $\\boldsymbol{\\Delta}\\in \\Re^{p \\times p}$ as\n\\[\n\\boldsymbol{\\Delta}=[\\bU, \\bU^\\perp]\\left[\n\\begin{array}{cc}\n\\boldsymbol{\\Gamma}_{11} & \\boldsymbol{\\Gamma}_{12} \\\\\n\\boldsymbol{\\Gamma}_{21} & \\boldsymbol{\\Gamma}_{22}\n\\end{array} \\right] [\\bV, \\bV^\\perp]^\\top.\n\\]\nDefine $\\boldsymbol{\\Delta}_{{\\mathcal W}}$ as the projection of $\\boldsymbol{\\Delta}$ onto any Hilbert space ${\\mathcal W}\\subseteq\\Re^{p \\times p}$. Then,\n\\be\n\\begin{aligned}\n\t\\boldsymbol{\\Delta}_{{\\mathcal M}} =\\bU\\boldsymbol{\\Gamma}_{11}{\\bV}^\\top,\\quad\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp} = \\bU^{\\perp} \\boldsymbol{\\Gamma}_{22}(\\bV^{\\perp})^{\\top}, \\quad \\boldsymbol{\\Delta}_{\\overline{\\mathcal M}} = [\\bU, \\bU^{\\perp}]\\left[\n\t\\begin{array}{cc}\n\t\t\\boldsymbol{\\Gamma}_{11} & \\boldsymbol{\\Gamma}_{12} \\\\\n\t\t\\boldsymbol{\\Gamma}_{21} & \\bzero\n\t\\end{array} \\right] [\\bV, \\bV^{\\perp}]^{\\top}.\n\\end{aligned}\n\\ee\nThe lemma below shows that $\\widehat \\boldsymbol{\\Delta} := \\widehat\\mathbf{P} - \\mathbf{P}$ falls in a nuclear-norm cone if $\\lambda$ is sufficiently large. \n\\begin{lemma}\n\t\\label{lem:large_lambda}\n\tIf $\\lambda\\ge 2 \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))}$ in \\eqref{prob:convex-nuclear}, then we have that \n\t\\[\n\t\t\\nnorm{ \\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} \\le 3\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}}} + 4\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}}. \n\t\\]\n\tIn particular, when $\\mathbf{P} \\in {\\mathcal M}$, we have that $\\nnorm{\\mathbf{P}_{{\\mathcal M}^{\\perp}}} = 0$ and that\n\t\\be\n\t\\label{eq:converter}\n\t\\nnorm{\\widehat \\boldsymbol{\\Delta}} \\le \\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^{\\perp}}} + \\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} \\le 4 \\nnorm{\\widehat \\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} \\le 4(2r)^{1 \/ 2}\\fnorm{\\widehat \\boldsymbol{\\Delta}}.\n\t\\ee\n\\end{lemma}\n\nLemma \\ref{lem:large_lambda} implies that the converting factor between the nuclear and Frobenius norms of $\\widehat \\boldsymbol{\\Delta}$ is merely $4(2r)^{1 \/ 2}$ when $\\P\\in \\mathcal{M}$, which is much smaller than the worst-case factor $p^{1 \/ 2}$ between nuclear and Frobenius norms of general $p$-by-$p$ matrices. 
This property of $\\widehat \\boldsymbol{\\Delta}$ is one cornerstone for establishing Theorem \\ref{thm:nuclear}.\n\n\n\\begin{comment}\n\\begin{remark}\n\tWhen $\\mathbf{P} \\in {\\mathcal M}$, $\\nnorm{\\mathbf{P}_{{\\mathcal M}^{\\perp}}} = 0$ and Lemma \\ref{lem:large_lambda} implies that \n\\be\n\t\\label{eq:converter}\n\t\\nnorm{\\widehat \\boldsymbol{\\Delta}} \\le \\nnorm{\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^{\\perp}}} + \\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} \\le 4 \\nnorm{\\widehat \\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} \\le 4\\sqrt{2r}\\fnorm{\\widehat \\boldsymbol{\\Delta}}.\n\\ee\nThus the converting factor between the nuclear and Frobenius norms of $\\widehat \\boldsymbol{\\Delta}$ is merely $4\\sqrt{2r}$, which is much smaller than the worst-case factor $\\sqrt{p}$ between nuclear and Frobenius norms of general $p$-by-$p$ matrices. This property of $\\widehat \\boldsymbol{\\Delta}$ is one cornerstone for establishing Theorem \\ref{thm:nuclear} (see \\eqref{eq:stat_error_fnorm} for details).\n\\end{remark}\n\\end{comment}\n\n\nNext, we derive the rate of $\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))}$ to determine the order of $\\lambda$ that ensures the condition of Lemma \\ref{lem:large_lambda} to hold. \n\n\\begin{lemma}\n\t\\label{lem:gradient} \n\tUnder Assumption \\ref{asp:1}, whenever $n \\pi_{\\max} (1 - \\rho_+) \\ge 2 \\log p$, for any $\\xi > 1$, \n\t\\[\n\t\t\\mathbb{P}\\biggl\\{ \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))} \\gtrsim \\biggl(\\frac{\\xi p ^ 2\\pi_{\\max}\\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\} \\le 4p^{-(\\xi - 1)} + \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\]\n\\end{lemma}\n\n\\begin{remark}\n\tLemma \\ref{lem:gradient} is essentially due to concentration of a matrix martingale. Many existing results on measure concentration of dependent random variables \\citep{Mar96, Kon07, KRa08, Pau15} are not directly applicable because of the matrix structure of $\\nabla \\ell_n(\\mathbf{P})$. The main probabilistic tool we use here is the matrix Freedman inequality \\citep[][Corollary~1.3]{tropp2011freedman} that characterizes concentration behavior of a matrix martingale (See \\eqref{eq:matrix_freedman} for details). We notice two recent works, \\citet{WKo19} and \\citet{WKon19}, that use the same matrix Freedman inequality. Specifically, \\citet{WKon19} applied the matrix Freedman inequality to derive a confidence interval for the mixing time of a Markov chain based on its single trajectory, and \\citet{WKo19} used the same inequality to establish an upper bound for the sample complexity of learning a Markov chain. Finally, we also use an variant of Bernstein's inequality for general Markov chains \\citep[][Theorem~1.2]{JFS18} to derive an exponential tail bound for the status counts of the Markov chain ${\\mathcal X}$ (See \\eqref{eq:mc_bernstein} for details).\n\\end{remark}\n\n\n\nLet ${\\mathcal C} := \\{\\mathbf{Q} \\in \\mathbb{R}^{p \\times p}: \\nnorm{\\mathbf{Q} - \\mathbf{P}} \\le 4\\times 2^{1 \/2} \\fnorm{\\mathbf{Q} - \\mathbf{P}}, \\mathbf{Q} 1_p = 1_p, \\alpha \/ p \\le Q_{jk} \\le \\beta \/ p, \\forall (j, k) \\in [p] \\times [p]\\}$. For any $\\mathbf{Q} \\in {\\mathcal C}$, define ${\\mathcal L}(\\mathbf{Q}) := \\mathbb{E} \\{- \\log (\\inn{\\mathbf{Q}, \\mathbf{X}_i})\\}$ and $\\ell_n(\\mathbf{Q}) := n^{-1}\\sum_{i = 1}^n -\\log(\\inn{\\mathbf{Q}, \\mathbf{X}_i})$. 
Recall that $D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) = {\\mathcal L}(\\mathbf{Q}) - {\\mathcal L}(\\mathbf{P}) = \\sum_{i=1}^p \\pi_i D_{\\mathrm{KL}}(P_{i\\cdot}, Q_{i\\cdot}) = \\sum_{i = 1}^p \\sum_{j=1}^p \\pi_iP_{ij} \\log(P_{ij}\/Q_{ij})$. Define the empirical KL divergence of $\\mathbf{Q}$ from $\\mathbf{P}$ as\n\\[\n\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P},\\mathbf{Q}) := \\frac{1}{n}\\sum_{i=1}^n \\langle \\log (\\mathbf{P}) -\\log(\\mathbf{Q}), \\mathbf{X}_i\\rangle = \\ell_n(\\mathbf{Q}) - \\ell_n(\\mathbf{P}). \n\\]\nThe final ingredient of the analysis is the uniform convergence of $\\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})$ to $D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})$ when $D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})$ is large. \n\\begin{lemma}\n\t\\label{lem:uniform_law}\n\tSuppose that $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max(20\\log p, \\log n)$. For any $\\eta > \\pi_{\\min} \/ (2\\pi_{\\max} rp \\log p)$, define ${\\mathcal C}(\\eta) := \\{\\mathbf{Q} \\in {\\mathcal C}: D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) \\ge \\eta\\}$. Then there exist universal constants $C_1, C_2 > 0$ such that\n\t\\begin{equation}\\label{ineq:to-show-1}\n\t\\begin{aligned}\n\t\\mathbb{P}\\biggl\\{\\forall \\mathbf{Q} \\in {\\mathcal C}(\\eta), ~|\\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})| \\leq \\frac{1}{2}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) + &\\frac{C_1\\pi_{\\max} \\beta ^ 2rp\\log p}{\\pi_{\\min}\\alpha ^ 3n}\\biggr\\} \\\\\n\t& \\geq 1- C_2 \\exp\\biggl(- \\frac{\\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr).\n\t\\end{aligned}\n\t\\end{equation}\n\\end{lemma}\n\nTheorem \\ref{thm:nuclear} follows immediately by combining Lemmas \\ref{lem:large_lambda}, \\ref{lem:gradient} and \\ref{lem:uniform_law}. As for the rank-constrained MLE $\\widehat\\mathbf{P}^r$, let $\\widehat {\\boldsymbol{\\Delta}}(r) := \\widehat \\mathbf{P}^r - \\mathbf{P}$. Note that the rank constraint in \\eqref{prob:nonconvex-lowrank} implies that ${\\rm rank}(\\widehat \\boldsymbol{\\Delta}(r)) \\le 2r$. Thus, $\\nnorm{\\widehat\\boldsymbol{\\Delta}(r)} \\le (2r)^{1 \/ 2} \\fnorm{\\widehat\\boldsymbol{\\Delta}(r)}$ and Lemma \\ref{lem:uniform_law} remains applicable in the statistical analysis of $\\widehat \\mathbf{P}^r$. \n\n\n\n\n\n\n\n\n\n\n\\section{Computing Markov models using low-rank optimization}\n\nIn this section we develop efficient optimization methods to compute the proposed estimators for the low-rank Markov model. From now on, we drop the constraint that $\\alpha \/ p \\le Q_{ij}\\le \\beta \/ p$, which is used only to derive the statistical guarantees. In other words, $\\alpha$ and $\\beta$ are motivated by statistical theory, and do not need to be taken into account in the optimization. \n\n\\subsection{Optimization methods for the nuclear-norm regularized likelihood problem}\n\\label{sec:optNuc}\n\nWe first consider the nuclear-norm regularized likelihood problem \\eqref{prob:convex-nuclear}. It is a special case of the following linearly constrained optimization problem:\n\\begin{equation}\\label{prob:gen-convex-nuc}\n\\min \\, \\left\\{ g({\\mathbf{X}}) + c \\norm{{\\mathbf{X}}}_{*}\\,\\mid\\, {\\mathcal A}(\\mathbf{X}) = b \\right\\}, \n\\end{equation}\nwhere $g:\\Re^{p \\times p} \\to (-\\infty,+\\infty]$ is a closed, convex, but possibly non-smooth function, ${\\mathcal A}:\\Re^{p\\times p} \\to \\Re^m$ is a linear map, $b\\in\\Re^m$ and $c > 0$ are given data. 
If we take $\\alpha=0, \\beta=p$ in problem \\eqref{prob:convex-nuclear}, it becomes a special case of the general problem \\eqref{prob:gen-convex-nuc} with $g({\\mathbf{X}}) = \\ell_n({\\mathbf{X}}) + \\delta({\\mathbf{X}}\\ge 0)$, ${\\mathcal A}({\\mathbf{X}}) = {\\mathbf{X}} {\\bf 1}_p$, $b = {\\bf 1}_p$, where $\\delta(\\cdot)$ denotes the indicator function.\n\nDespite its convexity, problem \\eqref{prob:gen-convex-nuc} is highly nontrivial due to the nonsmoothness of $g$ and the presence of the nuclear norm regularizer. Here, we propose to solve it via the dual approach. \nThe dual of problem \\eqref{prob:gen-convex-nuc} is\n\\begin{equation} \n\\label{prob:D}\n\\begin{array}{rll}\n\\min & g^*(-{\\bf\\Xi}) - \\inprod{b}{y} \\\\\n\\mbox{s.t.} & {\\bf\\Xi} + {\\mathcal A}^*(y) + \\mathbb{S} = 0,\\quad \\norm{\\mathbb{S}}_2 \\le c,\n\\end{array}\n\\end{equation}\nwhere $\\norm{\\cdot}_2$ denotes the spectral norm, and $g^*$ is the conjugate function of $g$ given by\n\\begin{align*} \ng^*({\\bf\\Xi}) {}\n= \\sum_{(i,j)\\in\\Omega} \\frac{n_{ij}}{n}(\\log \\frac{n_{ij}}{n} - 1 - \\log(-\\Xi_{ij})) + \\delta( \\Xi \\le 0) \\quad \\forall\\,{\\bf \\Xi}\\,\\in\\Re^{p\\times p}\n\\end{align*}\nwith $\\Omega = \\{(i,j) \\mid n_{ij} \\neq 0\\}$ and $\\overline \\Omega = \\{(i,j) \\mid n_{ij} = 0\\}$. \nGiven $\\sigma >0$, the augmented Lagrangian function ${\\mathcal L}_{\\sigma}$ associated with \\eqref{prob:D} is\n\\begin{align*}\n{\\mathcal L}_{\\sigma}({\\bf\\Xi}, y, \\mathbb{S}; {\\mathbf{X}}) = g^*(-{\\bf\\Xi}) - \\inprod{b}{y} + \\frac{\\sigma}{2} \\norm{{\\bf\\Xi} + {\\mathcal A}^*(y) + \\mathbb{S} + {\\mathbf{X}}\/\\sigma}^2 - \\frac{1}{2\\sigma}\\norm{{\\mathbf{X}}}^2.\n\\end{align*}\nWe consider popular ADMM-type methods for solving problem \\eqref{prob:D} (a comprehensive numerical study has been conducted in \\citep{li2016schur} and justifies our procedure).\nSince there are three separable blocks in \\eqref{prob:D} (namely ${\\bf\\Xi}$, $y$, and $\\mathbb{S}$), the direct extended ADMM is not applicable. Indeed, it has been shown in \\citep{chen2016direct} that the direct extended ADMM for multi-block convex minimization problems is not necessarily convergent. Fortunately, the function corresponding to block $y$ in the objective of \\eqref{prob:D} is linear. Thus we can apply the multi-block symmetric Gauss-Seidel based ADMM (sGS-ADMM) \\citep{li2016schur}.\nIn the literature \\citep{chen2017efficient,ferreira2017semidefinite,lam2017fast,li2016schur,wang2018another}, extensive numerical experiments demonstrate that sGS-ADMM is not only convergent but also faster than the directly extended multi-block ADMM and its many other variants. Specifically, the algorithmic framework of sGS-ADMM for solving \\eqref{prob:D} is presented in Algorithm \\ref{alg:sGS-ADMM}. 
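For the reader's convenience, we also sketch how the expression for $g^*$ above is obtained; this is only a short entrywise computation from the definition of the conjugate function and is not needed elsewhere. Since $g({\\mathbf{X}}) = \\ell_n({\\mathbf{X}}) + \\delta({\\mathbf{X}}\\ge 0)$ is separable across entries,
\\begin{align*}
g^*({\\bf\\Xi}) &= \\sup_{{\\mathbf{X}} \\ge 0}\\Bigl\\{ \\inprod{{\\bf\\Xi}}{{\\mathbf{X}}} + \\frac{1}{n}\\sum_{(i,j)\\in\\Omega} n_{ij}\\log X_{ij} \\Bigr\\} \\\\
&= \\sum_{(i,j)\\in\\Omega} \\sup_{x\\ge 0}\\Bigl\\{ \\Xi_{ij}x + \\frac{n_{ij}}{n}\\log x \\Bigr\\} + \\sum_{(i,j)\\in\\overline\\Omega} \\sup_{x\\ge 0}\\, \\Xi_{ij}x.
\\end{align*}
For $(i,j)\\in\\overline\\Omega$ the supremum equals $0$ if $\\Xi_{ij}\\le 0$ and $+\\infty$ otherwise; for $(i,j)\\in\\Omega$ with $\\Xi_{ij}<0$ the maximizer is $x = -(n_{ij}\/n)\/\\Xi_{ij}$, which gives the value $\\frac{n_{ij}}{n}(\\log\\frac{n_{ij}}{n} - 1 - \\log(-\\Xi_{ij}))$. Collecting the two cases yields the displayed formula for $g^*$, with $\\delta(\\Xi \\le 0)$ encoding the finiteness requirement.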
\n\n\\begin{algorithm} \n\t\\caption{An sGS-ADMM for solving \\eqref{prob:D}}\n\t\\label{alg:sGS-ADMM}\n\t\\begin{algorithmic}\n\t\t\\STATE {\\bfseries Input:} initial point $({\\bf\\Xi}^0, y^0, \\mathbb{S}^0, {\\mathbf{X}}^0)$, penalty parameter $\\sigma > 0$, maximum iteration number $K$, and the step-length $\\gamma \\in (0,(1+ \\sqrt{5})\/2)$\n\t\t\\FOR{$k=0$ {\\bfseries to} $K$}\n\t\t\\STATE $y^{k+\\frac{1}{2}} = \\argmin_{y} {\\mathcal L}_{\\sigma}({\\bf\\Xi}^k, y, \\mathbb{S}^k; {\\mathbf{X}}^k)$\n\t\t\\STATE ${\\bf\\Xi}^{k+1} = \\argmin_{{\\bf\\Xi}} {\\mathcal L}_{\\sigma}({\\bf\\Xi} ,y^{k+\\frac{1}{2}}, \\mathbb{S}^k; {\\mathbf{X}}^k) $\n\t\t\\STATE $y^{k+1} = \\argmin_{y} {\\mathcal L}_{\\sigma}({\\bf\\Xi}^{k+1}, y, \\mathbb{S}^k; {\\mathbf{X}}^k)$\n\t\t\\STATE $\\mathbb{S}^{k+1} = \\argmin_{\\mathbb{S}} {\\mathcal L}_{\\sigma}({\\bf\\Xi}^{k+1}, y^{k+1}, \\mathbb{S}, ; {\\mathbf{X}}^k)$\n\t\t\\STATE\n\t\t${\\mathbf{X}}^{k+1} = {\\mathbf{X}}^k + \\gamma\\sigma({\\bf\\Xi}^{k+1} + {\\mathcal A}^*(y^{k+1}) + \\mathbb{S}^{k+1})$\n\t\t\\ENDFOR\n\t\n\t\\end{algorithmic}\n\\end{algorithm}\n\nNext, we discuss how the $k$-th iteration of Algorithm \\ref{alg:sGS-ADMM} is performed:\n\\begin{description}\n\t\\item[{\\bf Computation of $y^{k+\\frac{1}{2}}$ and $y^{k+1}$}.]\n\tSimple calculations show that $y^{k+\\frac{1}{2}}$ and $y^{k+1}$ can be obtained by solving the following linear systems:\n\t\\begin{equation*}\n\t\\left\\{ \n\t\\begin{aligned}\n\t&y^{k+\\frac{1}{2}} = \\frac{1}{\\sigma}({\\mathcal A} {\\mathcal A}^*)^{-1}\\big( b - X^k - \\sigma({\\bf\\Xi}^k + \\mathbb{S}^k)\\big ), \\\\[5pt]\n\t&y^{k+1} = \\frac{1}{\\sigma}({\\mathcal A} {\\mathcal A}^*)^{-1}\\big( b - X^k - \\sigma({\\bf\\Xi}^{k+1} + \\mathbb{S}^k)\\big ).\n\t\\end{aligned}\n\t\\right. \n\t\\end{equation*}\n\tIn our estimation problem, it is not difficult to verify that \n\t$ {\\mathcal A} {\\mathcal A}^* y = p y$ for any $y\\in \\Re^p$.\n\tThanks to this special structure, the above formulas can be further reduced to \n\t\\begin{equation*}\n\ty^{k+\\frac{1}{2}} = \\frac{1}{\\sigma p}\\big( b - {\\mathbf{X}}^k - \\sigma({\\bf\\Xi}^k + \\mathbb{S}^k)\\big) \\mbox{ and }\n\ty^{k+1} = \\frac{1}{\\sigma p}\\big( b - {\\mathbf{X}}^k - \\sigma({\\bf\\Xi}^{k+1} + \\mathbb{S}^k)\\big).\n\t\\end{equation*}\n\t\\item [{\\bf Computation of ${\\bf \\Xi}^{k+1}$}.] \n\tTo compute ${\\bf \\Xi}^{k+1}$, we need to solve the following optimization problem:\n\n\t\\[\n\t\\min_{\\bf\\Xi} \\left\\{ g^*(-{\\bf\\Xi}) +\\frac{\\sigma}{2}\\norm{{\\bf\\Xi} + {\\bf R}^k}^2 \\right\\},\n\t\\]\n\twhere ${\\bf R}^k \\in \\Re^{p\\times p}$ is given. 
Careful calculations, together with the Moreau identity \\citep[Theorem 31.5]{rockafellar2015convex}, show that\n\t\\[\n\t{\\bf \\Xi}^{k+1} = \\frac{1}{\\sigma} [{\\mathbf{Z}}^k - \\sigma {\\bf R}^k]\\, \\mbox{ and } \\,\n\t{\\mathbf{Z}}^k = \\argmin_{{\\mathbf{Z}}} \\left\\{\\sigma g({\\mathbf{Z}}) + \\frac{1}{2}\\norm{{\\mathbf{Z}} - \\sigma {\\bf R}^k}^2 \\right\\}.\n\t\\]\n\tFor our estimation problem, i.e., $g({\\mathbf{X}}) = \\ell_n({\\mathbf{X}}) + \\delta( {\\mathbf{X}}\\ge 0)$, it is easy to see that \n\t$Z^k$ admits the following form:\n\t\\[\n\tZ^k_{ij} = \\frac{\\sigma R_{ij}^k + \\sigma\\sqrt{(R^k_{ij})^2 + 4 n_{ij}\/(n\\sigma)}}{2} \\quad \\mbox{if } (i,j)\\in \\Omega \\, \\mbox{ and } \\,\n\tZ^k_{ij} = \\sigma \\max(R_{ij}^k,0) \\quad \\mbox{if } (i,j)\\in \\overline{\\Omega}.\n\t\\]\n\t\\item [{\\bf Computation of $\\mathbb{S}^{k+1}$}.]\n\tThe computation of $\\mathbb{S}^{k+1}$ can be simplified as:\n\t\\[\n\t\\mathbb{S}^{k+1} = \\argmin_{\\mathbb{S}} \\left\\{\n\t\\frac{\\sigma}{2} \\norm{\\mathbb{S} + {\\bf\\Xi}^{k+1} + {\\mathcal A}^* y^{k+1} + {\\mathbf{X}}^k\/\\sigma}^2 \\mid \n\t\\norm{\\mathbb{S}}_2 \\le c \n\t\\right\\}.\n\t\\]\n\tLet ${\\bf W}_{k} := -({\\bf\\Xi}^{k+1} + {\\mathcal A}^* y^{k+1} + X^k\/\\sigma)$ admit the following singular value decomposition (SVD)\n\t$\n\t{\\bf W}_{k} = {\\bf U}_{k} {\\bf\\Sigma}_k {\\bf V}_k^\\top,\n\t$\n\twhere ${\\bf U}_{k}$ and ${\\bf V}_k$ are orthogonal matrices,\n\t${\\bf \\Sigma}_k = {\\rm Diag}(\\alpha_1^k,\\ldots, \\alpha_p^k)$ is\n\tthe diagonal matrix of singular values of $W_k$, with $\\alpha_1^k\\ge\\ldots\\ge \\alpha_p^k\\ge 0.$ Then, by Lemma 2.1 in \\citep{jiang2014partial}, we know that\n\t\\[\n\t\\mathbb{S}^{k+1} = {\\bf U}_k\\min({\\bf \\Sigma}_k,c){\\bf V}_k^\\top,\n\t\\]\n\twhere $\\min({\\bf \\Sigma}_k,c) = {\\rm Diag}\\big(\\min(\\alpha_1^k,c),\\ldots, \\min(\\alpha_p^k,c)\\big)$.\n\n\tWe also note that in the implementation, only partial SVD, which is much cheaper than full SVD, is needed as $r\\ll p$.\n\\end{description}\nThe nontrivial convergence results and the sublinear non-ergodic iteration\ncomplexity of Algorithm \\ref{alg:sGS-ADMM} can be obtained from \\cite{li2016schur} and \\cite{chen2017efficient}.\n{We put the convergence theorem and a sketch of the proof in the supplementary material.}\n\n\\subsection{Optimization methods for the rank-constrained likelihood problem}\n\\label{sec:6}\n\n\n\nNext we develop the optimization method for computing the rank-constrained likelihood maximizer from \\eqref{prob:nonconvex-lowrank}. In Subsection \\ref{sec:pen}, a penalty approach is applied to transform the original intractable rank-constrained problem into a DC programming problem. Then we solve this problem by a proximal DC (PDC) algorithm in Subsection \\ref{subsec:proxdca}. \nWe also discuss the solver for the subproblems involved in the proximal DC algorithm. Lastly, a unified convergence analysis of a class of majorized indefinite-proximal DC (Majorized iPDC) algorithms is provided in Subsection \\ref{sec:DCA}.\n\n\\subsubsection{A penalty approach for problem \\eqref{prob:nonconvex-lowrank}. } \n\\label{sec:pen}\nRecall \\eqref{prob:nonconvex-lowrank} is intractable due to the non-convex rank constraint, we introduce a penalty approach to relax. 
We particularly study the following optimization problem:\n\\begin{equation}\\label{prob:gen-nonconvex-lowrank}\n\\min \\, \\left\\{ f({\\mathbf{X}}) \\,\\mid\\, {\\mathcal A}({\\mathbf{X}}) = b,\\, {\\rm rank}({\\mathbf{X}})\\le r \\right\\}, \n\\end{equation}\nwhere $f:\\Re^{p \\times p} \\to (-\\infty,+\\infty]$ is a closed proper convex, but possibly non-smooth, function.\nThe original rank-constrained maximum likelihood problem \\eqref{prob:nonconvex-lowrank} can be viewed as a special case of the general model \\eqref{prob:gen-nonconvex-lowrank}.\n\n\n\nGiven ${\\mathbf{X}}\\in\\Re^{p\\times p}$, let $\\sigma_1({\\mathbf{X}}) \\geq \\cdots \\geq \\sigma_p({\\mathbf{X}})\\geq 0$ be the singular values of ${\\mathbf{X}}$. Since ${\\rm rank}({\\mathbf{X}})\\le r$ if and only if $\\sigma_{r+1}({\\mathbf{X}}) + \\ldots + \\sigma_{p}({\\mathbf{X}}) = \\|{\\mathbf{X}}\\|_\\ast - \\|{\\mathbf{X}}\\|_{(r)} = 0$ ($\\|{\\mathbf{X}}\\|_{(r)} = \\sum_{i=1}^r \\sigma_i({\\mathbf{X}})$ is the Ky Fan $r$-norm of ${\\mathbf{X}}$), \\eqref{prob:gen-nonconvex-lowrank} can be equivalently formulated as\n\\begin{equation*}\n\\min \\left\\{ f({\\mathbf{X}}) \\mid \\|{\\mathbf{X}}\\|_\\ast-\\|{\\mathbf{X}}\\|_{(r)}=0, \\, {\\mathcal A}({\\mathbf{X}}) = b \\right\\}.\n\\end{equation*}\nSee also \\citep[Equation (29)]{sun2010majorized}.\nThe penalized formulation of problem \\eqref{prob:gen-nonconvex-lowrank} is\n\\begin{equation}\\label{prob:nonconvex-pen-DC}\n\\min\n\\left\\{ f({\\mathbf{X}}) + c (\\|{\\mathbf{X}}\\|_\\ast - \\|{\\mathbf{X}}\\|_{(r)}) \\mid {\\mathcal A}({\\mathbf{X}}) = b \\right\\},\n\\end{equation}\nwhere $c>0$ is a penalty parameter. \nSince $\\norm{\\cdot}_{(r)}$ is convex, the objective in problem \\eqref{prob:nonconvex-pen-DC} is a difference\nof two convex functions: $f({\\mathbf{X}}) + c\\norm{{\\mathbf{X}}}_{*}$ and $c\\norm{{\\mathbf{X}}}_{(r)}$, i.e., \\eqref{prob:nonconvex-pen-DC} is a DC program.\n\nLet ${\\mathbf{X}}_c^*$ be an optimal solution to the penalized problem \\eqref{prob:nonconvex-pen-DC}. The following proposition shows that ${\\mathbf{X}}_c^\\ast$ is also an optimal solution to \\eqref{prob:gen-nonconvex-lowrank} when it has low rank.\n\\begin{proposition}\\label{prop:penlowrank}\n\tIf ${\\rm rank}({\\mathbf{X}}_c^*)\\le r$, then ${\\mathbf{X}}_c^*$ is also an optimal {solution} to the original problem \\eqref{prob:gen-nonconvex-lowrank}.\n\\end{proposition}\n\n\nIn practice, one can gradually increase the penalty parameter $c$ to obtain a sufficiently low-rank solution ${\\mathbf{X}}_c^*$. In our numerical experiments, we can obtain solutions with the desired rank with a properly chosen parameter $c$. \n\n\n\n\\subsubsection{A PDC algorithm for the penalized problem \\eqref{prob:nonconvex-pen-DC}.}\n\\label{subsec:proxdca}\n\n\n\nThe central idea of the DC algorithm \\citep{tao1997convex} is as follows: at each iteration, one approximates the concave part of the objective function by its affine majorant, then solves the resulting convex optimization problem. In this subsection, we present a variant of the classic DC algorithm for solving \\eqref{prob:nonconvex-pen-DC}. 
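Before detailing the algorithm, we give a brief numerical illustration of the penalty in \\eqref{prob:nonconvex-pen-DC}. The following minimal sketch (written with NumPy; the function name is illustrative and not part of our solver) evaluates the surrogate $\\|{\\mathbf{X}}\\|_\\ast - \\|{\\mathbf{X}}\\|_{(r)} = \\sigma_{r+1}({\\mathbf{X}}) + \\cdots + \\sigma_p({\\mathbf{X}})$ and shows that it vanishes precisely on matrices of rank at most $r$:
\\begin{verbatim}
import numpy as np

def rank_surrogate(X, r):
    # Sum of the trailing singular values sigma_{r+1} + ... + sigma_p,
    # i.e., ||X||_* - ||X||_(r); it is zero if and only if rank(X) <= r.
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    return s[r:].sum()

rng = np.random.default_rng(0)
p, r = 50, 3
low_rank = rng.standard_normal((p, r)) @ rng.standard_normal((r, p))
perturbed = low_rank + 1e-2 * rng.standard_normal((p, p))
print(rank_surrogate(low_rank, r))   # ~ 0 (up to floating-point error)
print(rank_surrogate(perturbed, r))  # > 0, so the penalty is active
\\end{verbatim}
In the penalized problem, a larger $c$ drives this quantity toward zero, which is why gradually increasing $c$ yields solutions of the desired rank.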
For the execution of the algorithm, we recall that the sub-gradient of the Ky Fan $r$-norm at a point ${\\mathbf{X}}\\in\\Re^{p\\times p}$ \\citep{watson1993matrix} is\n\\[\\partial \\norm{{\\mathbf{X}}}_{(r)}=\\left\\{ \n{\\mathbf{U}} \\, {\\rm Diag}(q^*){\\mathbf{V}}^\\top \\, \\mid q^*\\in \\Delta \\right\\},\n\\]\nwhere {$\\bU$ and $\\bV$ are the matrices of left and right singular vectors of ${\\mathbf{X}}$,} and $\\Delta$ is the optimal solution set of the following problem:\n\\begin{equation*}\n\\max_{q\\in \\Re^{p}} \\left\\{ \n\\sum_{i=1}^p \\sigma_i({\\mathbf{X}})q_i \\mid \n\\inprod{{\\bf 1}_p}{q} \\le r, \\, 0\\le q \\le 1\n\\right\\}.\n\\end{equation*}\nNote that one can efficiently obtain a component of $\\partial \\norm{{\\mathbf{X}}}_{(r)}$ by computing the\nSVD of ${\\mathbf{X}}$ and picking the singular vectors corresponding to the $r$ largest singular values.\nAfter these preparations, we are ready to state the PDC algorithm for problem \\eqref{prob:nonconvex-pen-DC} in Algorithm \\ref{alg:dc}.\nIn contrast to the classic DC algorithm, an additional proximal term is added to ensure that solutions of subproblems \\eqref{subprob:mmalg} exist and that the difference between two consecutive iterates converges. {See Theorem \\ref{thm:convergence-alg-MM} and Remark \\ref{remk:pdc} for more details.} \n\n\n\\begin{algorithm}\n\t\\caption{A PDC algorithm for solving \\eqref{prob:nonconvex-pen-DC}}\n\t\\label{alg:dc}\n\tGiven $c>0$, $\\alpha \\ge 0$, and the stopping tolerance $\\eta$, choose an initial point ${\\mathbf{X}}^0\\in \\Re^{p\\times p}$.\n\tIterate the following steps for $k=0,1,\\ldots:$\n\t\n\t{\\bf 1.} Choose ${\\mathbf{W}}_k\\in \\partial \\norm{{\\mathbf{X}}^k}_{(r)}$. Compute\n\t\\begin{equation}\\label{subprob:mmalg}\n\t\\begin{split}\n\t\\mathbf{X}^{k+1}\\quad = \\quad & \\argmin f({\\mathbf{X}}) + c\\left(\\norm{{\\mathbf{X}}}_* - \\inprod{{\\mathbf{W}}_k}{{\\mathbf{X}} - {\\mathbf{X}}^k} - \\norm{{\\mathbf{X}}^k}_{(r)}\\right) \n\t+ \\frac{\\alpha}{2}\\norm{{\\mathbf{X}} - {\\mathbf{X}}^k}_F^2 \\\\\n\t& \\text{subject to } {\\mathcal A}({\\mathbf{X}}) = b.\n\t\\end{split}\n\t\\end{equation}\n\t{\\bf 2.} If $\\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}_F\\le \\eta$, stop.\n\\end{algorithm}\nWe say that ${\\mathbf{X}}$ is a critical point of problem \\eqref{prob:nonconvex-pen-DC} if \n\\[\\partial (f({\\mathbf{X}}) + c\\norm{{\\mathbf{X}}}_* + \\delta({\\mathcal A}({\\mathbf{X}}) = b) ) \\cap (c\\partial \\norm{{\\mathbf{X}}}_{(r)}) \\ne \\emptyset.\\]\nWe have the following convergence results for Algorithm \\ref{alg:dc}. \n\\begin{theorem}[Convergence of Algorithm \\ref{alg:dc}]\n\t\\label{thm:convergence-alg-MM}\n\tLet $\\{{\\mathbf{X}}^k\\}$ be the sequence generated by Algorithm \\ref{alg:dc} and $\\alpha \\ge 0$. Then $\\{ f({\\mathbf{X}}^k) + c(\\norm{{\\mathbf{X}}^k}_* - \\norm{{\\mathbf{X}}^k}_{(r)})\\}$ is a non-increasing sequence. If ${\\mathbf{X}}^{k+1} = {\\mathbf{X}}^k$ for some integer $k\\ge 0$, then ${\\mathbf{X}}^k$ is a critical point of \\eqref{prob:nonconvex-pen-DC}. 
Otherwise, it holds that\n\t\\begin{align*} \n\t&\\big(f({\\mathbf{X}}^{k+1}) + c(\\norm{{\\mathbf{X}}^{k+1}}_* - \\norm{{\\mathbf{X}}^{k+1}}_{(r)})\\big) \n\t- \\big( f({\\mathbf{X}}^k) + c(\\norm{{\\mathbf{X}}^k}_* - \\norm{{\\mathbf{X}}^k}_{(r)} )\\big)\n\t\\le {} -\\frac{\\alpha}{2}\\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}^2_F.\n\t\\end{align*}\n\tMoreover, \n\tany accumulation point of the bounded sequence $\\{{\\mathbf{X}}^k\\}$ is a critical point of problem \\eqref{prob:nonconvex-pen-DC}.\n\tIn addition, if $\\alpha >0$, it holds that $\\lim_{k\\to \\infty}\\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}_F = 0$.\n\\end{theorem}\n\n\\begin{remark}[Adjusting Parameters]\n\t\\label{remk:pdc}\n\t{\\rm In practice, a small $\\alpha >0$ is suggested to ensure strict decrease of the objective value and convergence \n\t\tof $\\{ \\norm{{\\mathbf{X}}^{k+1} - {\\mathbf{X}}^k}_F \\}$;\n\t\tif $f$ is strongly convex, one achieves these nice properties even if $\\alpha =0$ based on the results of Theorem \\ref{thm: MMconvergence}. The penalty parameter $c$ can be adaptively adjusted according to the rank of the sequence generated by Algorithm \\ref{alg:dc}.}\n\\end{remark}\n\n\\begin{remark}[Number of iterations of Algorithm \\ref{alg:dc}]\n\t\\label{prop:convergence-alg-MM}\n\tLet $\\eta >0$ be the stopping tolerance and $F^*$ be the optimal value of problem \\eqref{prob:nonconvex-pen-DC}. By using the inequality in Theorem \\ref{thm:convergence-alg-MM}, it can be shown that if $\\alpha >0$, then Algorithm \\ref{alg:dc} terminates in no more than $K$ iterations, where\n\t\\[\n\tK = \\left\\lceil \\frac{2\\big(f(X^0) + c(\\norm{X^0}_* - \\norm{X^0}_{(r)}) - F^* \\big)}{\\alpha \\eta^2}\\right\\rceil + 1.\n\t\\] \n\t\\end{remark}\n\n\\begin{remark}[Statistical properties]\n\t\tThe statistical rate we derived in Theorem 2 does not carry over to the iterates of the DC algorithm here. Though we show in Theorem 5 that the DC algorithm can converge to a critical point, it remains unclear whether this point is close to the global optimum and provably enjoys the statistical guarantees. Recently there have been many works conveying positive messages on the statistical properties of non-convex optimization algorithms. For example, \\citet{LWa15} showed that any stationary point of the composite objective function they considered lies within statistical precision of the true parameter. We hope to establish similar theory for the proposed DC approach in future research.\n\\end{remark}\n\nNext, we discuss how to solve subproblems \\eqref{subprob:mmalg}. Problem \\eqref{subprob:mmalg} is still a nuclear-norm penalized convex optimization problem and is a special case of model \\eqref{prob:gen-convex-nuc} with $g({\\mathbf{X}}) = f({\\mathbf{X}}) + \\inprod{{\\mathbf{W}}}{{\\mathbf{X}}} + \\frac{\\alpha }{2}\\norm{{\\mathbf{X}}}_F^2$. Hence, Algorithm \\ref{alg:sGS-ADMM} can directly solve these subproblems efficiently. When Algorithm \\ref{alg:sGS-ADMM} is executed on this new function $g$, all computations, except for the update of $\\bf\\Xi$, have already been discussed in Section \\ref{sec:optNuc}. 
To update $\\bf\\Xi$ in the process of executing Algorithm \\ref{alg:sGS-ADMM} for solving \\eqref{subprob:mmalg} with $g({\\mathbf{X}}) = \\ell_n({\\mathbf{X}}) + \\delta({\\mathbf{X}}\\ge0) + \\inprod{{\\mathbf{W}}}{{\\mathbf{X}}} + \\frac{\\alpha }{2}\\norm{{\\mathbf{X}}}_F^2$, we need to solve the following minimization problem for given ${\\bf R}\\in\\Re^{p\\times p}$ and $\\sigma > 0$:\n\\[\n{\\mathbf{Z}}^* = \\argmin_{{\\mathbf{Z}}} \\left\\{ \\sigma g({\\mathbf{Z}}) + \\frac{1}{2} \\norm{{\\mathbf{Z}} - \\sigma{\\bf R}}^2\\right\\}.\n\\] \nHere, ${\\mathbf{Z}}^*$ can be computed in closed form:\n\\begin{equation*}\nZ_{ij}^* = \n\\left\\{\n\\begin{aligned}\n{}&\\frac{(\\sigma R_{ij} - W_{ij}) + \\sigma\\sqrt{(R_{ij} - W_{ij}\/\\sigma)^2 + 4 (\\alpha + 1)n_{ij}\/(n\\sigma)}}{2(\\alpha + 1)} \\quad \\mbox{if } (i,j)\\in \\Omega; \\\\[5pt]\n{}&\\sigma \\max(R_{ij} - W_{ij}\/\\sigma,0) \\quad \\mbox{if } (i,j)\\in \\overline{\\Omega}.\n\\end{aligned}\n\\right.\n\\end{equation*}\n\n\n\n\\subsubsection {A unified analysis for the majorized iPDC algorithm.}\n\\label{sec:DCA}\nDue to the presence of the proximal term $\\frac{\\alpha}{2}\\norm{{\\mathbf{X}} - {\\mathbf{X}}^k}_F^2$ in Algorithm \\ref{alg:dc}, the classical DC analyses cannot be applied directly. In this subsection, we provide a unified convergence analysis for the majorized indefinite-proximal DC (majorized iPDC) algorithm, which includes Algorithm \\ref{alg:dc} as a special instance. \nLet $\\mathbb{X}$ be a finite-dimensional real Euclidean space endowed with inner product $\\inprod{\\cdot}{\\cdot}$ and induced norm $\\norm{\\cdot}$.\nConsider the following optimization problem\n\\begin{equation}\\label{eq:dca model}\n\\min_{x\\in \\mathbb{X}}\\; \\theta(x)\\triangleq g(x) + p(x) - q(x),\n\\end{equation}\nwhere $g:\\mathbb{X}\\to \\Re$ is a continuously differentiable function (not necessarily convex) whose gradient is Lipschitz continuous with modulus $L_g >0$, i.e.,\n\\[\\norm{\\nabla g(x) - \\nabla g(x')} \\le L_g \\norm{x - x'}\\quad \\forall \\, x, x'\\in \\mathbb{X}, \\]\nand $p:\\mathbb{X}\\to (-\\infty, +\\infty]$ and $q:\\mathbb{X}\\to(-\\infty, +\\infty]$ are two proper closed convex functions.\nIt is not difficult to observe that the penalized problem \\eqref{prob:nonconvex-pen-DC} is a special instance of problem \\eqref{eq:dca model}. \nFor the general model \\eqref{eq:dca model}, one can only expect the DC algorithm to converge to a critical point $\\bar{x}\\in \\mathbb{X}$ of \\eqref{eq:dca model} satisfying \n\\[ \\left(\\nabla g(\\bar{x}) + \\partial p(\\bar{x}) \\right) \\cap \\partial q(\\bar{x}) \\neq \\emptyset.\\]\n\n\n\nSince $g$ is continuously differentiable with Lipschitz continuous gradient, there exists a self-adjoint positive semidefinite linear operator ${\\mathcal G}:\\mathbb{X} \\to \\mathbb{X}$ such that for any\n$x, x'\\in \\mathbb{X}$,\n\\begin{equation*}\\label{ineq:majorization}\ng(x)\\le \\widehat{g}(x;x')\\triangleq g(x') + \\langle \\nabla g(x'), x-x'\\rangle + \\frac{1}{2}\\|x - x'\\|^2_{\\mathcal{G}}.\n\\end{equation*}\n\n\\begin{algorithm}\n\t\\caption{A majorized indefinite-proximal DC algorithm for solving problem \\eqref{eq:dca model}}\n\t\\label{alg:dca-general}\n\tGiven initial point $x^0\\in \\mathbb{X}$ and stopping tolerance $\\eta$, choose a self-adjoint, possibly\n\tindefinite, linear operator ${\\mathcal T}:\\mathbb{X} \\to \\mathbb{X}$.\n\tIterate the following steps for $k=0,1,\\ldots:$\n\t\n\t{\\bf 1.} Choose $\\xi^k \\in \\partial q({x}^k)$.
Compute\n\t\\begin{equation}\\label{eq:subproblem}\n\tx^{k+1} \\in \\argmin_{x\\in \\mathbb{X}} \\; \\left\\{ \\widehat{\\theta}(x;{x}^k) + \\frac{1}{2}\\|x - {x}^k\\|^2_{\\mathcal{T}}\\right\\}, \n\t\\end{equation}\n\twhere $\\widehat{\\theta}(x;{x}^k) \\triangleq \\widehat{g}(x;{x}^k) + p(x) - \\big(q({x}^k) + \\langle x-{x}^k, \\xi^k\\rangle \\big).$\n\t\n\t{\\bf 2.} If $\\|{x}^{k+1}-{x}^k\\|\\le \\eta$, stop.\n\\end{algorithm}\n\n\n\nWe present the majorized iPDC algorithm for solving \\eqref{eq:dca model} in Algorithm \\ref{alg:dca-general} and provide the following convergence results. \n\\begin{theorem}[Convergence of iPDC]\\label{thm: MMconvergence}\n\tAssume that {$\\inf_{x\\in \\mathbb{X}}\\theta(x)>-\\infty$}. Let $\\{x^k\\}$ be the sequence generated by Algorithm \\ref{alg:dca-general}. \n\tIf $x^{k+1} = x^k$ for some $k\\ge 0$, then $x^k$ is a critical point of \\eqref{eq:dca model}. If $\\mathcal{G} + 2\\mathcal{T}\\succeq 0$, then\n\tany accumulation point of $\\{x^{k}\\}$, if one exists, is a critical point of \\eqref{eq:dca model}. In addition, if $ \\mathcal{G} + 2\\mathcal{T}\\succ 0$, it holds that $\\displaystyle\\lim_{k\\to\\infty}\\|{x}^{k+1} - {x}^k\\| = 0$.\n\\end{theorem}\nThe proof of Theorem \\ref{thm: MMconvergence} is provided in the supplementary material.\n\n\\begin{remark}\nHere, we discuss the roles of the linear operators ${\\mathcal G}$ and ${\\mathcal T}$.\nFirst, ${\\mathcal G}$ makes the subproblems \\eqref{eq:subproblem} in Algorithm \\ref{alg:dca-general} more amenable to efficient computations. Theorem \\ref{thm: MMconvergence} shows that the algorithm is convergent if ${\\mathcal G} + 2{\\mathcal T} \\succeq 0$. This indicates that instead of adding the commonly used positive semidefinite or positive definite proximal terms, we allow ${\\mathcal T}$ to be indefinite for better practical performance. The computational benefit of using indefinite proximal terms has also been observed previously \\citep{sun2010majorized,li2016majorized}.\nAs far as we know, Theorem \\ref{thm: MMconvergence} provides the first rigorous convergence proof of DC algorithms with indefinite proximal terms. Second, ${\\mathcal G}$ and ${\\mathcal T}$ also help to guarantee that the solutions of the subproblems \\eqref{eq:subproblem} exist.\nSince ${\\mathcal G} + 2{\\mathcal T} \\succeq 0$ and ${\\mathcal G} \\succeq 0$, we have that $2{\\mathcal G} + 2{\\mathcal T} \\succeq 0$, i.e., ${\\mathcal G} + {\\mathcal T} \\succeq 0$.\nHence, ${\\mathcal G}+2{\\mathcal T} \\succeq 0$ (${\\mathcal G}+2{\\mathcal T} \\succ 0$) implies that subproblems \\eqref{eq:subproblem} \nare (strongly) convex. Third, the choices of ${\\mathcal G}$ and ${\\mathcal T}$ are very much problem dependent. The general principle is that ${\\mathcal G} + {\\mathcal T}$ should be as small as possible while ensuring that {$x^{k+1}$} is relatively easy to compute.\n\\end{remark}\n\n\n\\section{Simulation results}\n\\label{sec:num}\n\nIn this section, we conduct numerical experiments to validate our theoretical results. We first compare the proposed nuclear-norm regularized estimator and the rank-constrained estimator with previous methods in the literature using synthetic data. We then use the rank-constrained method to analyze a dataset of Manhattan taxi trips to reveal citywide traffic patterns.
All of our computational results are obtained by running {\\sc Matlab} (version 9.5) on a Windows workstation (8-core, Intel Xeon W-2145 at 3.70 GHz, 64 GB of RAM).\n\n\\subsection{Experiments with simulated data}\nWe randomly draw the transition matrix $\\P$ as follows. Let ${\\mathbf{U}}_0, {\\mathbf{V}}_0 \\in \\Re^{p\\times r}$ be random matrices with i.i.d. standard normal entries and let\n\\[\n\\widetilde {\\mathbf{U}}_{[i,:]} = ({\\mathbf{U}}_0 \\circ {\\mathbf{U}}_0)_{[i,:]} \/ \\norm{({\\mathbf{U}}_0)_{[i,:]}}_2^2 \\mbox{ and } \\widetilde {\\mathbf{V}}_{[:,j]} = ({\\mathbf{V}}_0 \\circ {\\mathbf{V}}_0)_{[:,j]} \/ \\norm{({\\mathbf{V}}_0)_{[:,j]}}_2^2, \\quad i=1,\\ldots,p, j=1,\\ldots, r, \n\\]\nwhere $\\circ$ is the Hadamard product, $\\widetilde {\\mathbf{U}}_{[i,:]}$ denotes the $i$-th row of $\\widetilde{\\mathbf{U}}$, and $\\widetilde {\\mathbf{V}}_{[:,j]}$ denotes the $j$-th column of $\\widetilde{\\mathbf{V}}$. The transition matrix $\\P$ is obtained via $\\P = \\widetilde{\\mathbf{U}} \\widetilde{\\mathbf{V}}^\\top$. Then we simulate a Markov chain trajectory of length $n = {\\rm round} (krp \\log(p))$ on $p$ states, $\\{X_0,\\ldots, X_n\\}$, with varying values of $k$.\n\nWe compare the performance of four procedures: the nuclear norm penalized MLE, the rank-constrained MLE, the empirical estimator, and the spectral estimator. Here, the empirical estimator is the empirical count distribution matrix defined as follows: \n$$\\tilde{\\P} = \\left(\\tilde{\\P}_{ij}\\right)_{1\\leq i, j\\leq p}, \\quad \\tilde{\\P}_{ij} = \\left\\{\\begin{array}{ll}\n\\frac{\\sum_{k =1}^n 1_{\\{X_{k-1} = i, X_k = j\\}}}{\\sum_{k =1}^n 1_{\\{X_{k-1} = i\\}}}, & \\quad \\text{when }\\sum_{k =1}^n 1_{\\{X_{k-1} = i\\}} \\geq 1;\\\\\n\\frac{1}{p}, & \\quad \\text{when } \\sum_{k =1}^n 1_{\\{X_{k-1} = i\\}} = 0.\n\\end{array}\\right.$$\nThe empirical estimator is in fact the unconstrained maximum likelihood estimator that does not take the low-rank structure into account. The spectral estimator \\citep[Algorithm 1]{zhang2018optimal} is based on a truncated SVD. In the implementation of the nuclear norm penalized estimator, the regularization parameter $\\lambda$ in \\eqref{prob:convex-nuclear} is set to be $C\\sqrt{{p\\log p\/n}}$ with the constant $C$ selected by cross-validation.\nFor each method, let $\\widehat{{\\mathbf{U}}}$ and $\\widehat{{\\mathbf{V}}}$ be the leading $r$ left and right singular vectors of the resulting estimator $\\widehat \\P$. We measure the statistical performance of $\\widehat{\\mathbf{P}}$ through three quantities: \n\\begin{align*}\n \\eta_F := \\norm{\\P - \\widehat \\P}_F^2, ~~ \\eta_{KL}:= D_{\\mathrm{KL}}(\\mathbf{P},\\widehat\\mathbf{P}), \\mbox{ and } \\eta_{UV} := \\max\\bigl\\{ \\norm{\\sin \\Theta(\\widehat{{\\mathbf{U}}},{\\mathbf{U}})}_F^2, \\norm{\\sin \\Theta(\\widehat{{\\mathbf{V}}},{\\mathbf{V}})}_F^2 \\bigr\\}.\n\\end{align*} \n\n\nWe consider the following setting with $p=1000$, $r = 10$, and $k\\in [10, 100]$.\nThe results are plotted in Figure \\ref{figure:mlevsrank}.\nOne can observe from these results that for the rank-constrained, nuclear norm penalized and spectral methods, $\\eta_F, \\eta_{KL}$ and $\\eta_{UV}$ converge to zero quickly as the number of state transitions $n$ increases, while the statistical error of the empirical estimator decreases at a much slower rate.
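\n\nFor concreteness, the construction of $\\P$ and the simulation of the trajectory described at the beginning of this subsection can be sketched as follows. This is a minimal NumPy illustration; the function and variable names are ours and do not correspond to our {\\sc Matlab} code.\n\\begin{verbatim}\nimport numpy as np\n\ndef random_lowrank_transition(p, r, rng):\n    # P = U_tilde @ V_tilde.T with row-normalized U_tilde and\n    # column-normalized V_tilde, so each row of P is a probability\n    # distribution and rank(P) <= r.\n    U0 = rng.standard_normal((p, r))\n    V0 = rng.standard_normal((p, r))\n    U_tilde = (U0 ** 2) / np.sum(U0 ** 2, axis=1, keepdims=True)\n    V_tilde = (V0 ** 2) / np.sum(V0 ** 2, axis=0, keepdims=True)\n    return U_tilde @ V_tilde.T\n\ndef simulate_trajectory(P, n, rng, x0=0):\n    # Sample X_0, ..., X_n from the Markov chain with transition matrix P.\n    # A straightforward (slow) loop, kept for clarity.\n    n_states = P.shape[0]\n    traj = np.empty(n + 1, dtype=int)\n    traj[0] = x0\n    for t in range(n):\n        traj[t + 1] = rng.choice(n_states, p=P[traj[t]])\n    return traj\n\nrng = np.random.default_rng(0)\np, r, k = 1000, 10, 10\nP = random_lowrank_transition(p, r, rng)\nn = round(k * r * p * np.log(p))\ntraj = simulate_trajectory(P, n, rng)\n\\end{verbatim}\n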
Among the three estimators in the zoomed plots (second rows of Figure \\ref{figure:mlevsrank}), the rank constrained estimator slightly outperforms the nuclear norm penalized estimator and the spectral estimator.\nThis observation is consistent with our algorithmic design: the nuclear norm minimization procedure is actually the initial step of Algorithm \\ref{alg:dc}; thus the rank-constrained estimator can be seen as a refined version of the nuclear norm regularized estimator. \n\n\nWe also consider the case where the invariant distribution $\\pi$ is ``imbalanced'', i.e., we construct $\\P$ such that $\\min_{i=1,\\ldots,p} \\pi_i$ is quite small and the appearance of some states is significantly less than the others. Specifically, given $\\gamma_1,\\gamma_2 >0$, we generate a diagonal matrix ${\\mathbf{D}}$ with i.i.d. beta-distributed (${\\rm Beta}(\\gamma_1, \\gamma_2)$) diagonal elements.\nAfter obtaining $\\widetilde {\\mathbf{U}}$ and $\\widetilde {\\mathbf{V}}$ in the same way as in the beginning of this subsection, we compute $\\widetilde \\P = \\widetilde {\\mathbf{U}} \\widetilde {\\mathbf{V}}^\\top {\\mathbf{D}}$. The ground truth transition matrix $\\P$ is obtained after a normalization of $\\widetilde \\P$. Then, we simulate a Markov chain trajectory of length $n = {\\rm round}(krp\\log(p))$ on $p$ states.\nIn our experiment, we set $p = 1000$, $r = 10$, $k\\in [10,100]$, and $\\gamma_1 = \\gamma_2 = 0.5$. The detailed results are plotted in Figure \\ref{figure:betamlevsrank}. As can be seen from the figure, under the imbalanced setting, the rank-constrained, nuclear norm penalized and spectral methods perform much better than the empirical approach in terms of all the three statistical performance measures ($\\eta_F$, $\\eta_{KL}$ and $\\eta_{UV}$). In addition, the rank-constrained estimator exhibits a clear advantage over two other approaches.\n\n\n\\begin{figure*}\n\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:mle-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/mle-rankF-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:mle-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/mle-rankKL-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:mle-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/mle-rankUV-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\n\t\\\\\n\t\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:svd-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/svd-rankF-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:svd-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/svd-rankKL-plot-n1000-r10-rolls10}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:svd-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/normal\/\/svd-rankUV-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\n\t\\caption{The first row compares the rank-constrained estimator, nuclear norm penalized estimator, spectral method, and empirical estimator in terms of $\\eta_F = \\norm{\\P - \\widehat \\P}_F^2, \\eta_{KL}= D_{\\mathrm{KL}}(\\mathbf{P},\\widehat\\mathbf{P})$, and $\\eta_{UV} = \\max\\bigl\\{ \\norm{\\sin \\Theta(\\widehat{{\\mathbf{U}}},{\\mathbf{U}},)}_F^2, \\norm{\\sin \\Theta(\\widehat{{\\mathbf{V}}},{\\mathbf{V}})}_F^2 \\bigr\\}$. 
The second row provides the zoomed plots of the first row without the empirical estimator. Here, $n = {\\rm round}(k rp\\log p)$ with $p = 1,000$, $r = 10$ and $k$ ranging from $10$ to $100$. \n\t\n\t}\n\t\\label{figure:mlevsrank}\n\t\n\t\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betamle-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/mle-rankF-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betamle-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/mle-rankKL-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betamle-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/mle-rankUV-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\n\t\\\\\n\t\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betasvd-etaF}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/svd-rankF-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betasvd-etaKL}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/svd-rankKL-beta-plot-n1000-r10-rolls10}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.33\\textwidth}\n\t\t\\label{figure:betasvd-etaUV}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/beta\/\/svd-rankUV-beta-plot-n1000-r10-rolls10}\n\t\n\t\\end{subfigure}\n\t\n\t\\caption{The first row compares the rank-constrained estimator, nuclear norm penalized estimator, spectral method, and empirical estimator in terms of $\\eta_F = \\norm{\\P - \\widehat \\P}_F^2, \\eta_{KL}= D_{\\mathrm{KL}}(\\mathbf{P},\\widehat\\mathbf{P})$, and $\\eta_{UV} = \\max\\bigl\\{ \\norm{\\sin \\Theta(\\widehat{{\\mathbf{U}}},{\\mathbf{U}},)}_F^2, \\norm{\\sin \\Theta(\\widehat{{\\mathbf{V}}},{\\mathbf{V}})}_F^2 \\bigr\\}$ with imbalanced\n\t\tinvariant distribution. The second row provides the zoomed plots of the first row without the empirical estimator. Here, $n = {\\rm round}(k rp\\log p)$ with $p = 1,000$, $r = 10$ and $k$ ranging from $10$ to $100$. \n\t}\n\t\\label{figure:betamlevsrank}\n\n\\end{figure*}\n\n\n\n\n\n\n\n\n\n\n\\subsection{Experiments with Manhattan Taxi data}\n\\label{sec:taxi}\n\nIn this experiment, we analyze a real dataset of $1.1\\times 10^7$ trip records of NYC Yellow cabs (Link: \\url{https:\/\/s3.amazonaws.com\/nyc-tlc\/trip+data\/yellow_tripdata_2016-01.csv}) in January 2016. Our goal is to partition the Manhattan island into several areas, in each of which the taxi customers share similar destination preference. This can provide guidance for balancing the supply and demand of taxi service and optimizing the allocation of traffic resources.\n\nWe discretize the Manhattan island into a fine grid and model each cell of the grid as a state of the Markov chain; each taxi trip can thus be viewed as a state transition of this \nMarkov chain \\citep{yang2017dynamic, benson2017spacey, liu2012understanding}. For stability concerns, our model ignores the cells that have fewer than $1,000$ taxi visits. Given that the traffic dynamics typically vary over time, \nwe fit the MC under three periods of a day, i.e., $06:00\\sim 11:59$ (morning), $12:00 \\sim 17:59$ (afternoon) and $18:00 \\sim 23:59$ (evening), where the number of the active states $p = 803$, $999$ and $1,079$ respectively. 
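\n\nAs described next, each period is analyzed by estimating the rank-constrained transition matrix and then clustering the states through the leading left singular subspace of the estimate. A minimal sketch of this clustering step is given below (Python, assuming NumPy and scikit-learn; the rank-constrained estimate is taken as given and denoted {\\tt P\\_hat}).\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.cluster import KMeans\n\ndef cluster_states(P_hat, r=4, n_clusters=4, seed=0):\n    # k-means on the leading r left singular vectors of the estimated\n    # transition matrix; states in the same cluster share similar\n    # destination preferences.\n    U, _, _ = np.linalg.svd(P_hat, full_matrices=False)\n    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)\n    return km.fit_predict(U[:, :r])\n\\end{verbatim}\n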
\nWe apply the rank-constrained likelihood approach to obtain the estimator $\\widehat \\mathbf{P}^r$ of the transition matrix, and then apply $k$-means to the leading left singular subspace of $\\widehat{\\mathbf{P}}^r$ to classify all the states into several clusters.\nFigure \\ref{figure:lr4} presents the clustering results with $r = 4$ \nand $k = 4$ \nfor the three periods of the day.\n\nFirst of all, we notice that the locations within the same cluster are geographically close to each other. This is non-trivial: no GPS or geographic information is used in the clustering analysis. This implies that taxi customers in neighboring locations have similar destination preferences, which is consistent with common sense. Furthermore, to track the variation of the traffic dynamics over time, \nFigure \\ref{figure:pb4} visualizes the destination distribution corresponding to the center of the green cluster in the morning, afternoon, and evening, respectively. We identify the popular destinations in the different periods of the day and provide corresponding explanations in the following table: \n\\begin{table}[H]\n\t\\centering \n\t\\def\\arraystretch{0.7}\t\n\t\\begin{tabular}{c@{\\hskip 1cm}p{7cm}@{\\hskip 1cm}p{4cm}}\n\t\t\\hline\\hline\n\t\tTime & \\hfil Popular Destinations & \\hfil Explanation \\\\ \\hline\\hline\n\t\tMorning & New York--Presbyterian Medical Center,\\vspace{-.3cm} \\newline \\hfil 42--59 St. Park Ave, Penn Station & hospitals, workplaces,\\vspace{-.3cm} \\newline \\quad the train station \\\\ \\hline\n\t\tAfternoon & \\centering 66 St. Broadway & \\hfil lunch, afternoon break,\\vspace{-.3cm} \\newline short trips \\\\ \\hline\n\t\tEvening & \\centering Penn Station & commuting home \\\\\\hline\\hline \n\t\\end{tabular}\n\\end{table}\n\nFinally, it might be tempting to model the taxi trips by an HMM, where regions of Manhattan correspond to hidden states. However, such a region is always part of the current observation (i.e., the location of the taxi): it is observable and is not a hidden state that has to be inferred from all past observations. As a result, although both the HMM and the low-rank Markov model could be applied to taxi trips, the low-rank Markov model is simpler and more accurate. \n\n\n\\begin{figure*}[t!]\n\t\\centering\n\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lrh1}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/test-P-rank-r4hh2}\n\t\\end{subfigure} \n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lrh2}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/test-P-rank-r4hh3}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lrh3}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/test-P-rank-r4hh4}\n\t\\end{subfigure}\n\t\\caption{\n\t\tMeta-state compression of the Manhattan traffic network via the rank-constrained approach with $r = 4$: mornings (left), afternoons (middle) and evenings (right). Each color or symbol represents a meta-state.
One can see that the daytime state aggregation results differ significantly from those of the evening.}\n\t\\label{figure:lr4}\n\\end{figure*}\n\n\n\n\\begin{figure*}[tb]\n\t\\centering\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lr6h1}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/P-cluster-rank-r4hh2}\n\t\\end{subfigure} \n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lr6h2}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/P-cluster-rank-r4hh3}\n\t\\end{subfigure}\n\n\t\\begin{subfigure}{.32\\textwidth}\n\t\t\\label{figure:lr6h3}\n\t\t\\includegraphics[width=\\linewidth]{figs\/\/taxi\/\/P-cluster-rank-r4hh4}\n\t\\end{subfigure}\n\t\\caption{Visualization of the destination distributions corresponding to the pick-up locations in the green clusters in Figure \\ref{figure:lr4}: mornings (left), afternoons (middle) and evenings (right).}\n\t\\label{figure:pb4}\n\\end{figure*}\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nThis paper studies the recovery and state compression of low-rank Markov chains from empirical trajectories via a rank-constrained likelihood approach. We provide statistical upper bounds for the $\\ell_2$ risk and the Kullback-Leibler divergence between the proposed estimators and the true probability transition matrix. \nThen, a novel DC programming algorithm is developed to solve the associated rank-constrained optimization problem. The proposed algorithm non-trivially combines several recent optimization techniques, such as the penalty approach, the proximal DC algorithm, and the multi-block sGS-ADMM. We further study a new class of majorized indefinite-proximal DC algorithms for solving general non-convex non-smooth DC programming problems and provide a unified convergence analysis.\nExperiments on simulated data and on the Manhattan taxi-trip data illustrate the merits of our approach.\n\n\n\\section{Technical lemmas}\n\n\\begin{lemma}\n\t\t\\label{lem:kl_to_l2}\n\t\tGiven two discrete distributions $u, v \\in \\mathbb{R}^p$, if there exist $\\alpha, \\beta > 0$ such that $u_j \\in \\{0\\}\\cup [\\alpha \/ p, \\beta \/ p]$ and $v_j \\in [\\alpha \/ p, \\beta \/ p]$ for any $j \\in [p]$, then we have $$D_{\\mathrm{KL}}(u, v) \\ge \\{p\\alpha \/ (2\\beta ^ 2)\\} \\ltwonorm{u - v} ^ 2.$$ This implies that under Assumption \\ref{asp:1}, for any $\\mathbf{Q} \\in {\\mathcal C}$, $$\\fnorm{\\mathbf{P} - \\mathbf{Q}} ^ 2 \\le \\frac{2\\beta^2}{\\alpha\\pi_{\\min}p}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}).$$\n\\end{lemma}\n\\begin{proof}{Proof of Lemma \\ref{lem:kl_to_l2}}\n\t\t\\noindent By the mean value theorem, for any $j \\in [p]$ such that $u_j \\neq 0$, there exists $\\xi_j \\in [\\alpha \/ p, \\beta \/ p]$ such that \n\t\t\\[\n\t\t\t\\log(v_j) - \\log(u_j) = \\frac{v_j - u_j}{u_j} - \\frac{(v_j - u_j) ^ 2}{2\\xi_j ^2}.
\n\t\t\\]\n\t\tTherefore, \n\t\t\\[\n\t\t\t\\begin{aligned}\n\t\t\t\tD_{\\mathrm{KL}}(u, v) & = \\sum_{j: u_j \\neq 0} u_j\\log(u_j \/ v_j) = \\sum_{j: u_j \\neq 0} (u_j - v_j) + \\sum_{j: u_j \\neq 0} \\frac{(u_j - v_j) ^ 2}{2 \\xi_j ^ 2} \\\\\n\t\t\t\t& \\ge 1 - \\sum_{j: u_j \\neq 0} v_j + \\sum_{j: u_j \\neq 0} \\frac{p \\alpha(u_j - v_j) ^ 2}{2\\beta ^ 2} = \\sum_{j: u_j = 0} v_j - u_j + \\sum_{j: u_j \\neq 0} \\frac{p \\alpha(u_j - v_j) ^ 2}{2\\beta ^ 2} \\\\\n\t\t\t\t& \\ge \\sum_{j: u_j = 0} \\frac{p(v_j - u_j) ^ 2}{\\beta} + \\sum_{j: u_j \\neq 0} \\frac{p \\alpha(u_j - v_j) ^ 2}{2\\beta ^ 2} \\ge \\frac{p\\alpha}{2\\beta ^ 2} \\ltwonorm{u - v} ^ 2. \n\t\t\t\\end{aligned} \n\t\t\\]\n\t\tThen we have\n\t\t\\[\n\t\t\t\\fnorm{\\mathbf{P} - \\mathbf{Q}} ^ 2 = \\sum_{i \\in [p]} \\ltwonorm{P_{i\\cdot} - Q_{i\\cdot}} ^ 2 \\le \\sum_{i \\in [p]} \\frac{2\\beta ^ 2\\pi_{i}}{p\\alpha \\pi_{\\min}}D_{\\mathrm{KL}}(P_{i\\cdot}, Q_{i\\cdot}) = \\frac{2\\beta ^ 2}{p\\alpha \\pi_{\\min}}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}). \n\t\t\\]\n\\end{proof}\n\\section{Proof of Theorem \\ref{thm:nuclear}}\n\n\\begin{proof}\n\t\\noindent Given the definition of $\\widehat \\mathbf{P}$, \n\t\\begin{equation}\\label{ineq:tilde_D-P-hat-P}\n\t\t\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P},\\widehat{\\mathbf{P}}) = \\frac{1}{n}\\sum_{i=1}^n \\langle\\log(\\mathbf{P}) - \\log(\\widehat{\\mathbf{P}}), \\mathbf{X}_i\\rangle = \\ell_n(\\widehat{\\mathbf{P}}) - \\ell_n(\\mathbf{P}) \\leq \\lambda (\\nnorm{\\widehat \\mathbf{P}} - \\nnorm{\\mathbf{P}}) \\le \\lambda \\nnorm{\\mathbf{P} - \\widehat \\mathbf{P}}.\n\t\\end{equation}\n\tThen we have \n\t\\be\n\t\\label{ineq:basic}\n\t\\begin{aligned}\n\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) & = {\\mathcal L}(\\widehat\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) = {\\mathcal L}(\\widehat\\mathbf{P}) - \\ell_n(\\widehat\\mathbf{P}) + \\ell_n(\\widehat\\mathbf{P}) - \\ell_n(\\mathbf{P}) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) \\\\\n\t\t& \\le {\\mathcal L}(\\widehat \\mathbf{P}) - \\ell_n(\\widehat \\mathbf{P}) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) + \\lambda \\nnorm{\\mathbf{P} - \\widehat \\mathbf{P}} \\\\\n\t\t& = D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) + \\lambda \\nnorm{\\mathbf{P} - \\widehat \\mathbf{P}}. \n\t\\end{aligned}\n\t\\ee\n\tDefine ${\\mathcal E} := \\{\\lambda \\ge 2 \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))}\\}$. If ${\\mathcal E}$ holds, then by Lemma \\ref{lem:large_lambda} and then Lemma \\ref{lem:kl_to_l2}, we obtain that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) & \\le D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) + 4(2r)^{1 \/ 2}\\lambda \\fnorm{\\mathbf{P} - \\widehat \\mathbf{P}} \\\\\n\t\t\t& \t\\le D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) + 8\\lambda \\beta\\biggl(\\frac{r D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})}{p\\pi_{\\min} \\alpha}\\biggr)^{\\!1\/2}. 
\n\t\t\\end{aligned}\n\t\\]\n\tFor any $\\xi > 1$, an application of Lemma \\ref{lem:uniform_law} with $\\eta = \\xi \\pi_{\\min}\/ (rp\\pi_{\\max}\\log p)$ yields\n\t\\[\n\t\t\\begin{aligned}\n\t\t\t\\mathbb{P}\\biggl[ \\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}) \\le 16\\lambda \\beta\\biggl(\\frac{r D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P})}{p\\pi_{\\min} \\alpha}\\biggr)^{\\!1\/2} + \\frac{2C_1 r\\pi_{\\max}\\beta ^ 2 p\\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\eta \\biggr\\} \\cap {\\mathcal E}\\biggr] \\ge 1 - C_2 e^{- \\xi} - \\mathbb{P}({\\mathcal E}^c), \n\t\t\\end{aligned}\n\t\\]\t\n\twhere $C_1$ and $C_2$ are exactly the same constants as in Lemma \\ref{lem:uniform_law}. Some algebra yields that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\t\\mathbb{P}\\biggl[ \\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}) \\le \\frac{256\\lambda ^ 2 \\beta ^ 2 r}{p\\pi_{\\min} \\alpha} + \\frac{2C_1 r\\pi_{\\max}\\beta ^ 2 p\\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\eta \\biggr\\} \\cap {\\mathcal E}\\biggr] \\ge 1 - C_2 e^{- \\xi} - \\mathbb{P}({\\mathcal E}^c). \n\t\t\\end{aligned}\n\t\\]\t\t\n\tBy Lemma \\ref{lem:gradient}, there exists a universal constant $C_3 > 0$ such that if we choose \n\t\\[\n\t\t\\lambda = C_3\\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\]\n\tthen for any $\\xi > 1$, whenever $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max(20, \\xi ^ 2)\\log p$, we have that \n\t\\[\n\t\t\\begin{aligned}\n\t\t\t\\mathbb{P} \\biggl( D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}^r) \\gtrsim \\frac{\\xi r\\pi_{\\max}\\beta ^ 2 p \\log p}{\\pi_{\\min}\\alpha ^ 3n} + \\frac{\\xi\\pi_{\\min}}{rp \\pi_{\\max}\\log p}\\biggr) \\lesssim e^{- \\xi} + p^{-(\\xi - 1)} + p^{-10},\n\t\t\\end{aligned}\n\t\\]\n\tas desired. The Frobenius-norm error bound follows immediately by applying Lemma \\ref{lem:kl_to_l2}. \n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm:rank}}\n\n\\begin{proof}\t\n\t\\noindent Given the definition of $\\widehat \\mathbf{P}^r$, \n\t\\begin{equation}\\label{ineq:tilde_D-P-hat-P}\n\t\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P},\\widehat{\\mathbf{P}}^r) = \\frac{1}{n}\\sum_{i=1}^n \\langle\\log(\\mathbf{P}) - \\log(\\widehat{\\mathbf{P}}^r), \\mathbf{X}_i\\rangle = \\ell_n(\\widehat{\\mathbf{P}}^r) - \\ell_n(\\mathbf{P}) \\leq 0.\n\t\\end{equation}\n\tThen we have\n\t\\be\n\t\\label{ineq:basic}\n\t\\begin{aligned}\n\t\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}^r) & = {\\mathcal L}(\\widehat\\mathbf{P}^r) - {\\mathcal L}(\\mathbf{P}) = {\\mathcal L}(\\widehat\\mathbf{P}^r) - \\ell_n(\\widehat\\mathbf{P}^r) + \\ell_n(\\widehat\\mathbf{P}^r) - \\ell_n(\\mathbf{P}) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) \\\\\n\t\t& \\le {\\mathcal L}(\\widehat \\mathbf{P}^r) - \\ell_n(\\widehat \\mathbf{P}^r) + \\ell_n(\\mathbf{P}) - {\\mathcal L}(\\mathbf{P}) = D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}^r) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}^r). \n\t\\end{aligned}\n\t\\ee\n\tFor any $\\xi > 1$, an application of Lemma \\ref{lem:uniform_law} with $\\eta = \\pi_{\\min}\\xi \/ (rp\\pi_{\\max}\\log p)$ yields\n\t\\[\n\t\t\\mathbb{P}\\biggl\\{D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat \\mathbf{P}^r) \\ge \\max\\biggl(\\frac{2C_1 r\\pi_{\\max} \\beta^2 p \\log p }{\\pi_{\\min}\\alpha ^ 3n}, \\frac{\\xi\\alpha ^ 2}{rp^ 2 \\pi_{\\max} \\log p}\\biggr)\\biggr\\} \\le C_2e^{- \\xi}, \n\t\\]\n\tas desired. 
The Frobenius-norm error bound immediately follows by Lemma \\ref{lem:kl_to_l2}. \n\t\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm:lower_bound}}\n\n\\begin{proof}\n\t~\\hspace{-.5cm} To simplify the notation, assume without loss of generality that $p$ is a multiple of $4(r-1)$. For any $1\\leq k \\leq m$, consider\n\t\\begin{equation}\\label{eq:lower-bound-P^(k)2}\n\t\\begin{split}\n\t\\mathbf{P}^{(k)} = & \\begin{bmatrix}\n\t\\frac{2-\\alpha}{p} \\bone_{p\\times (p\/2)} ~~~~ \\frac{\\alpha}{p} \\bone_{p\\times (p\/2)} \n\t\\end{bmatrix} \\\\\n\t& + \\frac{\\eta(2-\\alpha)}{2p} \\begin{bmatrix}\n\t\\bzero_{(p\/2)\\times (p\/4)} & \\bzero_{(p\/2)\\times (p\/4)} & \\bzero_{(p\/2)\\times (p\/2)}\\\\\n\t~~ \\bR^{(k)} ~~ \\cdots ~~~~~ \\bR^{(k)} & -\\bR^{(k)} ~~ \\cdots ~~ -\\bR^{(k)} & \\bzero_{(p\/4)\\times (p\/2)} \\\\\n\t\\underbrace{-\\bR^{(k)} ~~ \\cdots ~~ - \\bR^{(k)}}_{l_0} & \\underbrace{~~\\bR^{(k)} ~~ \\cdots ~~~~~~ \\bR^{(k)}}_{l_0} & \\bzero_{(p\/4)\\times (p\/2)} \\\\\n\t\\end{bmatrix},\n\t\\end{split}\n\t\\end{equation}\n\twhere $l_0 = \\frac{p}{4(r - 1)}$, $\\bR^{(k)} \\in \\{0, 1\\}^{(p\/4)\\times (r-1)}$, and $\\eta$ is some positive value to be determined later. Let \n\t\\begin{equation}\n\t\\mu := \\biggl(\\frac{2-\\alpha}{p} 1_{p\/2}^\\top ~~ \\frac{\\alpha}{p} 1_{p\/2}^\\top\\biggr)^\\top.\n\t\\end{equation}\n\tFirst of all, regardless of the value of $\\bR^{(k)}$, one can see that for any $k \\in [m]$, \n\t\\begin{enumerate}\n\t\t\\item ${\\rm rank}(\\mathbf{P}^{(k)})\\leq r$; \n\t\t\\item $\\mu^\\top \\mathbf{P}^{(k)} = \\mu^\\top$, and hence $\\mu$ is the invariant distribution of $\\mathbf{P}^{(k)}$; \n\t\t\\item $\\mathbf{P}^{(k)} \\in \\Theta$. \n\t\\end{enumerate} \n\tLet $\\{\\bR^{(k)}\\}_{k = 1}^m$ be i.i.d. matrices of independent Rademacher entries, i.e., for any $k \\in [m]$, $\\{R^{(k)}_{ij}\\}_{i \\in [n], j\\in [d]}$ are independent Rademacher variables, and $\\{\\bR^{(k)}\\}_{k \\in [m]}$ are independent. For any $k \\neq l$, one can see that $\\bigl\\{\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr|\\bigr\\}$ are i.i.d. uniformly distributed on $\\{0, 2\\}$, and that \n\t$$\\mathbb{E}\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr| = 1, \\quad {\\rm Var}\\bigl(\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr|\\bigr)= 1,\\quad \\bigl|\\bigl|\\bR^{(k)}_{ij} - \\bR^{(l)}_{ij}\\bigr| - 1\\bigr| = 1.$$\n\tBy Bernstein's inequality \\citep[][Theorem~2.10]{BLM13}, for any $t>0$, \n\t\\begin{equation*}\n\t{\\mathbb{P}}\\biggl\\{\\biggl|\\bigl\\|\\bR^{(k)} - \\bR^{(l)}\\bigr\\|_1 - \\frac{p(r-1)}{4}\\biggr| \\ge \\biggl( \\frac{p(r - 1)t}{2}\\biggr)^{\\!1\/ 2} + t \\biggr\\} \\leq 2e^{-t}. \n\t\\end{equation*}\n\tLet $t = p(r - 1)\/ 64$ and $m = \\lfloor \\exp\\{p(r-1)\/128\\} \/ 2^{1 \/ 2} \\rfloor$. Since $p(r - 1) \\ge 192\\log 2$, we have that $m \\ge 2$. Then a union bound yields that\n\t\\begin{equation*}\n\t\\begin{split}\n\t{\\mathbb{P}}\\biggl(\\forall 1\\leq k < l \\leq m, ~ \\frac{p(r - 1)}{8} \\le \\bigl\\|\\bR^{(k)} - \\bR^{(l)}\\bigr\\|_1 \\le \\frac{3p(r - 1)}{8}\\biggr) \\ge 1 - 2m^2\\exp\\biggl(\\frac{-p(r - 1)}{64}\\biggr) > 0. 
\n\t\\end{split}\n\t\\end{equation*}\n\tHence, there exist $\\bR^{(1)},\\ldots, \\bR^{(m)} \\subseteq \\{-1, 1\\}^{(p \/ 4)\\times (r-1)}$ such that\n\t\\begin{equation}\\label{ineq:P^k-P^l}\n\t\\forall 1\\leq k < l \\leq m,~\\frac{p(r - 1)}{8} \\le \\bigl\\|\\bR^{(k)} - \\bR^{(l)}\\bigr\\|_1 \\le \\frac{3p(r - 1)}{8}, \n\t\\end{equation}\n\twhich, given that $\\fnorm{\\bR^{(j)} - \\bR^{(k)}} ^ 2 = 2\\|\\bR^{(j)} - \\bR^{(k)}\\|_1$, further implies that \n\t\\begin{equation}\n\t\\forall 1\\leq k < l \\leq m,~\\frac{p(r - 1)}{4} \\le \\fnorm{\\bR^{(k)} - \\bR^{(l)}}^2 \\le \\frac{3p(r - 1)}{4}. \n\t\\end{equation}\n\n\n\n\n\tNow we have that \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\left\\|\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}\\right\\|_1 = \\frac{2l_0\\eta(2-\\alpha)}{p} \\|\\bR^{(k)} - \\bR^{(l)}\\|_1 \\geq \\frac{\\eta p (2-\\alpha)}{16}, \\left\\|\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}\\right\\|_1 \\le \\frac{3\\eta p (2 - \\alpha)}{16}, \\\\\n\t\\fnorm{\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}} ^ 2 = \\frac{l_0\\eta ^ 2(2-\\alpha) ^ 2}{p ^ 2} \\fnorm{\\bR^{(k)} - \\bR^{(l)}} ^ 2 \\geq \\frac{\\eta ^ 2(2-\\alpha) ^ 2}{16}, \\left\\|\\mathbf{P}^{(k)} - \\mathbf{P}^{(l)}\\right\\|_1 \\le \\frac{3\\eta ^ 2(2-\\alpha) ^ 2}{16}. \n\t\\end{split}\n\t\\end{equation*}\n\tBesides, \n\t\\begin{equation*}\n\t\\begin{split}\n\tD_{\\mathrm{KL}} ({\\mathcal X}^{(k)} || {\\mathcal X}^{(l)}) & = n\\sum_{i\\in [p]} \\pi_i D_{\\mathrm{KL}}\\bigl(\\mathbf{P}_{[i,:]}^{(k)}, \\mathbf{P}^{(l)}_{[i,:]}\\bigr)\n\t= n\\sum_{i=(p\/2)+1}^{p} \\sum_{j=1}^{p\/2} \\frac{\\alpha}{p} P_{ij}^{(k)}\\log \\bigl(P_{ij}^{(k)}\/P_{ij}^{(l)}\\bigr) \\\\\n\t& = \\frac{2n\\alpha}{p}\\sum_{i=1}^{p\/4} \\frac{2-\\alpha}{2} D_{\\mathrm{KL}}\\bigl(u^{(k)}_i, u^{(l)}_i\\bigr),\n\t\\end{split}\n\t\\end{equation*}\n\twhere $u^{(k)}_i = \\frac{2}{p}1_{p\/2} + \\frac{\\eta}{p}\\left[\\bR^{(k)}_{[i,:]} ~ \\cdots ~ \\bR^{(k)}_{[i,:]} ~ -\\bR^{(k)}_{[i,:]} ~ \\cdots ~ -\\bR^{(k)}_{[i,:]}\\right]$ corresponds to a $(p\/2)$-dimensional distribution. By \\citet[][Lemma~4]{ZWa19}, we have that \n\t\\begin{equation*}\n\tD_{\\mathrm{KL}}\\bigl(u^{(k)}_i, u^{(l)}_i\\bigr) \\leq \\frac{3l_0\\eta ^ 2}{p}\\bigl\\|\\bR_{[i,:]}^{(k)} - \\bR_{[i,:]}^{(l)}\\bigr\\|_2^2 = \\frac{6 l_0\\eta^2}{p}\\bigl\\|\\bR_{[i,:]}^{(k)} - \\bR_{[i,:]}^{(l)}\\bigr\\|_1.\n\t\\end{equation*}\n\tTherefore, \n\t\\begin{equation*}\n\t\\begin{split}\n\tD_{\\mathrm{KL}}({\\mathcal X}^{(k)}, {\\mathcal X}^{(l)}) & = \\frac{6n\\alpha(2-\\alpha)l_0 \\eta ^ 2}{p ^ 2}\\sum_{i=1}^{p\/4} \\bigl\\|\\bR_{[i,:]}^{(k)} - \\bR_{[i,:]}^{(l)}\\bigr\\|_1 \\leq \\frac{12n\\alpha l_0\\eta^2}{p^2}\\|\\bR^{(k)} - \\bR^{(l)}\\|_1 \\\\\n\t& \\leq \\frac{12n\\alpha l_0 \\eta^2}{p^2} \\frac{3p(r-1)}{8} = \\frac{9n\\eta^2\\alpha}{8}.\n\t\\end{split}\n\t\\end{equation*}\n\n\tBy Fano's inequality \\citep[][Lemma~3]{yu1997assouad}, we have that\n\t\\begin{equation*}\n\t\\begin{split}\n\n\t\\inf_{\\widehat{\\mathbf{P}}} \\sup_{\\mathbf{P}\\in \\Theta} \\fnorm{\\widehat{\\mathbf{P}} - \\mathbf{P}} ^ 2 \\geq & \\inf_{\\widehat{\\mathbf{P}}} \\sup_{\\mathbf{P}\\in \\{\\mathbf{P}^{(1)}, \\ldots, \\mathbf{P}^{(m)}\\}} \\fnorm{\\hat{\\mathbf{P}} - \\mathbf{P}} ^ 2 \\geq \\frac{\\eta ^ 2(2-\\alpha) ^ 2}{16} \\left(1 - \\frac{9n\\eta^2\\alpha - \\log 2}{\\log m}\\right). 
\n\n\n\t\\end{split}\n\t\\end{equation*}\n\tThere exist universal constants $c_1, c_2 > 0$ such that when $p(r - 1) \\ge 192 \\log 2$, choosing $\\eta = c_1\\{p(r - 1) \/ (n \\alpha)\\}^{\\!1 \/ 2}$ yields that \n\t\\[\n\t\\inf_{\\widehat{\\mathbf{P}}} \\sup_{\\mathbf{P}\\in \\Theta} \\fnorm{\\widehat{\\mathbf{P}} - \\mathbf{P}} ^ 2 \\ge c_2\\frac{p(r - 1)}{n\\alpha}. \n\t\\]\n\t\\quad $\\square$\n\t\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{thm:uv}}\n\\begin{proof}\n\t~Let $\\widehat{\\bU}_{\\perp}, \\widehat{\\bV}_{\\perp} \\in \\Re^{p\\times (p-r)}$ be the orthogonal complement of $\\widehat{\\bU}$ and $\\widehat{\\bV}$. Since $\\bU, \\bV, \\widehat{\\bU}$, and $\\widehat{\\bV}$ are the leading left and right singular vectors of $\\mathbf{P}$ and $\\widehat{\\mathbf{P}}$, we have\n\t\\begin{equation*}\n\t\\begin{split}\n\t\t\\|\\widehat{\\mathbf{P}} - \\mathbf{P}\\|_F \\geq & \\|\\widehat{\\bU}_{\\perp}^\\top(\\widehat{\\mathbf{P}} - \\bU\\bU^\\top \\mathbf{P})\\|_F = \\|\\widehat{\\bU}_{\\perp}^\\top \\bU\\bU^\\top\\mathbf{P}\\|_F \\geq \\|\\widehat{\\bU}^\\top_{\\perp} \\bU\\|_F \\sigma_r(\\bU^\\top \\mathbf{P}) = \\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F \\sigma_r(\\mathbf{P}).\n\t\\end{split}\n\t\\end{equation*}\n\tSimilar argument also applies to $\\|\\sin\\Theta(\\widehat{\\bV}, \\bV)\\|$. Thus,\n\t$$\\max\\bigl(\\|\\sin\\Theta(\\widehat{\\bU}, \\bU)\\|_F, \\|\\sin\\Theta(\\widehat{\\bV}, \\bV)\\|_F\\bigr)\\le \\min\\biggl(\\frac{\\|\\widehat{\\mathbf{P}}-\\mathbf{P}\\|_F}{\\sigma_r(\\mathbf{P})}, r^{1 \/ 2}\\biggr).$$\n\tThe rest of the proof immediately follows from Theorem \\ref{thm:nuclear}.\n\\end{proof}\n\n\n\n\\section{Proof of Lemma \\ref{lem:large_lambda}}\n\n\\begin{proof}\n\t~By the inequality (52) in Lemma 3 in the Appendix of \\citet{negahban2012restricted}, we have for any $\\boldsymbol{\\Delta} \\in \\Re^{p \\times p}$, \n\t\\[\n\t\t\\nnorm{\\mathbf{P} + \\boldsymbol{\\Delta}} - \\nnorm{\\mathbf{P}} \\ge \\nnorm{\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} - \\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} - 2\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}}. \n\t\\]\n\tBesides, \n\t\\[\n\t\t\\begin{aligned}\n\t\t\\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}) - \\ell_n(\\mathbf{P}) & \\ge \\inn{\\nabla \\ell_n(\\mathbf{P}), \\boldsymbol{\\Delta}} = \\inn{\\Pi_{{\\mathcal N}}(\\nabla\\ell_n(\\mathbf{P})), \\boldsymbol{\\Delta}} \\ge - |\\inn{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n (\\mathbf{P})), \\boldsymbol{\\Delta}}| \\\\\n\t\t& \\ge - \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla \\ell_n(\\mathbf{P}))} \\nnorm{\\boldsymbol{\\Delta} } \\ge - \\frac{\\lambda}{2} \\bigl(\\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} + \\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}^\\perp}} \\bigr). \n\t\t\\end{aligned}\n\t\\]\n\tBy the optimality of $\\widehat\\mathbf{P}$, $\\ell_n(\\widehat\\mathbf{P}) + \\lambda \\nnorm{\\widehat\\mathbf{P}} \\le \\ell_n(\\mathbf{P}) + \\lambda \\nnorm{\\mathbf{P}}$. 
Therefore, \n\t\\[\n\t\t\\lambda \\bigl(\\nnorm{\\boldsymbol{\\Delta}_{\\overline {\\mathcal M}}} + 2\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}} -\\nnorm{\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} \\bigr) \\ge \\lambda (\\nnorm{\\mathbf{P}} - \\nnorm{\\widehat\\mathbf{P}}) \\ge - \\frac{\\lambda}{2}\\bigl( \\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}}} + \\nnorm{ \\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}}\\bigr), \n\t\\]\n\tfrom which we deduce that \n\t\\[\n\t\\nnorm{ \\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}^\\perp}} \\le 3\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\overline{\\mathcal M}}} + 4\\nnorm{\\mathbf{P}_{{\\mathcal M}^\\perp}}. \n\t\\]\n\\end{proof}\n\n\\section{Proof of Lemma \\ref{lem:gradient}}\n\n\\begin{proof}\n\t\t~Some algebra yields that \n\t\\be\n\t\\label{eq:log_likelihood}\n\t\\nabla \\ell_n(\\mathbf{Q} ) = \\frac{1}{n} \\sum\\limits_{i=1}^n -\\frac{\\mathbf{X}_i}{\\inn{\\mathbf{Q}, \\mathbf{X}_i}}. \n\t\\ee\n\tFor ease of notation, write $\\bZ_i := - \\mathbf{X}_i \/ \\inn{\\mathbf{P}, \\mathbf{X}_i}$. Note that $\\bZ_i$ is well-defined almost surely. Besides, \n\t\\[\n\t\\mathbb{E}(\\bZ_i | \\bZ_{i - 1}) = \\mathbb{E}(\\bZ_i | \\mathbf{X}_{i-1}) = \\sum\\limits_{j=1}^p -\\frac{e_{X_{i - 1}}e_{j}^\\top}{P_{X_{i - 1}, j}} P_{X_{i - 1}, j} = -e_{X_{i - 1}} 1^\\top. \n\t\\]\n\tThus $\\opnorm{\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1})} \\le p \/ \\alpha + \\sqrt{p} =: R < \\infty$. Define $\\mathbf{S}_k := \\sum_{i=1}^k \\bZ_i - \\mathbb{E}(\\bZ_i| \\bZ_{i - 1})$, then $\\{\\mathbf{S}_k\\}_{k = 1}^n$ is a matrix martingale. In addition, \n\t\\[\n\t\\begin{aligned}\n\t\\mathbb{E} & \\bigl\\{(\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1}))^\\top (\\bZ_i - \\mathbb{E} (\\bZ_i | \\bZ_{i - 1})) | \\{\\mathbf{S}_k\\}_{k = 1}^{i -1}\\bigr\\} = \\mathbb{E} \\bigl\\{(\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1}))^\\top (\\bZ_i - \\mathbb{E} (\\bZ_i | \\bZ_{i - 1})) | \\bZ_{i - 1}\\bigr\\} \\\\\n\t& = \\mathbb{E} (\\bZ^\\top_i \\bZ_i | \\bZ_{i - 1}) - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1})^\\top \\mathbb{E} (\\bZ_i | \\bZ_{i - 1}) = \\biggl(\\sum_{j = 1}^p \\frac{e_je_j^\\top}{P_{X_{i - 1}, j}}\\biggr) - 11^\\top =: \\bW^{(1)}_i, \n\t\\end{aligned}\n\t\\]\n\tand similarly, \n\t\\[\n\t\\begin{aligned}\n\t\\mathbb{E} \\bigl\\{(\\bZ_i - \\mathbb{E}(\\bZ_i | \\bZ_{i - 1}))(\\bZ_i - \\mathbb{E} (\\bZ_i | \\bZ_{i - 1}))^\\top | \\{\\mathbf{S}_k\\}_{k = 1}^{i -1} \\bigr\\} = \\biggl(\\sum\\limits_{j = 1}^ p \\frac{ e_{X_{i - 1}}e_{X_{i - 1}}^\\top}{P_{X_{i - 1}, j}} \\biggr) - p e_{X_{i - 1}}e_{X_{i - 1}}^\\top =: \\bW^{(2)}_i. \t\n\t\\end{aligned}\n\t\\]\n\tWrite $\\opnorm{\\sum_{i=1}^n \\bW^{(1)}_i}$ as $W_n^{(1)}$, $\\opnorm{\\sum_{i = 1}^n \\bW^{(2)}_i}$ as $W_n^{(2)}$ and $\\max(W_n^{(1)}, W_n^{(2)})$ as $W_n$. By the matrix Freedman inequality \\citep[Corollary~1.3]{tropp2011freedman}, for any $t \\ge 0$ and $\\sigma^2 > 0$, \n\t\\be\n\t\\label{eq:matrix_freedman}\n\t\\mathbb{P} ( \\opnorm{\\mathbf{S}_n} \\ge t, W_n\\le \\sigma^2 ) \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + Rt \/ 3}\\biggr). \n\t\\ee\n\tNow we need to choose an appropriate $\\sigma^2$ so that $W_n \\le \\sigma^2$ holds with high probability. Note that $W^{(1)}_n \\le np(\\alpha^{-1} + 1)$ and $W_n^{(2)} \\le (p^2 \\alpha^{-1} - p) \\sup_{j\\in [p]} \\sum_{i = 1}^n 1_{\\{X_i = s_j\\}}$. In the following we derive a bound for $\\sup_{j\\in [p]} \\sum_{i = 1}^n 1_{\\{X_i = s_j\\}}$. 
For any $j \\in [p]$, by \\citet[Theorem~1.2]{JFS18}, which is a variant of Bernstein's inequality for Markov chains, we have that \n\t\\be\n\t\\label{eq:mc_bernstein}\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n (1_{\\{X_i = s_j\\}} - \\pi_j) > \\epsilon \\biggr\\} \\le \\exp\\biggl( -\\frac{n\\epsilon ^ 2}{2(A_1\\beta \/ p + A_2\\epsilon)}\\biggr), \n\t\\ee\n\twhere \n\t\\[\n\tA_1 = \\frac{1 + \\max(\\rho_+, 0)}{1 - \\max(\\rho_+, 0)}~~~\\textnormal{and}~~~A_2 = \\frac{1}{3}1_{\\{\\rho_{+} \\le 0\\}} + \\frac{5}{1 - \\rho_+} 1_{\\{\\rho_+ > 0\\}}. \n\t\\]\n\tSome algebra yields that for any $\\xi > 0$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_i = s_j\\}} - \\pi_j > \\biggl(\\frac{4A_1\\xi}{np}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi}{n} \\biggr\\} \\le \\exp( - \\xi). \n\t\\]\n\tA union bound over $j \\in [p]$ yields that \n\t\\[\n\t\\mathbb{P}\\biggl\\{ \\sup_{j \\in [p]} \\frac{1}{n}\\sum_{i=1}^n (1_{\\{X_i = s_j\\}}- \\pi_j) > \\biggl(\\frac{4A_1\\xi \\log p}{np}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi \\log p}{n}\\biggr\\} \\le p^{-(\\xi - 1)}, \n\t\\]\n\twhich implies that \n\t\\[\n\t\\mathbb{P}\\biggl\\{\\sup_{j \\in [p]} \\frac{1}{n}\\sum\\limits_{i=1}^n 1_{\\{X_i = s_j\\}} > \\pi_{\\max} + \\biggl(\\frac{4A_1\\xi \\log p}{np}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi \\log p}{n}\\biggr\\} \\le p^{-(\\xi - 1)}. \n\t\\]\n\tTherefore, whenever $n \\pi_{\\max} (1 - \\rho_+) \\ge 2 \\log p$, we have that \n\t\\[\n\t\t\\mathbb{P}\\biggl(\\sup_{j \\in [p]} \\frac{1}{n} \\sum_{i = 1}^n 1_{\\{X_i = s_j\\}} \\gtrsim \\pi_{\\max}\\biggr) \\le \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\]\n\tCombining this with the bounds of $W_n^{(1)}$ and $W_n^{(2)}$, we have that \n\t\\[\n\t\t\\mathbb{P}\\biggl(W_n \\ge \\frac{C_1np ^ 2\\pi_{\\max}}{\\alpha}\\biggr) \\le \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr), \n\t\\]\n\twhere $C_1$ is a universal constant. Now choosing $\\sigma^2 = C_1 np^2 \\pi_{\\max} \/ \\alpha$, we deduce that for any $t \\ge 0$, \n\t\\[\n\t\\begin{aligned}\n\t\t\\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t) & = \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n > \\sigma^2 ) \\\\\n\t\t& \\le \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(W_n > \\sigma^2 ) \\\\\n\t\t& \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + Rt \/ 3}\\biggr) + \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\end{aligned}\n\t\\]\n\tEquivalently, for any $\\xi > 1$, \n\t\\[\n\t\t\\mathbb{P}\\biggl\\{\\biggl\\|\\frac{1}{n}\\mathbf{S}_n\\biggr\\|_2 \\gtrsim \\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\} \\le 4p^{-(\\xi - 1)} + \\exp\\biggl(- \\frac{n\\pi_{\\max} (1 - \\rho_+)}{2}\\biggr). \n\t\\]\t\n\tFinally, observe that for any $i \\in [n]$, $\\Pi_{{\\mathcal N}}(\\mathbb{E}(\\bZ_i | \\bZ_{i - 1})) = \\Pi_{{\\mathcal N}}( -e_{X_{i - 1}} 1^\\top) = \\bzero$. Therefore, $\\Pi_{{\\mathcal N}}(\\nabla\\ell_n(\\mathbf{P})) = n^{-1}\\mathbf{S}_n$ and the final conclusion then follows. 
\t\n\\end{proof}\n\n\n\\section{Proof of Lemma \\ref{lem:uniform_law}}\n\\begin{proof}\n\t\\noindent We first split $\\mathcal{C}(\\eta)$ as the union of the sets \n\t\\begin{equation*}\n\t\\mathcal{C}_l := \\left\\{\\mathbf{Q} \\in \\mathcal{C}(\\eta): 2^{l-1}\\eta \\leq D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) \\leq 2^l\\eta, ~ {\\rm rank}(\\mathbf{Q})\\leq r\\right\\}, \\quad l=1,2,3,\\ldots. \n\t\\end{equation*}\n\tDefine\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\gamma_l = & \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\bigl|D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - \\widetilde D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})\\bigr| \\\\\n\t= & \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\biggl|\\frac{1}{n}\\sum_{i=1}^n \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle\\biggr|.\n\t\\end{split}\n\t\\end{equation*}\n\tFirst, we wish to apply \\citet[][Theorem~7]{Ada08} to bound $|\\gamma_l - \\mathbb{E} \\gamma_l|$. Adamczak's bound entails the following asymptotic weak variance\n\t\\[\n\t\\sigma^2 := \\sup_{\\mathbf{Q}\\in{\\mathcal C}_l} {\\rm Var} \\biggl\\{\\sum_{i = S_1 + 1}^{S_2} \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle \\biggr\\} \/ \\mathbb{E} T_2. \n\t\\]\n\tWe have that \n\t\\[\n\t\\begin{aligned}\n\t\\sigma^2 & \\le \\sup_{\\mathbf{Q}\\in{\\mathcal C}_l}\\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle \\biggr\\}^{\\!2}\\biggr] \/ \\mathbb{E} T_2\\\\\n\t& = \\frac{1}{2} \\sup_{\\mathbf{Q}\\in{\\mathcal C}_l} \\sum\\limits_{j = 1}^{\\infty} \\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle \\biggr\\}^{\\!2} 1_{\\{T_2 = j\\}}\\biggr] \\\\\n\t& \\le \\frac{1}{2} \\sum\\limits_{j = 1}^{\\infty} {4j^2\\log^ 2(\\beta \/ \\alpha) }\\mathbb{P}(T_2 = j) = {2 \\log^2(\\beta \/ \\alpha) \\mathbb{E} (T_2^2)} = {8\\log ^ 2(\\beta \/ \\alpha)}. \n\t\\end{aligned}\n\t\\]\n\tBy \\citet[][Theorem~7]{Ada08}, there exists a universal constant $K > 1$ such that for any $\\xi > 0$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{ |\\gamma_l - \\mathbb{E} \\gamma_l| \\ge K \\mathbb{E}\\gamma_l + {2\\log(\\beta \/ \\alpha)}\\biggl(\\frac{2K\\xi }{n}\\biggr)^{\\! 1 \/ 2} + \\frac{16 K\\log(\\beta \/ \\alpha)\\xi \\log n }{n}\\biggr\\} \\le K e^{- \\xi}. \n\t\\]\n\tSince $n^{- 1\/ 2} \\ge 2 n^{-1}\\log n$ for any positive integer $n$, we have that\n\t\\be\n\t\\label{eq:gamma_l_uniform_concentration}\n\t\\mathbb{P}\\biggl\\{ |\\gamma_l - \\mathbb{E} \\gamma_l| \\ge K \\mathbb{E}\\gamma_l + {11K\\log(\\beta \/ \\alpha)}\\biggl(\\frac{\\xi }{n}\\biggr)^{\\! 1 \/ 2}\\biggr\\} \\le K e^{- \\xi}. \t\n\t\\ee\n\tNext, we bound $\\mathbb{E} \\gamma_l$. Let $\\{\\varepsilon_i\\}_{i=1}^n$ be $n$ independent Rademacher random variables. 
By a symmetrization argument, \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\mathbb{E}\\gamma_l = & \\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\biggl|\\frac{1}{n}\\sum_{i=1}^n \\langle \\log (\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i\\rangle - \\mathbb{E}\\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i \\rangle\\biggr|\\biggr)\\\\\n\t\\leq & 2\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl|\\frac{1}{n}\\sum_{i=1}^n \\varepsilon_k \\langle \\log(\\mathbf{P}) - \\log(\\mathbf{Q}), \\mathbf{X}_i\\rangle\\biggr|\\biggr).\n\t\\end{split}\n\t\\end{equation*}\n\tLet $\\phi_i(t) = (\\alpha \/ p)\\{\\log(\\langle\\mathbf{P}, \\mathbf{X}_i\\rangle + t) - \\log(\\langle \\mathbf{P}, \\mathbf{X}_i\\rangle)\\}$. Then $\\phi_{i}(0) = 0$ and $|\\phi_i'(t)| \\leq 1$ for all $t$ such that $t + \\langle\\mathbf{P}, \\mathbf{X}_i\\rangle \\geq \\alpha\/p$. In other words, $\\phi_i$ is a contraction map for $t \\geq \\min_{j, k \\in [p]}(P_{jk} - \\alpha\/p)$. By the contraction principle (Theorem 4.12 in \\cite{ledoux2013probability}),\n\n\n\n\n\n\n\n\t\\begin{equation}\\label{ineq:expected-gamma_l}\n\t\t\\begin{split}\n\t\t\t\\mathbb{E}\\gamma_l \\leq & \\frac{2p}{\\alpha}\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl|\\frac{1}{n}\\sum_{i=1}^n\\varepsilon_i \\phi_i\\left(\\langle \\mathbf{Q}-\\mathbf{P}, \\mathbf{X}_i\\rangle\\right)\\biggr|\\biggr) \\leq \\frac{4p}{\\alpha}\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl|\\frac{1}{n}\\sum_{i = 1}^n \\varepsilon_i\\langle \\mathbf{Q}-\\mathbf{P}, \\mathbf{X}_i\\rangle\\biggr|\\biggr)\\\\\n\t\t\t\\leq & \\frac{4p}{\\alpha}\\mathbb{E}\\biggl(\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l}\\biggl\\|\\frac{1}{n}\\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i\\biggr\\|_2 \\|\\mathbf{Q}-\\mathbf{P}\\|_\\ast\\biggr) \\leq \\frac{4p}{\\alpha} \\mathbb{E}\\biggl\\|\\frac{1}{n}\\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i\\biggr\\|_2 \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\|\\mathbf{Q} - \\mathbf{P}\\|_\\ast. \n\t\t\\end{split}\n\t\\end{equation}\t\n\tBy Lemma \\ref{lem:kl_to_l2}, \n\n\n\n\n\n\n\t\\begin{equation}\n\t\\label{ineq:Q-P-nuclear}\n\t\\begin{split}\n\t\t\\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} \\|\\mathbf{Q} - \\mathbf{P}\\|_\\ast & \\leq \\sup_{\\mathbf{Q} \\in \\mathcal{C}_l} (2r)^{1 \/ 2} \\fnorm{\\mathbf{Q} - \\mathbf{P}} \\le 2\\beta \\biggl(\\frac{2^l \\eta r}{p\\alpha \\pi_{\\min}}\\biggr)^{\\!1 \/ 2}. \n\t\\end{split}\n\t\\end{equation}\t\n\n\tHence, the remaining task is to bound $\\mathbb{E}\\|n^{-1}\\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i\\|$. From now on, we denote $\\varepsilon_i \\mathbf{X}_i$ by $\\bZ_i$. One can see that $(\\bZ_i)_{i = 1}^n$ is a martingale difference sequence. We wish to apply the matrix Freedman inequality \\citep[Corollary~1.3]{tropp2011freedman} to bound the average of $(\\bZ_i)_{i = 1}^n$. 
We have that\n\t\\begin{equation*}\n\t\\begin{split}\n\t\\biggl\\|\\sum_{i = 1}^n \\mathbb{E} \\bigl(\\bZ_i^\\top \\bZ_i \\vert X_{i - 1}\\bigr) \\biggr\\|_2 = & \\biggl\\|\\sum_{i=1}^n \\sum_{j=1}^p P_{X_{i - 1}, j} (e_{X_{i - 1}} e_j^\\top)^\\top (e_{X_{i - 1}}e_j^\\top)\\biggr\\|_2 = \\biggl\\|\\sum_{j=1}^p \\sum_{i=1}^n P_{X_{i - 1}, j} e_je_j^\\top\\biggr\\|_2 \\\\\n\t= & \\max_{j \\in [p]} \\sum_{i = 1}^n P_{X_{i - 1}, j} =: W_n^{(1)}\n\t\\end{split}\n\t\\end{equation*} \n\tand that \n\t\\begin{equation*}\n\t\\begin{split}\n\t\\biggl\\|\\sum_{i = 1}^n \\mathbb{E} \\bigl(\\bZ_i\\bZ_i^\\top\\vert X_{i - 1}\\bigr) \\biggr\\|_2 = & \\biggl\\|\\sum_{i=1}^n \\sum_{j=1}^p P_{X_{i - 1}, j} e_{X_{i - 1}} e_{X_{i - 1}}^\\top \\biggr\\|_2 = \\biggl\\|\\sum_{i=1}^n e_{X_{i - 1}} e_{X_{i - 1}}^\\top\\biggr\\|_2 \\\\\n\t= & \\max_{j \\in [p]} \\sum_{i = 1}^n 1_{\\{X_{i - 1} = j\\}} =: W_n^{(2)}. \n\t\\end{split}\n\t\\end{equation*}\t\n\tWe first bound $W_n^{(1)}$. Note that for any $j \\in [p]$, $\\mathbb{E} (P_{X_{i - 1}, j}) = \\pi_j $, and that\n\t\\[\n\t{\\rm Var}_{\\pi}(P_{X_{i - 1}, j}) = \\sum_{k = 1}^p \\pi_k (P_{kj} - \\pi_j)^2 = \\sum_{k = 1}^p \\pi_k P^2_{kj} - \\pi_j ^ 2 \\le \\pi_j(1 - \\pi_j). \n\t\\]\n\tBy a variant of Bernstein's inequality for Markov chains \\citep[Theorem~1.2]{JFS18}, we have that for any $j \\in [p]$, \n\t\\[\n\t\\label{eq:mc_bernstein}\n\t\\mathbb{P}\\biggl(\\frac{1}{n} \\sum_{i = 1}^n P_{X_{i - 1}, j} - \\pi_j > \\epsilon \\biggr) \\le \\exp\\biggl\\{ -\\frac{n\\epsilon ^ 2}{2(A_1\\pi_j + A_2\\epsilon)}\\biggr\\}, \n\t\\]\n\twhere\n\t\\[\n\tA_1 := \\frac{1 + \\max(\\rho_+, 0)}{1 - \\max(\\rho_+, 0)}~~~\\textnormal{and}~~~A_2 := \\frac{1}{3}1_{\\{\\rho_{+} \\le 0\\}} + \\frac{5}{1 - \\rho_+} 1_{\\{\\rho_+ > 0\\}}. \n\t\\]\n\tA union bound yields that \n\t\\be\n\t\\label{eq:w1}\n\t\\mathbb{P}\\bigl\\{W_n^{(1)} \\ge n\\pi_{\\max} + (4nA_1 \\pi_{\\max}\\xi \\log p)^{1 \/ 2} + {4A_2\\xi \\log p} \\bigr\\} \\le p^{-(\\xi - 1)}. \n\t\\ee\n\tNext we bound $W_n^{(2)}$. Note that $W_n^{(2)} \\le \\max_{j \\in [p]} \\sum_{i =1}^n 1_{\\{X_{i - 1} = s_j\\}}$. Similarly, by \\citet[Theorem~1.2]{JFS18}, for any $j \\in [p]$, \n\n\n\t\\be\n\t\\label{eq:mc_bernstein}\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_{i - 1} = s_j\\}} - \\pi_j > \\epsilon \\biggr\\} \\le \\exp\\biggl\\{ -\\frac{n\\epsilon ^ 2}{2(A_1\\pi_j + A_2\\epsilon)}\\biggr\\}, \n\t\\ee\n\tSome algebra yields that for any $\\xi > 0$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{\\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_{i - 1} = s_j\\}} - \\pi_j > \\biggl(\\frac{4A_1\\pi_j\\xi }{n}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi}{n} \\biggr\\} \\le \\exp( - \\xi). \n\t\\]\n\tBy a union bound over $j \\in [p]$, \n\t\\[\n\t\\mathbb{P}\\biggl\\{ \\max_{j \\in [p]} \\frac{1}{n}\\sum_{i=1}^n 1_{\\{X_{i - 1} = s_j\\}} > \\pi_{\\max} + \\biggl(\\frac{4A_1\\pi_{\\max}\\xi \\log p}{n}\\biggr)^{1 \/ 2} + \\frac{4A_2\\xi \\log p}{n}\\biggr\\} \\le p^{-(\\xi - 1)}, \n\t\\]\n\twhich further implies that \n\t\\be\n\t\\label{eq:w2}\n\t\\mathbb{P}\\bigl\\{ W_n^{(2)} \\ge n\\pi_{\\max} + (4nA_1 \\pi_{\\max}\\xi \\log p)^{1 \/ 2} + {4A_2\\xi \\log p} \\bigr\\} \\le p^{-(\\xi - 1)}. \n\t\\ee\n\tDefine $W_n := \\max(W_n^{(1)}, W_n^{(2)})$. Let $\\mathbf{S}_n := \\sum_{i = 1}^n \\varepsilon_i\\mathbf{X}_i$. 
By matrix Freedman's inequality \\citep[][Corollary~1.3]{tropp2011freedman}, for any $t \\ge 0$ and $\\sigma^2 > 0$, \n\t\\be\n\t\\label{eq:matrix_freedman}\n\t\\mathbb{P} ( \\opnorm{\\mathbf{S}_n} \\ge t, W_n\\le \\sigma^2 ) \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + t \/ 3}\\biggr). \n\t\\ee\n\tNow we need to choose an appropriate $\\sigma^2$ so that $W_n \\le \\sigma^2$ holds with high probability.\n\tGiven that $\\rho_+ > 0$ and $n\\pi_{\\max} \\ge 10\\xi \\log p \/ (1 - \\rho_+)$, combining \\eqref{eq:w1} and \\eqref{eq:w2} yields that \n\n\n\n\n\t\\be\n\t\\label{eq:w}\t\n\t\\mathbb{P}\\bigl( W_n \\ge 4n\\pi_{\\max}\\bigr) \\le 2p^{-(\\xi - 1)}. \n\t\\ee\n\tNow choosing $\\sigma^2 = 4n\\pi_{\\max}$ in \\eqref{eq:matrix_freedman}, we deduce that \n\t\\[\n\t\\begin{aligned}\n\t\\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t) & = \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n > \\sigma^2 ) \\\\\n\t& \\le \\mathbb{P}(\\opnorm{\\mathbf{S}_n} \\ge t, W_n \\le \\sigma^2 ) + \\mathbb{P}(W_n > \\sigma^2 ) \\\\\n\t& \\le 2p \\exp\\biggl(-\\frac{t^2 \/2 }{\\sigma^2 + t \/ 3}\\biggr) + 2p^{-(\\xi - 1)}. \n\t\\end{aligned}\n\t\\]\n\tChoose $\\xi = n \\pi_{\\max} (1 - \\rho_+) \/ (10 \\log p)$. As long as $n\\pi_{\\max}(1 - \\rho_+)\\ge \\max (20 \\log p, \\log n)$, we have that \n\t\\be\n\t\\label{eq:expectation_sn_op}\n\t\\mathbb{E} \\biggl\\|\\frac{1}{n}\\mathbf{S}_n\\biggr\\|_2 \\lesssim \\biggl(\\frac{\\pi_{\\max}\\log p}{n}\\biggr)^{\\!1 \/2}. \t\t\n\n\n\t\\ee\n\tCombining \\eqref{ineq:expected-gamma_l}, \\eqref{ineq:Q-P-nuclear} and \\eqref{eq:expectation_sn_op} yields that\n\t\\begin{equation*}\n\t\\mathbb{E} \\gamma_l \\lesssim \\frac{\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^{l}\\eta \\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2}. \n\t\\end{equation*}\n\tThen combining this with \\eqref{eq:gamma_l_uniform_concentration} yields that \n\t\\begin{equation*}\n\t\\mathbb{P}\\biggl\\{\\gamma_l \\gtrsim \\frac{\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^{l}\\eta \\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2} + \\log(\\beta \/ \\alpha)\\biggl(\\frac{\\xi }{n}\\biggr)^{\\! 1 \/ 2} \\hspace{0cm} \\biggr\\} \\lesssim e^{-\\xi}. \n\t\\end{equation*}\n\tLet $\\xi = 2^l \\eta \\pi_{\\max} rp \\log p \/ \\pi_{\\min}$. Then there exist universal constants $C_1, C_2 > 0$ such that\n\t\\begin{equation*}\n\t\\mathbb{P}\\biggl\\{\\gamma_l \\ge \\frac{C_1\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^{l}\\eta \\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2} \\biggr\\} \\le C_2\\exp\\biggl\\{- \\frac{(2l + 1) \\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr\\}. 
\t\n\t\\end{equation*}\n\tWe can thus deduce that there exists a universal constant $C_3 > 0$ such that \n\t\\begin{equation*}\n\t\\begin{split}\n\t&\\mathbb{P}\\biggl( |\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})|> \\frac{1}{2}D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) + \\frac{C_3\\pi_{\\max} \\beta ^ 2rp\\log p}{\\pi_{\\min}\\alpha ^ 3n} \\biggr)\\\\\n\t\\leq & \\sum_{l=0}^\\infty P\\left(\\exists \\mathbf{Q} \\in \\mathcal{C}_l, ~\\left|\\widetilde{D}_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q}) - D_{\\mathrm{KL}}(\\mathbf{P}, \\mathbf{Q})\\right|> 2^{l - 2}\\eta + \\frac{C_3\\pi_{\\max} \\beta ^ 2rp\\log p}{\\pi_{\\min}\\alpha ^ 3n}\\right)\\\\\n\t\\leq & \\sum_{l=0}^\\infty P\\biggl\\{\\gamma_l \\ge \\frac{C_1\\beta}{\\alpha ^ {3 \/ 2}}\\biggl(\\frac{2^l\\eta\\pi_{\\max}rp\\log p}{\\pi_{\\min}n}\\biggr)^{\\!1 \/2}\\biggr\\}\\\\\n\t\\leq & C_2 \\sum_{l=0}^\\infty \\exp\\biggl\\{- \\frac{(2l + 1) \\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr\\} \\le 2C_2 \\exp\\biggl(- \\frac{\\eta \\pi_{\\max} rp \\log p}{\\pi_{\\min}}\\biggr). \n\t\\end{split}\n\t\\end{equation*}\n\twhere we use the Cauchy-Schwarz inequality in the second step. \n\\end{proof}\n\n\n\\section{Alternative statistical error analysis}\n\\label{sec:alt}\n\n\\subsection{Main results}\n\nIn this section, we provide an alternative proof strategy that follows \\citet{NRW12} to bound the statistical error of $\\widehat \\mathbf{P}$ and $\\widehat \\mathbf{P} ^ r$. This strategy resolves the inconsistency issue of Theorems \\ref{thm:nuclear} and \\ref{thm:rank} when $n \\gg \\{rp \\pi_{\\max}(\\log p) \\beta\/ (\\pi_{\\min} \\alpha ^ {\\!3 \/ 2})\\} ^ 2$. For any $R>0$, define a constraint set ${\\mathcal C} (\\beta, R, \\kappa):= \\{\\boldsymbol{\\Delta} \\in \\Re^{p \\times p}: \\supnorm{\\boldsymbol{\\Delta}} \\le \\beta \/ p, \\fnorm{\\boldsymbol{\\Delta}}\\le R, \\nnorm{\\boldsymbol{\\Delta}} \\le \\kappa r^{1 \/ 2}\\fnorm{\\boldsymbol{\\Delta}} \\}$. An important ingredient of this statistical analysis is the localized restricted strong convexity \\citep{NWa11, FLS18} of the loss function $\\ell_n(\\mathbf{P})$ near $\\mathbf{P}$. This property allows us to bound the distance in the parameter space by the difference in the objective function value. Define the first-order Taylor remainder term of the negative log-likelihood function $\\ell_n(\\mathbf{Q})$ around $\\mathbf{P}$ as\n\\[\n\t\\delta\\ell_n(\\mathbf{Q}; \\mathbf{P}) := \\ell_n(\\mathbf{Q}) - \\ell_n(\\mathbf{P}) - \\nabla\\ell_n(\\mathbf{P})^\\top (\\mathbf{Q} - \\mathbf{P}). \n\\]\nThe following lemma establishes the desired local restricted strong convexity. \n\\begin{lemma}\n\t\\label{lem:restricted_strong_convexity}\n\tUnder Assumption \\ref{asp:1}, there exists a universal constant $K$ such that for any $\\xi > 1$, it holds with probability at least $1 - K\\exp(-\\xi)$ that for any $\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)$, \n\t\\be\n\t\\begin{aligned}\n\t\\delta\\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}; \\mathbf{P}) \\ge \\frac{\\alpha ^2}{8\\beta^2}\\fnorm{\\boldsymbol{\\Delta}}^2 & - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}. \n\\end{aligned}\n\t\\ee\n\\end{lemma}\n\nNow we present the statistical rates of $\\widehat \\mathbf{P}$ and $\\widehat \\mathbf{P} ^ r$. 
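\n\nBefore stating them, we record a small numerical sketch of the Taylor remainder $\\delta\\ell_n(\\,\\cdot\\,; \\mathbf{P})$ that Lemma \\ref{lem:restricted_strong_convexity} controls. The sketch assumes that $\\ell_n$ is the negative trajectory log-likelihood $\\ell_n(\\mathbf{Q}) = -n^{-1}\\sum_{i=1}^n \\log \\inn{\\mathbf{Q}, \\mathbf{X}_i}$ with $\\mathbf{X}_i = e_{X_{i-1}}e_{X_i}^\\top$, which is consistent with the Hessian computation in the proof of the lemma; the dimensions, the perturbation and all names in the code are illustrative assumptions only.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\np_dim = 5\n\n# A transition matrix P and a perturbed transition matrix Q = P + Delta.\nP = rng.random((p_dim, p_dim)) + 0.2\nP \/= P.sum(axis=1, keepdims=True)\nQ = 0.9 * P + 0.1 \/ p_dim              # rows of Q still sum to one\nDelta = Q - P\n\n# Sample a trajectory X_0, ..., X_n from P.\nn = 2000\ntraj = [0]\nfor _ in range(n):\n    traj.append(rng.choice(p_dim, p=P[traj[-1]]))\npairs = list(zip(traj[:-1], traj[1:]))\n\ndef neg_loglik(M):       # ell_n(M) = -(1\/n) * sum_i log M[X_{i-1}, X_i]\n    return -np.mean([np.log(M[i, j]) for i, j in pairs])\n\ndef grad_neg_loglik(M):  # gradient of ell_n at M\n    G = np.zeros_like(M)\n    for i, j in pairs:\n        G[i, j] -= 1.0 \/ (n * M[i, j])\n    return G\n\n# First-order Taylor remainder delta ell_n(Q; P); it is nonnegative because\n# ell_n is convex, and the lemma lower-bounds it over C(beta, R, kappa).\ndelta_ln = neg_loglik(Q) - neg_loglik(P) - np.sum(grad_neg_loglik(P) * Delta)\nprint(delta_ln, np.linalg.norm(Delta) ** 2)\n\\end{verbatim}\n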
\n\n\\begin{theorem}[Alternative statistical guarantee for $\\widehat \\mathbf{P}$]\n\t\\label{thm:nuclear_alt}\n\tUnder the same assumptions as in Theorem \\ref{thm:nuclear}, there exists a universal constant $C_1 > 0$ such that for any $\\xi > 1$, if we choose \n\t\\[\n\t\\lambda = C_1 \\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max} \\log p}{n\\alpha}\\biggr)^{\\! 1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\]\n\tthen whenever $n\\pi_{\\max}(1 - \\rho_+) \\ge \\max\\{\\max(20, \\xi ^ 2) \\log p, \\log n\\}$, we have with probability at least $1 - K\\exp(-\\xi) - 4p^{-(\\xi - 1)} - p ^{-1}$ that \n\t\\[\n\t\\begin{aligned}\n\t\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}} \\lesssim \\frac{\\beta ^ 2}{\\alpha ^ 2}\\biggl(\\frac{\\xi r p ^ 2 \\pi_{\\max} \\log p}{n \\alpha}\\biggr) ^ {\\!1 \/ 2}~~\\text{and}~~D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) \\lesssim \\frac{\\xi \\beta ^ 6 \\pi_{\\max} r p ^ 2 \\log p}{n \\alpha ^ 7},\n\t\\end{aligned}\n\t\\]\n\twhere $K$ is the same constant as in Lemma \\ref{lem:restricted_strong_convexity}. \n\\end{theorem}\n\n\n\\begin{theorem}[Alternative statistical guarantee for $\\widehat \\mathbf{P} ^ r$]\n\t\\label{thm:rank_alt}\n\tUnder the same assumptions as in Theorem \\ref{thm:nuclear}, for any $\\xi > 1$ we have with probability at least $1 - K\\exp(-\\xi) - 4p^{-(\\xi - 1)} - p ^{-1}$ that \n\t\\[\n\t\\fnorm{\\widehat\\mathbf{P} ^ r- \\mathbf{P}} \\lesssim \\frac{\\beta ^ 2}{\\alpha ^ 2}\\biggl(\\frac{\\xi r p ^ 2 \\pi_{\\max} \\log p}{n \\alpha}\\biggr) ^ {\\!1 \/ 2} ~~\\text{and}~~D_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P} ^ r) \\lesssim \\frac{\\xi \\beta ^ 6 \\pi_{\\max} r p ^ 2 \\log p}{n \\alpha ^ 7}, \n\t\\]\n\twhere $K$ is the same constant as in Lemma \\ref{lem:restricted_strong_convexity}. \n\\end{theorem}\n\nOne can see from the theorems above that the derived error bounds converge to zero as $n$ goes to infinity. Nevertheless, their dependence on $\\alpha$ and $\\beta$ is worse than that in Theorems \\ref{thm:nuclear} and \\ref{thm:rank} when $n \\lesssim \\{rp \\pi_{\\max}(\\log p) \\beta\/ (\\pi_{\\min} \\alpha ^ {\\!3 \/ 2})\\} ^ 2$. This is why we do not present these results in the main text. \n\n\\subsection{Proof of Lemma \\ref{lem:restricted_strong_convexity}}\n\n\\begin{proof}\n\t~Given any $\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)$, it holds for some $0\\le v \\le 1$ that \n\t\\be\n\t\\label{eq:rsc_first_step}\n\t\\begin{aligned}\n\t\t\\delta \\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}; \\mathbf{P}) & = \\frac{1}{2} \\text{vec}(\\boldsymbol{\\Delta})^\\top \\mathbf{H}_n(\\mathbf{P} + v\\boldsymbol{\\Delta})\\text{vec}(\\boldsymbol{\\Delta}) = \\frac{1}{2n} \\sum\\limits_{i=1}^n \\frac{\\inn{\\mathbf{X}_i, \\boldsymbol{\\Delta}}^2}{\\inn{\\mathbf{P} + v\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2} \\\\\n\t\t& \\ge \\frac{1}{2n}\\sum\\limits_{i=1}^n \\frac{p ^ 2}{4\\beta^2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2. \n\t\\end{aligned}\n\t\\ee\n\tDefine \n\t\\[\n\t\\Gamma_n := \\sup_{\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)} \\biggl |\\frac{1}{n} \\sum\\limits_{i=1}^n \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr |. \n\t\\]\n\tWe first bound the deviation of $\\Gamma_n$ from its expectation $\\mathbb{E} \\Gamma_n$. Note that $\\{\\mathbf{X}_i\\}_{i = 1}^n$ is a Markov chain on ${\\mathcal M} := \\{e_je_k^\\top\\}_{j, k = 1}^p$. 
Here we apply a tail inequality for suprema of unbounded empirical processes due to \\citet[][Theorem~7]{Ada08}. To apply this result, we need to verify that $\\{\\mathbf{X}_i\\}_{i = 1}^n$ satisfies the ``minorization condition'' as stated in Section 3.1 of \\citet{Ada08}. Below we state a specialized version of this condition. \n\t\n\t\\begin{condition}[Minorization condition]\n\t\t\\label{con:minorization}\n\t\tWe say that a Markov chain ${\\mathcal X}$ on ${\\mathcal S}$ satisfies the minorization condition if there exist $\\delta > 0$, a set ${\\mathcal C}\\subset {\\mathcal S}$ and a probability measure $\\nu$ on ${\\mathcal S}$ for which $\\forall_{x \\in {\\mathcal C}} \\forall_{{\\mathcal A} \\subset {\\mathcal S}} \\mathbb{P}(x, {\\mathcal A}) \\ge \\delta \\nu({\\mathcal A})$ and $\\forall_{x \\in {\\mathcal S}} \\exists_{n \\in \\mathbb{N}} \\mathbb{P}^n(x, {\\mathcal C}) > 0$. \n\t\\end{condition} \n\tOne can verify that the Markov chain $\\{\\mathbf{X}_i\\}_{i = 1}^n$ satisfies Condition \\ref{con:minorization} with $\\delta = 1\/ 2$, ${\\mathcal C} = \\{e_1e_2^\\top\\}$ and $\\nu(e_je_k^\\top) = P_{jk}1_{\\{j = 2\\}}$ for $j, k \\in [p]$.\n\t\n\tNow consider a new Markov chain $\\{(\\widetilde \\mathbf{X}_i, R_i)\\}_{i=1}^n$ constructed as follows. Let $\\{R_i\\}_{i=1}^n$ be i.i.d. Bernoulli random variables with $\\mathbb{E} R_1 = \\delta$. For any $i \\in \\{0, \\ldots, n - 1\\}$, at step $i$, if $\\widetilde \\mathbf{X}_i \\notin {\\mathcal C}$, we sample $\\widetilde \\mathbf{X}_{i + 1}$ according to $\\mathbb{P}(\\widetilde \\mathbf{X}_i, \\cdot)$; otherwise, the distribution of $\\widetilde \\mathbf{X}_{i + 1}$ depends on $R_i$: if $R_i = 1$, the chain regenerates in the sense that we draw $\\widetilde \\mathbf{X}_{i + 1}$ from $\\nu$, and if $R_i = 0$, we draw $\\widetilde \\mathbf{X}_{i + 1}$ from $(\\mathbb{P}(\\widetilde \\mathbf{X}_i, \\cdot) - \\delta \\nu(\\cdot)) \/ (1 - \\delta)$. One can verify that the sequence $\\{\\widetilde\\mathbf{X}_i\\}_{i=1}^n$ has exactly the same distribution as the original Markov chain $\\{\\mathbf{X}_i\\}_{i=1}^n$. Define $T_1 := \\inf\\{n > 0: R_n = 1\\}$ and $T_{i + 1} := \\inf\\{n > 0: R_{T_1 + \\ldots + T_i + n} = 1\\}$ for $i \\ge 1$. Note that $\\{T_i\\}_{i \\ge 1}$ are i.i.d. Geometric random variables with $\\mathbb{E} T_1 = 2$ and $\\|T_1\\|_{\\psi_1} \\le 4$. Let $S_0 := -1$, $S_j := T_1 + \\ldots + T_j$ and ${\\mathcal Y}_j := \\{ \\widetilde \\mathbf{X}_i\\}_{i = S_{j - 1} + 1}^{S_j}$ for $j \\ge 1$. Based on our construction, we deduce that $\\{{\\mathcal Y}_j\\}_{j \\ge 1}$ are independent. Thus we chop the original Markov chain $\\{\\mathbf{X}_i\\}_{i \\in [n]}$ into independent blocks. Finally, Adamczak's bound involves the following asymptotic weak variance\n\t\\[\n\t\\sigma^2 := \\sup_{\\boldsymbol{\\Delta}\\in{\\mathcal C}(\\beta, R, \\kappa)} {\\rm Var} \\biggl\\{\\sum_{i = S_1 + 1}^{S_2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr\\} \/ \\mathbb{E} T_2. 
\n\t\\]\n\tWe have\n\t\\[\n\t\\begin{aligned}\n\t\\sigma^2 & \\le \\sup_{\\boldsymbol{\\Delta}\\in{\\mathcal C}(\\beta, R, \\kappa)}\\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr\\}^2\\biggr] \/ \\mathbb{E} T_2\\\\\n\t& = \\frac{1}{2} \\sup_{\\boldsymbol{\\Delta}\\in{\\mathcal C}(\\beta, R, \\kappa)} \\sum\\limits_{j = 1}^{\\infty} \\mathbb{E} \\biggl[\\biggl\\{ \\sum_{i = S_1 + 1}^{S_2} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 - \\mathbb{E}(\\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2) \\biggr\\}^2 1_{\\{T_2 = j\\}}\\biggr] \\\\\n\t& \\le \\frac{1}{2} \\sum\\limits_{j = 1}^{\\infty} \\frac{j^2R^2\\beta^4}{p^4}\\mathbb{P}(T_2 = j) = \\frac{R^2 \\beta^4\\mathbb{E} (T_2^2)}{2p^4} = \\frac{3\\beta^4R^2}{p^4}. \n\t\\end{aligned}\n\t\\]\n\tBy \\citet[][Theorem~7]{Ada08}, there exists a universal constant $K$ such that for any $\\xi > 0$, \n\t\\be\n\t\\label{eq:1.7}\n\t\\mathbb{P}\\biggl\\{ |\\Gamma_n - \\mathbb{E} \\Gamma_n| \\ge K \\mathbb{E}\\Gamma_n + \\frac{R\\beta^2}{ p^2}\\biggl(\\frac{3K\\xi }{n}\\biggr)^{1 \/ 2} + \\frac{64K\\xi\\alpha^2 \\log n }{np^2}\\biggr\\} \\le K \\exp(-\\xi). \n\t\\ee\n\t\t\n\tNext, by the symmetrization argument and Ledoux-Talagrand contraction inequality \\citep{ledoux2013probability}, for $n$ independent and identically distributed Rademacher variables $\\{\\gamma_i\\}_{i=1}^n$, when $n \\pi_{\\max}(1 - \\rho_+) \\ge \\max(20\\log p, \\log n)$, we have that\n\t\\be\n\t\\label{eq:16}\n\t\\begin{aligned}\n\t\t\\mathbb{E} \\Gamma_n & \\le 2\\mathbb{E} \\sup_{\\substack{\\fnorm{\\boldsymbol{\\Delta}} \\le R, \\\\ \\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)}} \\biggl | \\frac{1}{n} \\sum\\limits_{i=1}^n \\gamma_i \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 \\biggr | \\le \\frac{8\\beta}{p}~\\mathbb{E} \\sup_{\\substack{\\fnorm{\\boldsymbol{\\Delta}} \\le R, \\\\ \\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)}} \\biggl | \\inn{\\frac{1}{n}\\sum\\limits_{i=1}^n \\gamma_i \\mathbf{X}_i, \\boldsymbol{\\Delta}} \\biggr| \\\\\n\t\t& \\le \\frac{8\\beta\\nnorm{\\boldsymbol{\\Delta}}}{p} ~\\mathbb{E} \\biggl\\| \\frac{1}{n}\\sum\\limits_{i=1}^n \\gamma_i \\mathbf{X}_i \\biggr \\|_{2} \\le \\frac{8\\kappa\\beta r^{1 \/ 2} R}{p}\\mathbb{E} \\biggl\\| \\frac{1}{n}\\sum\\limits_{i=1}^n \\gamma_i \\mathbf{X}_i \\biggr\\|_2 \\le \\frac{8\\kappa\\beta R}{p}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr)^{\\!1 \/ 2}, \n\t\\end{aligned}\n\t\\ee\n\twhere the penultimate inequality is due to the fact that $\\boldsymbol{\\Delta} \\in {\\mathcal C}(\\beta, R, \\kappa)$, and where the last inequality is due to \\eqref{eq:expectation_sn_op}. \n\t\n\tFinally, \n\t\\be\n\t\\mathbb{E} \\inn{\\boldsymbol{\\Delta}, \\mathbf{X}_i}^2 = \\sum\\limits_{1 \\le j,k \\le d} \\pi_j P_{jk}\\Delta^2_{jk} \\ge \\frac{\\alpha^2}{p^2} \\fnorm{\\boldsymbol{\\Delta}}^2. \n\t\\ee\n\tCombining all the bounds above, we have for any $\\xi > 1$, with probability at least $1 - K \\exp(-\\xi)$, \n\t\\be\n\t\\begin{aligned}\n\t\t\\delta\\ell_n(\\mathbf{P} + \\boldsymbol{\\Delta}; \\mathbf{P}) \\ge \\frac{\\alpha ^2}{8\\beta^2}\\fnorm{\\boldsymbol{\\Delta}}^2 & - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}. 
\n\t\\end{aligned}\n\t\\ee\n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{thm:nuclear_alt}}\n\n\\begin{proof}\n\t~For a specific $R$ whose value will be determined later, we construct an intermediate estimator $\\widehat\\mathbf{P}_{\\eta} $ between $\\widehat\\mathbf{P}$ and $\\mathbf{P}$:\n\t\\[\n\t\\widehat \\mathbf{P}_{\\eta} = \\mathbf{P} + \\eta (\\widehat \\mathbf{P} - \\mathbf{P}), \n\t\\]\n\twhere $\\eta = 1$ if $\\fnorm{\\widehat \\mathbf{P} - \\mathbf{P}} \\le R$ and $\\eta = R\/\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}}$ if $\\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}} > R$. For any $\\xi > 1$, there exists a universal constant $C > 0$ such that when \n\t\\[\n\t\\lambda = C\\biggl\\{\\biggl(\\frac{\\xi p ^ 2\\pi_{\\max}\\log p}{n\\alpha}\\biggr)^{1 \/ 2} + \\frac{\\xi p\\log p}{n \\alpha}\\biggr\\}, \n\t\\] \n\twe have by Lemmas \\ref{lem:restricted_strong_convexity} and \\ref{lem:gradient} that with probability at least $1 - K\\exp(-\\xi) - 4p^{-(\\xi - 1)} - p ^{-1}$, \n\t\\be\n\t\\label{eq:stat_error_fnorm}\n\t\\begin{aligned}\n\t\t\\frac{\\alpha ^2}{8\\beta^2} & \\fnorm{\\boldsymbol{\\Delta}}^2 - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\\\\n\t\t& \\le \\delta\\ell_n(\\widehat\\mathbf{P}_{\\eta}; \\mathbf{P}) \\le - \\inn{\\Pi_{{\\mathcal N}}(\\nabla{\\mathcal L}_n(\\mathbf{P})), \\widehat \\boldsymbol{\\Delta}_{\\eta}} + \\lambda ( \\nnorm{\\mathbf{P}} - \\nnorm{\\widehat\\mathbf{P}_{\\eta}} ) \\\\\n\t\t& \\le - \\inn{\\Pi_{{\\mathcal N}}(\\nabla{\\mathcal L}_n(\\mathbf{P})), \\widehat \\boldsymbol{\\Delta}_{\\eta}} + \\lambda\\nnorm{\\widehat\\boldsymbol{\\Delta}_{\\eta}} \\le (\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla {\\mathcal L}_n(\\mathbf{P}))} + \\lambda) \\nnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}} \\\\\n\t\t& \\le 8\\lambda \\nnorm{[\\widehat \\boldsymbol{\\Delta}_{\\eta}]_{\\overline{\\mathcal M}}} \\le 8\\lambda\\sqrt{r}\\fnorm{\\widehat\\boldsymbol{\\Delta}_{\\eta}},\n\t\\end{aligned}\n\t\\ee\n\twhere $K$ is the same universal constant as in Theorem \\ref{lem:restricted_strong_convexity}. Some algebra yields that \n\t\\be\n\t\\label{eq:5.24}\n\t\\begin{aligned}\n\t\t\\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}}^2 \\lesssim \\frac{\\beta ^ 2}{\\alpha ^ 2} \\max \\biggl\\{ \\frac{\\lambda ^ 2r\\beta ^ 2}{\\alpha ^ 2}, R\\biggl({\\frac{\\xi}{n}}\\biggr)^{1 \/ 2}, \\frac{\\xi \\alpha ^ 2 \\log n}{\\beta ^ 2n}, \\frac{pR}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\biggr\\}. \n\t\\end{aligned} \n\t\\ee\n\tLetting $R ^ 2$ be greater than the RHS of the inequality above, we can find a universal constant $C_4 > 0$ such that\n\t\\[\n\t\\begin{aligned}\n\tR \\ge \\frac{C_4\\beta ^ 2}{\\alpha ^ 2}\\biggl(\\frac{\\xi r p ^ 2 \\pi_{\\max} \\log p}{n \\alpha}\\biggr) ^ {\\!1 \/ 2} =: R_0. \n\t\\end{aligned}\n\t\\]\n\tChoose $R = R_0$. Therefore, $\\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}} \\le R$ and $\\widehat \\boldsymbol{\\Delta}_{\\eta} = \\widehat \\boldsymbol{\\Delta}$. We can thus reach the conclusion. 
As for the KL divergence, by \\citet[][Lemma~4]{zhang2018optimal}, we deduce that \n\t\\be\n\t\\label{eq:l2_to_kl}\n\tD_{\\mathrm{KL}}(\\mathbf{P}, \\widehat\\mathbf{P}) = \\sum\\limits_{j = 1}^p \\pi_j D_{\\mathrm{KL}}(\\mathbf{P}_{j\\cdot}, \\widehat\\mathbf{P}_{j\\cdot}) \\le \\sum\\limits_{j = 1}^p \\frac{\\beta^2 }{2 \\alpha^2}\\ltwonorm{\\mathbf{P}_{j\\cdot} - \\widehat\\mathbf{P}_{j\\cdot}}^2 = \\frac{\\beta^2}{2\\alpha^2} \\fnorm{\\widehat\\mathbf{P} - \\mathbf{P}}^2, \n\t\\ee\n\tfrom which we attain the conclusion. \n\\end{proof}\n\n\\subsection{Proof of Theorem \\ref{thm:rank_alt}}\n\n\\begin{proof}\t\n\t~Define ${\\widehat\\boldsymbol{\\Delta}}(r) := \\widehat \\mathbf{P}^r - \\mathbf{P}$. Since ${\\rm rank}(\\mathbf{P}) \\le r$ and ${\\rm rank}(\\widehat\\mathbf{P}^r) \\le r$, ${\\rm rank}(\\widehat\\boldsymbol{\\Delta}(r))\\le 2r$. Thus $\\nnorm{\\widehat \\boldsymbol{\\Delta}(r)} \\le (2r)^{1 \/ 2} \\fnorm{\\widehat \\boldsymbol{\\Delta}(r)}$. \n\n\tNow we follow the proof strategy of Theorem \\ref{thm:nuclear_alt} to establish the statistical error bound for $\\widehat \\mathbf{P}^r$. Similarly, for a specific $R > 0$ whose value will be determined later, we can construct an intermediate estimator $\\widehat\\mathbf{P}^r_{\\eta} $ between $\\widehat\\mathbf{P}^r$ and $\\mathbf{P}$:\n\t\\[\n\t\\widehat \\mathbf{P}^r_{\\eta} = \\mathbf{P} + \\eta (\\widehat \\mathbf{P}^r - \\mathbf{P}), \n\t\\]\n\twhere $\\eta = 1$ if $\\fnorm{\\widehat \\mathbf{P}^r - \\mathbf{P}} \\le R$ and $\\eta = R\/\\fnorm{\\widehat\\mathbf{P}^r - \\mathbf{P}}$ if $\\fnorm{\\widehat\\mathbf{P}^r - \\mathbf{P}} > R$. Let $\\widehat \\boldsymbol{\\Delta}_\\eta(r) := \\widehat \\mathbf{P}^r_\\eta - \\mathbf{P}$. Since $\\widehat \\boldsymbol{\\Delta}_\\eta(r) \\in {\\mathcal C}(\\beta, R, \\sqrt{2})$, applying Lemma \\ref{lem:restricted_strong_convexity} yields that \n\t\\be\n\t\\begin{aligned}\n\t\t\\frac{\\alpha ^2}{8\\beta^2} & \\fnorm{\\boldsymbol{\\Delta}}^2 - 8R\\biggl({\\frac{3K\\xi }{n}}\\biggr)^{1 \/ 2} - \\frac{8K\\xi\\alpha^2 \\log n }{\\beta^2 n} - \\frac{Kp\\kappa R}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\\\\n\t\t& \\le \\delta\\ell_n(\\widehat\\mathbf{P}^r_{\\eta}; \\mathbf{P}) \\le - \\inn{\\Pi_{{\\mathcal N}}(\\nabla{\\mathcal L}_n(\\mathbf{P})), \\widehat \\boldsymbol{\\Delta}_{\\eta}(r)} \\le \\opnorm{\\Pi_{{\\mathcal N}}(\\nabla {\\mathcal L}_n(\\mathbf{P}))} \\nnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}(r)}\\\\\n\t\t& \\le \\sqrt{2r}\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla {\\mathcal L}_n(\\mathbf{P}))} \\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}(r)}, \n\t\\end{aligned}\n\t\\ee\n\twhich further implies that there exists $C_1$ depending only on $\\alpha$ and $\\beta$ such that \n\t\\[\n\t\\fnorm{\\widehat \\boldsymbol{\\Delta}_{\\eta}(r)}^2 \\le C_1 \\max \\biggl\\{r\\opnorm{\\Pi_{{\\mathcal N}}(\\nabla {\\mathcal L}_n(\\mathbf{P}))} ^ 2, R\\biggl({\\frac{\\xi}{n}}\\biggr)^{1 \/ 2}, \\frac{\\xi \\alpha ^ 2 \\log n}{\\beta ^ 2n}, \\frac{pR}{\\beta}\\biggl(\\frac{r\\pi_{\\max}\\log p}{n}\\biggr) ^ {\\!1 \/ 2}\\biggr\\}. \n\t\\]\n\tBy a contradiction argument as in the proof of Theorem \\ref{thm:nuclear_alt}, we can choose an appropriate $R$ large enough such that $\\widehat \\mathbf{P}^r_\\eta = \\widehat \\mathbf{P}^r $ and attain the conclusion. 
\n\t\n\\end{proof}\n\n\n\\section{Proof of Proposition \\ref{prop:penlowrank}}\n\\begin{proof}\n\t~Since ${\\rm rank}({\\mathbf{X}}_c^*)\\le r$, we know that ${\\mathbf{X}}_c^*$ is in fact a feasible solution to the original problem (5) and $\\norm{{\\mathbf{X}}_c^*}_{*} - \\norm{{\\mathbf{X}}_c^*}_{(r)} = 0$. Therefore, for any feasible solution ${\\mathbf{X}}$ to\n\t(5), it holds that \n\t\\begin{align*} \n\tf({\\mathbf{X}}_c^*) ={}& f({\\mathbf{X}}_c^*) + c(\\norm{{\\mathbf{X}}_c^*}_{*} - \\norm{{\\mathbf{X}}_c^*}_{(r)})\\\\[5pt]\n\t\\le{}& f({\\mathbf{X}}) + c(\\norm{{\\mathbf{X}}}_* - \\norm{{\\mathbf{X}}}_{(r)})\n\t= f({\\mathbf{X}}).\n\t\\end{align*}\n\tThis completes the proof of the proposition.\n\\end{proof}\n\n\\section{Convergence and $o(1\/k)$ non-ergodic iteration complexity of Algorithm 1 (sGS-ADMM)}\n\nBefore deriving the desired results of Algorithm \\ref{alg:sGS-ADMM} for solving problem \\eqref{prob:D}, we present some notation and definitions for the subsequent analysis. Assume that the solution sets of \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} are nonempty. Then, the primal-dual solution pairs associated with problems \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} satisfy the following Karush-Kuhn-Tucker (KKT) system: \n\\begin{equation}\n\\label{KKT}\n0 \\in R({\\bf X}, {\\bf \\Xi}, {\\bf S}), \\quad {\\mathcal A}({\\bf X}) = b, \\quad \n{\\bf \\Xi} + {\\mathcal A}^*(y) + {\\bf S} = 0,\n\\end{equation}\nwith \n\\begin{equation*}\nR({\\bf X}, {\\bf \\Xi}, {\\bf S}) := \\begin{pmatrix}\n{\\bf \\Xi} + \\partial g({\\bf X}) \\\\[5pt]\n{\\bf X} + \\partial \\delta(\\norm{{\\bf S}}_2 \\le c)\n\\end{pmatrix}, \\quad ({\\bf X}, {\\bf \\Xi}, {\\bf S})\\in {\\rm dom}\\,g \\times \\Re^{p\\times p}\\times \\left\\{{\\bf S}\\in \\Re^{p\\times p}\\mid \\norm{{\\bf S}}_2 \\le c \\right\\}.\n\\end{equation*}\nDefine the KKT residual function $D:{\\rm dom}\\,g \\times \\Re^{p\\times p}\\times \\Re^n \\times \\left\\{{\\bf S}\\in \\Re^{p\\times p}\\mid \\norm{{\\bf S}}_2 \\le c \\right\\} \\to [0, +\\infty)$ as\n\\[\nD({\\bf X}, {\\bf \\Xi}, y, {\\bf S}):= {\\rm dist}^2(0, R({\\bf X}, {\\bf \\Xi}, {\\bf S})) + \\norm{{\\mathcal A}({\\bf X}) - b}^2 + \n\\norm{{\\bf \\Xi} + {\\mathcal A}^*(y) + {\\bf S}}^2.\n\\]\nWe say that $({\\bf X}, {\\bf \\Xi}, y, {\\bf S})\\in {\\rm dom}\\,g \\times \\Re^{p\\times p}\\times \\Re^n \\times \\left\\{{\\bf S}\\in \\Re^{p\\times p}\\mid \\norm{{\\bf S}}_2 \\le c \\right\\}$ is an $\\epsilon$-approximate primal-dual solution pair for problems \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} if\n$D({\\bf X}, {\\bf \\Xi}, y, {\\bf S}) \\le \\epsilon$. We show in the following theorem the global convergence and the $o(1\/k)$ iteration complexity results of Algorithm sGS-ADMM.\n\\begin{theorem}\n\t\\label{thm:sGS-ADMM}\n\tSuppose that the solution sets of \\eqref{prob:gen-convex-nuc} and \\eqref{prob:D} are nonempty. Let $\\{({\\bf\\Xi}^k,y^k,\\mathbb{S}^k,{\\mathbf{X}}^k)\\}$ be the sequence generated by Algorithm \\ref{alg:sGS-ADMM}. 
If $\\tau\\in(0,(1+\\sqrt{5}\\,)\/2)$, then the sequence $\\{({\\bf\\Xi}^k,y^k,\\mathbb{S}^k)\\}$ converges to an optimal solution of \\eqref{prob:D} and $\\{{\\mathbf{X}}^k\\}$ converges to an optimal solution of \\eqref{prob:gen-convex-nuc}.\n\tMoreover, there exists a constant $\\omega >0$ such that \n\t\\[\n\t\\min_{1\\le i \\le k} \\left\\{ D({\\bf X}^i, {\\bf \\Xi}^i, y^i, {\\bf S}^i) \\right\\} \\le \\frac{\\omega}{k}, \\ \\forall\\, k\\ge 1, \\quad{\\rm and} \\quad \n\t\\lim_{k\\to \\infty} \\left\\{ k \\min_{1\\le i \\le k} \\left\\{ D({\\bf X}^i, {\\bf \\Xi}^i, y^i, {\\bf S}^i) \\right\\} \\right\\}= 0.\n\t\\]\n\\end{theorem}\n\n\\begin{proof}\n\t~In order to use \\citep[Theorem 3]{li2016schur}, we need to write problem \\eqref{prob:D} as follows:\n\t\\begin{equation*} \n\t\\begin{array}{rll}\n\t\\min & g^*(-{\\bf\\Xi}) - \\inprod{b}{y} + {\\delta( \\norm{\\mathbb{S}}_2 \\le c)} \\\\\n\t\\mbox{s.t.} & {\\mathcal F} ({\\bf\\Xi}) + {\\mathcal A}_1^*(y) + {\\mathcal G}(\\mathbb{S}) = 0,\n\t\\end{array}\n\t\\end{equation*}\n\twhere ${\\mathcal F}, {\\mathcal A}_1$ and ${\\mathcal G}$ are linear operators such that for all $({\\bf \\Xi}, y, \\mathbb{S}) \\in \\Re^{p\\times p} \\times \\Re^n \\times \\Re^{p\\times p}$, ${\\mathcal F}({\\bf\\Xi}) = {\\bf \\Xi}$, \n\t${\\mathcal A}_1^*(y) = {\\mathcal A}^*(y)$ and ${\\mathcal G}(\\mathbb{S}) = \\mathbb{S}$. \n\tClearly, ${\\mathcal F} = {\\mathcal G} = {\\mathcal I}$ where ${\\mathcal I}:\\Re^{p\\times p} \\to \\Re^{p\\times p}$ is the identity map.\n\tTherefore, we have ${\\mathcal A}_1{\\mathcal A}_1^* \\succ 0$ and ${\\mathcal F}\\cF^* = {\\mathcal G}\\cG^* = {\\mathcal I} \\succ 0$. Hence, the assumptions and conditions in \\citep[Theorem 3]{li2016schur} are satisfied. The convergence results thus follow directly. Meanwhile, the non-ergodic iteration complexity results follow from \\citep[Theorem 6.1]{chen2017efficient}.\n\\end{proof}\n\n\n\\section{Proof of Theorems \\ref{thm:convergence-alg-MM} and \\ref{thm: MMconvergence}}\nWe only need to prove Theorem \\ref{thm: MMconvergence} \nas Theorem \\ref{thm:convergence-alg-MM} \nis a special case. 
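\n\nBefore giving the proof, we record a minimal numerical sketch of the type of majorized iteration covered by Theorems \\ref{thm:convergence-alg-MM} and \\ref{thm: MMconvergence}: at each step one picks $\\xi^k \\in \\partial q(x^k)$ and minimizes the strongly convex subproblem $\\min_x\\, p(x) + \\langle \\nabla g(x^k) - \\xi^k, x - x^k\\rangle + \\frac{1}{2}\\|x - x^k\\|^2_{\\mathcal{G}+\\mathcal{T}}$, which is consistent with the optimality condition used in Lemma \\ref{lemma:decrease} below. The toy objective, the choice $\\mathcal{G}+\\mathcal{T} = t^{-1}\\mathcal{I}$ and all names in the code are illustrative assumptions, not the algorithm of the main text.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy DC objective: theta(x) = g(x) + p(x) - q(x) with\n#   g(x) = 0.5*||Ax - b||^2   (smooth, convex)\n#   p(x) = lam*||x||_1        (convex, prox-friendly)\n#   q(x) = mu*||x||_2         (convex; subgradient x\/||x|| away from 0)\nrng = np.random.default_rng(0)\nA = rng.standard_normal((40, 20))\nb = rng.standard_normal(40)\nlam, mu = 0.5, 0.1\n\ndef grad_g(x):\n    return A.T @ (A @ x - b)\n\ndef theta(x):\n    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x)) - mu * np.linalg.norm(x)\n\ndef subgrad_q(x):\n    nrm = np.linalg.norm(x)\n    return mu * x \/ nrm if nrm > 0 else np.zeros_like(x)\n\ndef prox_p(v, t):        # prox of t*lam*||.||_1 (soft-thresholding)\n    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)\n\nt = 1.0 \/ np.linalg.norm(A.T @ A, 2)   # so that (1\/t)*I majorizes the Hessian of g\nx = np.zeros(20)\nfor k in range(200):\n    xi = subgrad_q(x)                  # xi^k in the subdifferential of q at x^k\n    x_new = prox_p(x - t * (grad_g(x) - xi), t)  # majorized proximal step\n    if np.linalg.norm(x_new - x) < 1e-8:\n        break\n    x = x_new\nprint(theta(x))  # theta(x^k) is non-increasing along the iterates\n\\end{verbatim}\n\n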
To prove Theorem \\ref{thm: MMconvergence}, \nwe first introduce the following lemma.\t\n\\begin{lemma}\\label{lemma:decrease}\n\tSuppose that $\\{ {x}^k \\}$ is the sequence generated by Algorithm 3.\n\tThen $\\theta({x}^{k+1})\\le \\theta({x}^k) - \\frac{1}{2}\\|{x}^{k+1} - {x}^k\\|^2_{\\mathcal{G} + 2\\mathcal{T}}$.\n\\end{lemma}\n\\begin{proof}\n~For any $k\\geq 0$, by the optimality condition of problem (10) at\n\t${x}^{k+1}$, we know that \n\tthere exists $\\eta^{k+1}\\in \\partial p({x}^{k+1})$ such that\n\t\\begin{equation*}\\label{eq:major-k-optimality}\n\t0 = \\nabla g ({x}^k) + (\\mathcal{G} + \\mathcal{T})({x}^{k+1} -\n\tx^{k}) + \\eta^{k+1}-\\xi^k.\n\t\\end{equation*}\n\tThen for any $k\\ge 0$, we deduce\n\t\\begin{equation*}\n\t\\begin{array}{rl}\n\t& \\theta({x}^{k+1}) - \\theta({x}^k)\n\t\\le \\widehat{\\theta}({x}^{k+1};{x}^k) - \\theta({x}^k)\\\\[0.1in]\n\t= & p(x^{k+1}) - p(x^k) + \\langle {x}^{k+1} - {x}^k , \\nabla g({x}^k)-\\xi^k \\rangle +\n\t\\frac{1}{2} \\|{x}^{k+1} - {x}^k\\|^2_{\\mathcal{G}} \\\\[0.1in]\n\t\\le & \\langle \\nabla g(x^k)+\\eta^{k+1} -\\xi^k, {x}^{k+1} - {x}^k\\rangle+\n\t\\frac{1}{2} \\|{x}^{k+1} - {x}^k\\|^2_{\\mathcal{G}}\n\t\\\\[0.1in]\n\t= & - \\frac{1}{2}\\|x^{k+1} - x^k\\|^2_{\\mathcal{G} + 2\\mathcal{T}},\n\t\\end{array}\n\t\\end{equation*}\n\twhere the second inequality uses the convexity of $p$, namely $p(x^{k+1}) - p(x^k) \\le \\langle \\eta^{k+1}, x^{k+1} - x^k\\rangle$, and the last equality uses the optimality condition above. This completes the proof of this lemma.\n\\end{proof}\n\nNow we are ready to prove Theorem \\ref{thm: MMconvergence}.\n\\begin{proof}\n\t~From the optimality condition at $x^{k+1}$, we have that \n\t\\[ 0 \\in \\nabla g ({x}^k) + (\\mathcal{G} + \\mathcal{T})({x}^{k+1} -\n\tx^{k}) + \\partial p(x^{k+1})-\\xi^k.\\]\n\tIf $x^{k+1} = x^k$, this implies that \n\t\\[ 0 \\in \\nabla g ({x}^k) + \\partial p(x^{k})- \\partial q(x^k), \\]\n\ti.e., $x^k$ is a critical point.\n\tObserve that the sequence $\\{\\theta (x^{k})\\}$ is non-increasing since\n\t$${\\theta}(x^{k+1}) \\le \\widehat{\\theta}(x^{k+1}; x^{k}) \\le \\widehat{\\theta}(x^{k}; x^{k}) =\\theta(x^{k}), \\quad k\\geq 0.$$\n\tSuppose that there exists a subsequence $\\{x^{k_j}\\}$ converging to $\\bar{x}$, one of the accumulation points of $\\{x^k\\}$.\n\tBy Lemma \\ref{lemma:decrease} and the assumption that $\\mathcal{G} + 2\\mathcal{T}\\succeq 0$, we know that for all $x\\in \\mathbb{X}$,\n\t\\begin{align*}\n\t&\\widehat{\\theta}(x^{k_{j+1}};x^{k_{j+1}}) = \\theta(x^{k_{j+1}}) \\\\\n\t\\le &\\theta(x^{k_j+1})\\le \\widehat{\\theta}(x^{k_j+1};x^{k_j})\\le \\widehat{\\theta}(x;x^{k_j}).\n\t\\end{align*}\n\tBy letting $j\\to\\infty$ in the above inequality, we obtain that\n\t$$\n\t\\widehat{\\theta}(\\bar{x};\\bar{x})\\le \\widehat{\\theta}(x;\\bar{x}).\n\t$$\n\tSince $\\bar{x}$ minimizes $\\widehat{\\theta}(\\,\\cdot\\,; \\bar{x})$, by the optimality condition \n\tthere exist $\\bar{u}\\in \\partial p(\\bar{x})$ and $\\bar{v}\\in \\partial q(\\bar{x})$ such that\n\t$$\n\t0 \\in \\nabla g(\\bar{x}) + \\bar{u} - \\bar{v}. 
\n\t$$\n\tThis implies that $\\left(\\nabla g(\\bar{x}) + \\partial p(\\bar{x}) \\right)\\cap \\partial q(\\bar{x})\\neq \\emptyset$.\n\tTo establish the rest of this proposition, \n\twe obtain from Lemma 1 that\n\t\\begin{align*}\n\t&\\lim_{t \\to + \\infty}\\frac{1}{2} \\sum_{i=0}^t \\|{x}^{k+1}-{x}^k\\|^2_{\\mathcal{G}+2\\mathcal{T}} \\\\\n\t\\le {}&\\liminf_{t\\to +\\infty} \\big( \\theta(x^0)\n\t-\\theta({x}^{k+1})\\big) \\le \\theta(x^0) < +\\infty \\,,\n\t\\end{align*}\n\twhich implies $ \\lim_{i\\to +\\infty} \\|{x}^{k+1} -\n\tx^{i}\\|_{\\mathcal{G}+2\\mathcal{T}} = 0.$ The proof of this theorem is thus complete by the positive definiteness of the operator $\\mathcal{G} + 2\\mathcal{T}$.\n\\end{proof}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsubsection*{#1}}\n\n\\newcommand{S}{S}\n\\newcommand{s}{s}\n\\newcommand{\\pv}[1]{\\mathbf{#1}}\n\\newcommand{\\mathrm{t}}{\\mathrm{t}}\n\\newcommand{z_\\mr{diff}}{z_\\mr{diff}}\n\\newcommand{z_\\mr{same}}{z_\\mr{same}}\n\n\\newcommand{\\medskip}{\\medskip}\n\n\\newcommand{\\propertygroup}[2]{\\paragraph*{#1.~#2}}\n\\newcommand{\\property}[1]{\\emph{#1:}\\ }\n\n\\newcommand{\\dtxt}[1]{\\medskip\\begin{quote}%\n\\setlength{\\fboxsep}{2mm}%\n\\fbox{\\parbox{0.82\\textwidth}{\\sl #1}}%\n\\end{quote}\\medskip}\n\n\n\\newcommand{\\magn}[1]{|#1|}\n\\newcommand{\\pd}[1]{\\widetilde{#1}}\n\n\\newcommand{\\smp}[1]{\\Delta_{#1}}\n\\newcommand{\\ismp}[1]{\\Delta_{#1}^\\circ}\n\\newcommand{\\rstr}[2]{#1|_{#2}}\n\\newcommand{\\mr{supp}}{\\mr{supp}}\n\n\\newcommand{\\mr{c}}{\\mr{c}}\n\\newcommand{\\mr{d}}{\\mr{d}}\n\n\\newcommand{\\Dmax}[1]{D_\\mr{max}(#1)}\n\\newcommand{\\ptl}[1]{\\frac{\\partial}{\\partial #1}}\n\\newcommand{\\indprop}[1]{\\textbf{\\textsf{Q}}${}_{#1}$}\n\n\\newcommand{\\passage}[1]{\\paragraph{\\textit{#1}}}\n\n\\newcommand{\\lbl}[1]{\\label{#1}}\n\\renewcommand{\\iff}{\\Leftrightarrow}\n\\newcommand{\\trianglelefteq}{\\trianglelefteq}\n\\newcommand{\\approx}{\\approx}\n\\newcommand{\\simeq}{\\simeq}\n\\newcommand{\\um}[2]{U_{#1}^{#2}}\n\\newcommand{\\ip}[2]{\\langle #1, #2 \\rangle}\n\n\n\\section{Statement of the problem}\n\\lbl{sec:statement}\n\n\\passage{Basic definitions} Fix an integer $n \\geq 1$ throughout. A\n\\demph{similarity matrix} is an $n\\times n$ symmetric matrix $Z$ with entries\nin the interval $[0, 1]$, such that $Z_{ii} = 1$ for all $i$. A\n\\demph{(probability) distribution} is an $n$-tuple $\\pv{p} = (p_1, \\ldots,\np_n)$ with $p_i \\geq 0$ for all $i$ and $\\sum_{i = 1}^n p_i = 1$.\n\nGiven a similarity matrix $Z$ and a distribution $\\pv{p}$, thought of as a\ncolumn vector, we may form the matrix product $Z\\pv{p}$ (also a column\nvector), and we denote by $(Z\\pv{p})_i$ its $i$th entry.\n\n\\begin{lemma} \\lbl{lemma:positive}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a distribution. Then $p_i \\leq\n(Z\\pv{p})_i \\leq 1$ for all $i \\in \\{1, \\ldots, n\\}$. In particular, if $p_i\n> 0$ then $(Z\\pv{p})_i > 0$.\n\\end{lemma}\n\n\\begin{proof}\nWe have\n\\[\n(Z\\pv{p})_i \n=\n\\sum_{j = 1}^n Z_{ij} p_j\n=\np_i + \\sum_{j \\neq i} Z_{ij} p_j\n\\geq\np_i\n\\]\nand $(Z \\pv{p})_i \\leq \\sum_{j = 1}^n 1 p_j = 1$.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nLet $Z$ be a similarity matrix and let $q \\in [0, \\infty)$. 
The function\n$H_q^Z$ is defined on distributions $\\pv{p}$ by\n\\[\nH^Z_q(\\pv{p})\n=\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\frac{1}{q - 1} \n\\left( 1 - \\sum_{i:\\,p_i > 0} p_i (Z\\pv{p})_i^{q - 1} \\right) &\n\\textrm{if } q \\neq 1 \\\\\n\\displaystyle\n- \\sum_{i:\\,p_i > 0} p_i \\log (Z\\pv{p})_i &\n\\textrm{if } q = 1.\n\\end{array}\n\\right.\n\\]\nLemma~\\ref{lemma:positive} guarantees that these definitions are valid. The\ndefinition in the case $q = 1$ is explained by the fact that $H_1^Z(\\pv{p}) =\n\\lim_{q\\to 1} H_q^Z(\\pv{p})$ (easily shown using l'H\\^opital's rule). We\ncall $H_q^Z$ the \\demph{entropy of order $q$}.\n\n\\passage{Notes on the literature} The entropies $H_q^Z$ were introduced in\nthis generality by Ricotta and Szeidl in 2006, as an index of the diversity of\nan ecological community~\\cite{RS}. Think of $n$ as the number of species,\n$Z_{ij}$ as indicating the similarity of the $i$th and $j$th species, and\n$p_i$ as the relative abundance of the $i$th species. Ricotta and Szeidl used\nnot similarities $Z_{ij}$ but dissimilarities or `distances' $d_{ij}$; the\nformulas above become equivalent to theirs on putting $Z_{ij} = 1 - d_{ij}$.\n\nThe case $Z = I$ goes back further. Something very similar to $H_q^I$, using\nlogarithms to base $2$ rather than base $e$, appeared in information theory in\n1967, in a paper of Havrda and Charv\\'at~\\cite{HC}. Later, the entropies\n$H_q^I$ were discovered in statistical ecology, in a 1982 paper of Patil and\nTaillie~\\cite{PT}. Finally, they were rediscovered in physics, in a 1988\npaper of Tsallis~\\cite{Tsa}.\n\nStill in the case $Z = I$, certain values of $q$ give famous quantities. The\nentropy $H_1^I$ is Shannon entropy (except that Shannon used logarithms to\nbase~$2$). The entropy $H_2^I$ is known in ecology as the Simpson or\nGini--Simpson index; it is the probability that two individuals chosen at\nrandom are of different species.\n\nFor general $Z$, the entropy of order $2$ is known as Rao's quadratic\nentropy~\\cite{Rao}. It is usually stated in terms of the matrix with $(i,\nj)$-entry $1 - Z_{ij}$, that is, the matrix of dissimilarities mentioned\nabove. \n\nOne way to obtain a similarity matrix is to start with a finite metric space\n$\\{a_1, \\ldots, a_n\\}$ and put $Z_{ij} = e^{-d(a_i, a_j)}$. Matrices of this\nkind are investigated in depth in~\\cite{MMS} and other papers cited therein.\nHere, metric spaces will only appear in two examples~(\\ref{eg:ultra}\nand~\\ref{eg:metric}).\n\n\\passage{The maximum entropy problem} Let $Z$ be a similarity matrix and\nlet $q \\in [0, \\infty)$. The maximum entropy problem is this:\n\\dtxt{%\nFor which distribution(s) $\\pv{p}$ is\n$H_q^Z(\\pv{p})$ maximal, and what is the maximum value?}\n\nThe solution is given in Theorem~\\ref{thm:main-entropy}. The terms used in\nthe statement of the theorem will be defined shortly. However, the following\nstriking fact can be stated immediately:\n\\dtxt{%\nThere is a distribution maximizing $H_q^Z$ for all\n$q$ simultaneously.}\nSo even though the entropies of different orders rank distributions\ndifferently, there is a distribution that is maximal for all of them.\n\nFor example, this fully explains the numerical coincidence noted in\nthe Results section of~\\cite{AKB}.\n\n\\passage{Restatement in terms of diversity}\nLet $Z$ be a similarity matrix. 
For each $q \\in [0, \\infty)$, define a\nfunction $D_q^Z$ on distributions $\\pv{p}$ by\n\\begin{eqnarray*}\nD_q^Z(\\pv{p}) &\n= &\n\\left\\{\n\\begin{array}{ll}\n\\left( 1 - (q - 1)H_q^Z(\\pv{p}) \\right)^\\frac{1}{1 - q} &\n\\textrm{if } q \\neq 1 \\\\\ne^{H_1^Z(\\pv{p})} &\n\\textrm{if }q = 1\n\\end{array}\n\\right. \\\\\n &= &\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\left(\n\\sum_{i:\\,p_i > 0}\np_i (Z\\pv{p})_i^{q - 1}\n\\right)^\\frac{1}{1 - q} &\n\\textrm{if } q \\neq 1 \\\\\n\\displaystyle\n\\prod_{i:\\,p_i > 0}\n(Z\\pv{p})_i^{-p_i} &\n\\textrm{if } q = 1.\n\\end{array}\n\\right.\n\\end{eqnarray*}\nWe call $D_q^Z$ the \\demph{diversity of order $q$}. As for entropy, it is\neasily shown that $D_1^Z(\\pv{p}) = \\lim_{q \\to 1} D_q^Z(\\pv{p})$.\n\nThese diversities were introduced informally in~\\cite{EDC2}, and are explained\nand developed in~\\cite{MD}. The case $Z = I$ is well known in several fields:\nin information theory, $\\log D_q^I$ is called the R\\'enyi entropy of order\n$q$~\\cite{Ren}; in ecology, $D_q^I$ is called the Hill number of order\n$q$~\\cite{Hill}; and in economics, $1\/D_q^I$ is the Hannah--Kay measure of\nconcentration~\\cite{HK}.\n\nThe transformation between $H_q^Z$ and $D_q^Z$ is invertible and\norder-preserving (increasing). Hence the maximum entropy problem is\nequivalent to the maximum diversity problem:\n\\dtxt{%\nFor which distribution(s) $\\pv{p}$ is\n$D_q^Z(\\pv{p})$ maximal, and what is the maximum value?}\nThe solution is given in Theorem~\\ref{thm:main}. It will be more convenient\nmathematically to work with diversity rather than entropy. Thus, we prove\nresults about diversity and deduce results about entropy.\n\nWhen stated in terms of diversity, a further striking aspect of the solution\nbecomes apparent:\n\\dtxt{%\nThere is a distribution maximizing $D_q^Z$ for all\n$q$ simultaneously. The maximum value of $D_q^Z$ is the same for all $q$.}\nSo every similarity matrix has an unambiguous `maximum diversity', the\nmaximum value of $D_q^Z$ for any $q$.\n\nA similarity matrix may have more than one maximizing distribution---but the\ncollection of maximizing distributions is independent of $q > 0$. In other\nwords, a distribution that maximizes $D_q^Z$ for some $q$ actually maximizes\n$D_q^Z$ for all $q$ (Corollary~\\ref{cor:some-all}).\n\nThe diversities $D_q^Z$ are closely related to generalized means~\\cite{HLP},\nalso called power means. Given a finite set $I$, positive real numbers\n$(x_i)_{i \\in I}$, positive real numbers $(p_i)_{i \\in I}$ such that $\\sum_i\np_i = 1$, and $t \\in \\mathbb{R}$, the \\demph{generalized mean} of $(x_i)_{i \\in\nI}$, weighted by $(p_i)_{i \\in I}$, of order $t$, is\n\\[\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\left( \\sum_{i \\in I} p_i x_i^t \\right)^{1\/t} &\n\\textrm{if } t \\neq 0 \\\\\n\\displaystyle\n\\prod_{i \\in I} x_i^{p_i} &\n\\textrm{if } t = 0.\n\\end{array}\n\\right.\n\\]\nFor example, if $p_i = p_j$ for all $i, j \\in I$ then the generalized means of\norders $1$, $0$ and $-1$ are the arithmetic, geometric and harmonic means,\nrespectively. \n\nGiven a similarity matrix $Z$ and a distribution $\\pv{p}$, take $I = \\{ i\n\\in \\{1, \\ldots, n\\} \\:|\\: p_i > 0\\}$. Then $1\/D_q^Z(\\pv{p})$ is the\ngeneralized mean of $((Z\\pv{p})_i)_{i \\in I}$, weighted by $(p_i)_{i \\in I}$,\nof order $q - 1$. We deduce the following.\n\n\\begin{lemma} \\label{lemma:means}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a distribution. 
Then:\n\\begin{enumerate}\n\\item \\label{part:cts}\n$D_q^Z(\\pv{p})$ is continuous in $q \\in [0, \\infty)$\n\\item \\label{part:dec}\nif $(Z\\pv{p})_i =\n(Z\\pv{p})_j$ for all $i, j$ such that $p_i, p_j > 0$ then $D_q^Z(\\pv{p})$ is\nconstant over $q \\in [0, \\infty)$; otherwise, $D_q^Z(\\pv{p})$ is strictly\ndecreasing in $q \\in [0, \\infty)$\n\\item \\label{part:infty} \n$\\lim_{q \\to \\infty} D_q^Z(\\pv{p}) = 1\/\\max_{i: p_i > 0} (Z\\pv{p})_i$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nAll of these assertions follow from standard results on generalized means.\nContinuity is clear except perhaps at $q = 1$, where it follows from Theorem~3\nof~\\cite{HLP}. Part~(\\ref{part:dec}) follows from Theorem~16 of~\\cite{HLP},\nand part~(\\ref{part:infty}) from Theorem~4. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nIn the light of this, we define\n\\[\nD_\\infty^Z(\\pv{p}) \n= \n1 \/ \\max_{i:\\,p_i > 0} (Z\\pv{p})_i.\n\\]\nThere is no useful definition of $H_\\infty^Z$, since $\\lim_{q \\to \\infty}\nH_q^Z(\\pv{p}) = 0$ for all $Z$ and $\\pv{p}$.\n\n\n\n\\section{Preparatory results}\n\\lbl{sec:prep}\n\nHere we make some definitions and prove some lemmas in preparation for solving\nthe maximum diversity and entropy problems. Some of these definitions and\nlemmas can also be found in~\\cite{MMS} and~\\cite{AMSES}.\n\n\\emph{Convention:} for the rest of this work, unlabelled summations $\\sum$ are\nunderstood to be over all $i \\in \\{1, \\ldots, n\\}$ such that $p_i > 0$.\n\n\n\\passage{Weightings and magnitude}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix. A \\demph{weighting} on $Z$ is a column vector\n$\\pv{w} \\in \\mathbb{R}^n$ such that\n\\[\nZ\\pv{w} \n= \n\\left(\\!\\!\n\\begin{array}{c}\n1 \\\\\n\\vdots \\\\\n1\n\\end{array}\n\\!\\!\\right).\n\\]\nA weighting $\\pv{w}$ is \\demph{non-negative} if $w_i \\geq 0$ for all\n$i$, and \\demph{positive} if $w_i > 0$ for all $i$. \n\\end{defn}\n\n\\begin{lemma}\nLet $\\pv{w}$ and $\\pv{x}$ be weightings on $Z$. Then $\\sum_{i = 1}^n w_i =\n\\sum_{i = 1}^n x_i$.\n\\end{lemma}\n\n\n\\begin{proof} Write $\\pv{u}$ for the column vector $(1\\ \\cdots\\ 1)^\\mathrm{t}$,\nwhere $(\\emptybk)^\\mathrm{t}$ means transpose. Then \n\\[\n\\sum_{i = 1}^n w_i\n=\n\\pv{u}^\\mathrm{t} \\pv{w}\n=\n(Z\\pv{x})^\\mathrm{t} \\pv{w}\n=\n\\pv{x}^\\mathrm{t} (Z \\pv{w})\n=\n\\pv{x}^\\mathrm{t} \\pv{u}\n=\n\\sum_{i = 1}^n x_i,\n\\]\nusing symmetry of $Z$.\n\\hfill\\ensuremath{\\Box} \n\\end{proof}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix on which there exists at least one weighting.\nIts \\demph{magnitude} is $\\magn{Z} = \\sum_{i = 1}^n w_i$, for any weighting\n$\\pv{w}$ on $Z$. \n\\end{defn}\n\nFor example, if $Z$ is invertible then there is a unique weighting $\\pv{w}$ on\n$Z$, and $w_i$ is the sum of the $i$th row of $Z^{-1}$. So then\n\\[\n\\magn{Z} = \\sum_{i, j = 1}^n (Z^{-1})_{ij},\n\\]\nthe sum of all $n^2$ entries of $Z^{-1}$. This formula also appears\nin~\\cite{SP}, \\cite{Shi} and~\\cite{POP}, for closely related reasons to do\nwith diversity and its maximization.\n\n\n\\passage{Weight distributions}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix. A \\demph{weight distribution} for $Z$ is a\ndistribution $\\pv{p}$ such that $(Z\\pv{p})_1 = \\cdots = (Z\\pv{p})_n$.\n\\end{defn}\n\n\\begin{lemma} \\lbl{lemma:wt-distribs}\nLet $Z$ be a similarity matrix. 
\n\\begin{enumerate}\n\\item If $Z$ admits a non-negative weighting then $\\magn{Z} > 0$.\n\\item If $\\pv{w}$ is a non-negative weighting on $Z$ then $\\pv{w}\/\\magn{Z}$ is\na weight distribution for $Z$, and this defines a one-to-one correspondence\nbetween non-negative weightings and weight distributions.\n\\item \nIf $Z$ admits a weight distribution then $Z$ admits a weighting and $\\magn{Z}\n> 0$. \n\\item \\lbl{part:wt-distribs-reciprocal}\nIf $\\pv{p}$ is a weight distribution for $Z$ then $(Z\\pv{p})_i =\n1\/\\magn{Z}$ for all $i$.\n\\end{enumerate}\n\\end{lemma}\n\n\n\\begin{proof}\n\\begin{enumerate}\n\\item Let $\\pv{w}$ be a non-negative weighting. Certainly $\\magn{Z} =\n\\sum_{i = 1}^n w_i \\geq 0$. Since we are assuming that $n \\geq 1$, the vector\n$\\pv{0}$ is not a weighting, so $w_i > 0$ for some $i$. Hence $\\magn{Z} > 0$.\n\n\\item The first part is clear. To see that this defines a one-to-one\ncorrespondence, take a weight distribution $\\pv{p}$, writing $(Z\\pv{p})_i = K$\nfor all $i$. Since $\\sum p_i = 1$, we have $p_i > 0$ for some $i$, and then\n$K = (Z\\pv{p})_i > 0$ by Lemma~\\ref{lemma:positive}. The vector $\\pv{w} =\n\\pv{p}\/K$ is then a non-negative weighting.\n\nThe two processes---passing from a non-negative weighting to a weight\ndistribution, and vice versa---are easily shown to be mutually inverse.\n\n\\item Follows from the previous parts.\n\n\\item Follows from the previous parts.\n\\hfill\\ensuremath{\\Box}\n\\end{enumerate}\n\\end{proof}\n\nThe first connection between magnitude and diversity is this:\n\\begin{lemma} \\lbl{lemma:mag-div}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a weight distribution for $Z$.\nThen $D_q^Z(\\pv{p}) = \\magn{Z}$ for all $q \\in [0, \\infty]$.\n\\end{lemma}\n\n\\begin{proof}\nBy continuity, it is enough to prove this for $q \\neq 1, \\infty$. In that\ncase, using Lemma~\\ref{lemma:wt-distribs}(\\ref{part:wt-distribs-reciprocal}),\n\\[\nD_q^Z(\\pv{p})\n=\n\\left(\n\\sum p_i (Z\\pv{p})_i^{q - 1}\n\\right)^\\frac{1}{1 - q}\n=\n\\left(\n\\sum p_i \\magn{Z}^{1 - q}\n\\right)^\\frac{1}{1 - q}\n=\n\\magn{Z},\n\\]\nas required.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\passage{Invariant distributions}\n\n\\begin{defn}\nLet $Z$ be a similarity matrix. A distribution $\\pv{p}$ is \\demph{invariant}\nif $D_q^Z(\\pv{p}) = D_{q'}^Z(\\pv{p})$ for all $q, q' \\in [0, \\infty]$.\n\\end{defn}\n\n\nSoon we will classify the invariant distributions. To do so, we need some\nmore notation and a lemma.\n\nGiven a similarity matrix $Z$ and a subset $B \\subseteq \\{1, \\ldots, n\\}$, let\n$Z_B$ be the matrix $Z$ restricted to $B$, so that $(Z_B)_{ij} = Z_{ij}$ ($i,\nj \\in B$). If $B$ has $m$ elements then $Z_B$ is an $m \\times m$ matrix, but\nit will be more convenient to index the rows and columns of $Z_B$ by the\nelements of $B$ themselves than by $1, \\ldots, m$.\n\nWe will also need to consider distributions on subsets of $\\{1, \\ldots, n\\}$.\nA distribution on $B \\subseteq \\{1, \\ldots, n\\}$ is said to be invariant, a weight\ndistribution, etc., if it is invariant, a weight distribution, etc., with\nrespect to $Z_B$. Similarly, we will sometimes speak of `weightings on $B$',\nmeaning weightings on $Z_B$. Distributions are understood to be on $\\{1,\n\\ldots, n\\}$ unless specified otherwise.\n\n\\begin{lemma} \\lbl{lemma:absent}\nLet $Z$ be a similarity matrix, let $B \\subseteq \\{1, \\ldots, n\\}$, and let\n$\\pv{r}$ be a distribution on $B$. 
Write $\\pv{p}$ for the distribution\nobtained by extending $\\pv{r}$ by zero. Then $D_q^{Z_B}(\\pv{r}) =\nD_q^Z(\\pv{p})$ for all $q \\in [0, \\infty]$. In particular, $\\pv{r}$ is\ninvariant if and only if $\\pv{p}$ is.\n\\end{lemma}\n\n\\begin{proof}\nFor $i \\in B$ we have $r_i = p_i$ and $(Z_B \\pv{r})_i = (Z\\pv{p})_i$. The\nresult follows immediately from the definition of diversity of order $q$.\n\\hfill\\ensuremath{\\Box} \n\\end{proof}\n\nBy Lemma~\\ref{lemma:mag-div}, any weight distribution is invariant, and by\nLemma~\\ref{lemma:absent}, any extension by zero of a weight distribution is\nalso invariant. We will prove that these are all the invariant distributions\nthere are.\n\nFor a distribution $\\pv{p}$ we write $\\mr{supp}(\\pv{p}) = \\{ i \\in \\{1, \\ldots,\nn\\} \\:|\\: p_i > 0\\}$, the \\demph{support} of $\\pv{p}$.\n\nLet $Z$ be a similarity matrix. Given $\\emptyset \\neq B \\subseteq \\{1, \\ldots,\nn\\}$ and a non-negative weighting $\\pv{w}$ on $Z_B$, let $\\pd{\\pv{w}}$ be the\ndistribution obtained by first taking the weight distribution\n$\\pv{w}\/\\magn{Z_B}$ on $B$, then extending by zero to $\\{1, \\ldots, n\\}$.\n\n\n\\begin{propn} \\lbl{propn:invt}\nLet $Z$ be a similarity matrix and $\\pv{p}$ a distribution. The following\nare equivalent:\n\\begin{enumerate}\n\\item \\lbl{part:invt-invt}\n$\\pv{p}$ is invariant\n\\item \\lbl{part:invt-support}\n$(Z\\pv{p})_i = (Z\\pv{p})_j$ for all $i, j \\in \\mr{supp}(\\pv{p})$\n\\item \\lbl{part:invt-wt-dist}\n$\\pv{p}$ is the extension by zero of a weight distribution on a nonempty\nsubset of $\\{1, \\ldots, n\\}$ \n\\item \\lbl{part:invt-weighting}\n$\\pv{p} = \\pd{\\pv{w}}$ for some non-negative weighting $\\pv{w}$ on some\nnonempty subset of $\\{1, \\ldots, n\\}$.\n\\end{enumerate}\n\\end{propn}\n\n\\begin{proof}\n(\\ref{part:invt-invt}$\\,\\Rightarrow\\,$\\ref{part:invt-support}): Follows from\nLemma~\\ref{lemma:means}. \n\n(\\ref{part:invt-support}$\\,\\Rightarrow\\,$\\ref{part:invt-wt-dist}): Suppose\nthat~(\\ref{part:invt-support}) holds, and write $B = \\mr{supp}(\\pv{p})$. The\ndistribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ restricts to a distribution\n$\\pv{r}$ on $B$. This $\\pv{r}$ is a weight distribution on\n$B$, since for all $i \\in B$ we have $(Z_B \\pv{r})_i = (Z\\pv{p})_i$,\nwhich by~(\\ref{part:invt-support}) is constant over $i \\in B$. Clearly\n$\\pv{p}$ is the extension by zero of $\\pv{r}$.\n\n(\\ref{part:invt-wt-dist}$\\,\\Rightarrow\\,$\\ref{part:invt-invt}): Suppose that $\\pv{p}$\nis the extension by zero of a weight distribution $\\pv{r}$ on a\nnonempty subset $B \\subseteq \\{1, \\ldots, n\\}$. Then for all $q \\in [0, \\infty]$,\n\\[\nD_q^Z(\\pv{p})\n=\nD_q^{Z_B}(\\pv{r})\n=\n\\magn{Z_B}\n\\]\nby Lemmas~\\ref{lemma:absent} and~\\ref{lemma:mag-div} respectively; hence\n$\\pv{p}$ is invariant.\n\n(\\ref{part:invt-wt-dist}$\\iff$\\ref{part:invt-weighting}): Follows from\nLemma~\\ref{lemma:wt-distribs}.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThere is at least one invariant distribution on any given similarity matrix.\nFor we may choose $B$ to be a one-element subset, which has a unique\nnon-negative weighting $\\pv{w} = (1)$, and this gives the invariant\ndistribution $\\pd{\\pv{w}} = (0, \\ldots, 0, 1, 0, \\ldots, 0)$.\n\n\n\\passage{Maximizing distributions}\n\n\n\\begin{defn} \\lbl{defn:max}\nLet $Z$ be a similarity matrix. Given $q \\in [0,\n\\infty]$, a distribution $\\pv{p}$ is \\demph{$q$-maximizing} if $D_q^Z(\\pv{p})\n\\geq D_q^Z(\\pv{p}')$ for all distributions $\\pv{p}'$. 
A distribution is\n\\demph{maximizing} if it is $q$-maximizing for all $q \\in [0, \\infty]$.\n\\end{defn}\n\nIt makes no difference to the definition of `maximizing' if we omit $q =\n\\infty$; nor does it make a difference to either definition if we replace\ndiversity $D_q^Z$ by entropy $H_q^Z$.\n\nWe will eventually show that every similarity matrix has a\nmaximizing distribution.\n\n\\begin{lemma} \\lbl{lemma:max}\nLet $Z$ be a similarity matrix and $\\pv{p}$ an invariant distribution. Then\n$\\pv{p}$ is $0$-maximizing if and only if it is maximizing.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that $\\pv{p}$ is $0$-maximizing. Then for all $q \\in [0, \\infty]$ and\nall distributions $\\pv{p}'$, \n\\[\nD_q^Z(\\pv{p}) \n=\nD_0^Z(\\pv{p})\n\\geq\nD_0^Z(\\pv{p}')\n\\geq\nD_q^Z(\\pv{p}'),\n\\]\nusing invariance in the first step and Lemma~\\ref{lemma:means} in the\nlast. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{lemma} \\lbl{lemma:smaller-max}\nLet $Z$ be a similarity matrix and $B \\subseteq \\{1, \\ldots, n\\}$. Suppose that\n$\\sup_{\\pv{r}} D_0^{Z_B}(\\pv{r}) \\geq \\sup_{\\pv{p}} D_0^Z(\\pv{p})$, where the\nfirst supremum is over distributions $\\pv{r}$ on $B$ and the second is over\ndistributions $\\pv{p}$ on $\\{1, \\ldots, n\\}$. Suppose also that $Z_B$ admits\nan invariant maximizing distribution. Then so does $Z$.\n\\end{lemma}\n(In fact, $\\sup_{\\pv{r}} D_0^{Z_B}(\\pv{r}) \\leq \\sup_{\\pv{p}} D_0^Z(\\pv{p})$\nin any case, by Lemma~\\ref{lemma:absent}. So the `$\\geq$' in the statement\nof the present lemma could equivalently be replaced by `$=$'.)\n\n\\begin{proof}\nLet $\\pv{r}$ be an invariant maximizing distribution on $Z_B$. Define a\ndistribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ by extending $\\pv{r}$ by zero. By\nLemma~\\ref{lemma:absent}, $\\pv{p}$ is invariant. Using\nLemma~\\ref{lemma:absent} again,\n\\[\nD_0^Z(\\pv{p})\n=\nD_0^{Z_B}(\\pv{r})\n=\n\\sup_{\\pv{r}'} D_0^{Z_B}(\\pv{r}')\n\\geq\n\\sup_{\\pv{p}'} D_0^Z(\\pv{p}'),\n\\]\nso $\\pv{p}$ is $0$-maximizing. Then by Lemma~\\ref{lemma:max}, $\\pv{p}$ is\nmaximizing. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\passage{Decomposition}\n\nLet $Z$ be a similarity matrix. Subsets $B$ and $B'$ of $\\{1, \\ldots, n\\}$\nare \\demph{complementary} (for $Z$) if $B \\cup B' = \\{1, \\ldots, n\\}$, $B \\cap\nB' = \\emptyset$, and $Z_{i i'} = 0$ for all $i \\in B$ and $i' \\in B'$. For\nexample, there exist nonempty complementary subsets if $Z$ can be expressed as\na nontrivial block sum\n\\[\n\\left(\\!\\!\n\\begin{array}{cc}\nX &0 \\\\\n0 &X'\n\\end{array}\n\\!\\!\\right).\n\\]\n\nGiven a distribution $\\pv{p}$ and a subset $B \\subseteq \\{1, \\ldots, n\\}$ such that\n$p_i > 0$ for some $i \\in B$, let\n$\\rstr{\\pv{p}}{B}$ be the distribution on $B$ defined by \n\\[\n(\\rstr{\\pv{p}}{B})_i \n=\n\\frac{p_i}{\\sum_{j \\in B} p_j}.\n\\] \n\n\\begin{lemma} \\lbl{lemma:complementary-basic}\nLet $Z$ be a similarity matrix, and let $B$ and $B'$ be nonempty complementary\nsubsets of $\\{1, \\ldots, n\\}$. Then:\n\\begin{enumerate}\n\\item \\lbl{part:complementary-weightings}\nFor any weightings $\\pv{v}$ on $Z_B$ and $\\pv{v}'$ on $Z_{B'}$,\nthere is a weighting $\\pv{w}$ on $Z$ defined by\n\\[\nw_i\n=\n\\left\\{\n\\begin{array}{ll}\nv_i &\\textrm{if } i \\in B \\\\\nv'_i &\\textrm{if } i \\in B'.\n\\end{array}\n\\right. 
\n\\]\n\\item \\lbl{part:complementary-invt}\nFor any invariant distributions $\\pv{r}$ on $B$ and $\\pv{r}'$ on $B'$, there\nexists an invariant distribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ such that\n$\\rstr{\\pv{p}}{B} = \\pv{r}$ and $\\rstr{\\pv{p}}{B'} = \\pv{r}'$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n\\begin{enumerate}\n\\item For $i \\in B$, we have\n\\[\n(Z\\pv{w})_i\n=\n\\sum_{j \\in B} Z_{ij} v_j + \\sum_{j \\in B'} Z_{ij} v'_j\n=\n\\sum_{j \\in B} (Z_B)_{ij} v_j\n=\n(Z_B \\pv{v})_i \n=\n1.\n\\]\nSimilarly, $(Z\\pv{w})_i = 1$ for all $i \\in B'$. So $\\pv{w}$ is a weighting.\n\n\\item By Proposition~\\ref{propn:invt}, $\\pv{r} = \\pd{\\pv{v}}$ for some\nnon-negative weighting $\\pv{v}$ on some nonempty subset $C \\subseteq B$.\nSimilarly, $\\pv{r}' = \\pd{\\pv{v}'}$ for some non-negative weighting $\\pv{v}'$\non some nonempty $C' \\subseteq B'$. By~(\\ref{part:complementary-weightings}),\nthere is a non-negative weighting $\\pv{w}$ on the nonempty set $C \\cup C'$\ndefined by \n\\[\nw_i\n=\n\\left\\{\n\\begin{array}{ll}\nv_i &\\textrm{if } i \\in C \\\\\nv'_i &\\textrm{if } i \\in C'.\n\\end{array}\n\\right.\n\\]\nLet $\\pv{p} = \\pd{\\pv{w}}$, a distribution on $\\{1, \\ldots, n\\}$, which is\ninvariant by Proposition~\\ref{propn:invt}. For $i \\in C$ we have\n\\begin{eqnarray*}\n(\\rstr{\\pv{p}}{B})_i &= &\n\\frac{p_i}{\\sum_{j \\in B} p_j}\n=\n\\frac{w_i\/\\magn{Z_{C \\cup C'}}}%\n{\\sum_{j \\in C} w_j\/\\magn{Z_{C \\cup C'}}} \\\\\n &= &\n\\frac{w_i}{\\sum_{j \\in C} w_j}\n=\n\\frac{v_i}{\\sum_{j\\in C} v_j} \\\\\n &= &\nr_i.\n\\end{eqnarray*}\nFor $i \\in B\\setminus C$ we have\n\\[\n(\\rstr{\\pv{p}}{B})_i \n=\n\\frac{p_i}{\\sum_{j \\in B} p_j}\n=\n0\n=\nr_i.\n\\]\nHence $\\rstr{\\pv{p}}{B} = \\pv{r}$, and similarly $\\rstr{\\pv{p}}{B'} =\n\\pv{r}'$. \n\\hfill\\ensuremath{\\Box}\n\\end{enumerate}\n\\end{proof}\n\n\\begin{lemma} \\lbl{lemma:complementary-D0}\nLet $Z$ be a similarity matrix, let $B$\nand $B'$ be complementary subsets of $\\{1, \\ldots, n\\}$, and let $\\pv{p}$ be a\ndistribution on $\\{1, \\ldots, n\\}$ such that $p_i > 0$ for some $i \\in B$ and\n$p_i > 0$ for some $i \\in B'$. Then \n\\[\nD_0^Z(\\pv{p})\n=\nD_0^{Z_B}(\\rstr{\\pv{p}}{B}) + \nD_0^{Z_{B'}}(\\rstr{\\pv{p}}{B'}).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nBy definition,\n\\[\nD_0^Z(\\pv{p})\n=\n\\sum \\frac{p_i}{(Z\\pv{p})_i}\n=\n\\sum_{i \\in B:\\ p_i > 0} \\frac{p_i}{(Z\\pv{p})_i}\n+\n\\sum_{i \\in B':\\ p_i > 0} \\frac{p_i}{(Z\\pv{p})_i}.\n\\]\nNow for $i \\in B$, \n\\[\np_i \n=\n\\left( \\sum_{j \\in B} p_j \\right)\n\\left( \\rstr{\\pv{p}}{B} \\right)_i\n\\]\nby definition of $\\rstr{\\pv{p}}{B}$, and \n\\[\n(Z\\pv{p})_i \n=\n\\sum_{j \\in B} Z_{ij} p_j + \\sum_{j \\in B'} Z_{ij} p_j\n=\n\\sum_{j \\in B} (Z_B)_{ij} p_j\n=\n\\left( \\sum_{j \\in B} p_j \\right)\n(Z_B \\rstr{\\pv{p}}{B})_i.\n\\]\nSimilar equations hold for $B'$, so\n\\begin{eqnarray*}\nD_0^Z(\\pv{p}) &\n= &\n\\sum_{i \\in B:\\ p_i > 0} \n\\frac{(\\rstr{\\pv{p}}{B})_i}{(Z_B \\rstr{\\pv{p}}{B})_i}\n+\n\\sum_{i \\in B':\\ p_i > 0} \n\\frac{(\\rstr{\\pv{p}}{B'})_i}{(Z_{B'} \\rstr{\\pv{p}}{B'})_i} \\\\\n &= &\nD_0^{Z_B}(\\rstr{\\pv{p}}{B}) + \nD_0^{Z_{B'}}(\\rstr{\\pv{p}}{B'}),\n\\end{eqnarray*}\nas required.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\begin{propn} \\lbl{propn:comp-invt-max}\nLet $Z$ be a similarity matrix and let $B$ and $B'$ be nonempty complementary\nsubsets of $\\{1, \\ldots, n\\}$. Suppose that $Z_B$ and $Z_{B'}$ each admit an\ninvariant maximizing distribution. 
Then so does $Z$.\n\\end{propn}\n\n\\begin{proof}\nChoose invariant maximizing distributions $\\pv{r}$ on $B$ and $\\pv{r}'$ on\n$B'$. By\nLemma~\\ref{lemma:complementary-basic}(\\ref{part:complementary-invt}), there\nexists an invariant distribution $\\pv{p}$ on $\\{1, \\ldots, n\\}$ such that\n$\\rstr{\\pv{p}}{B} = \\pv{r}$ and $\\rstr{\\pv{p}}{B'} = \\pv{r}'$. I claim that\n$\\pv{p}$ is maximizing. Indeed, let $\\pv{s}$ be a distribution on $\\{1,\n\\ldots, n\\}$. If $s_i > 0$ for some $i \\in B$ and $s_i > 0$ for some $i \\in\nB'$ then\n\\[\nD_0^Z(\\pv{s})\n=\nD_0^{Z_B}(\\rstr{\\pv{s}}{B}) + D_0^{Z_{B'}}(\\rstr{\\pv{s}}{B'}) \n\\leq\nD_0^{Z_B}(\\pv{r}) + D_0^{Z_{B'}}(\\pv{r}')\n\\]\nby Lemma~\\ref{lemma:complementary-D0}. If not then without loss of\ngenerality, $s_i = 0$ for all $i \\in B'$; then $s_i > 0$ for some $i \\in B$,\nand\n\\[\nD_0^Z(\\pv{s})\n=\nD_0^{Z_B}(\\rstr{\\pv{s}}{B})\n\\leq\nD_0^{Z_B}(\\pv{r})\n\\leq \nD_0^{Z_B}(\\pv{r}) + D_0^{Z_{B'}}(\\pv{r}')\n\\]\nby Lemma~\\ref{lemma:absent}. So in any case we have\n\\begin{eqnarray*}\nD_0^Z(\\pv{s}) \n &\\leq &\nD_0^{Z_B}(\\pv{r}) + D_0^{Z_{B'}}(\\pv{r}') \\\\\n &= &\nD_0^{Z_B}(\\rstr{\\pv{p}}{B}) + D_0^{Z_{B'}}(\\rstr{\\pv{p}}{B'}) \\\\\n &= &\nD_0^Z(\\pv{p}),\n\\end{eqnarray*}\nusing Lemma~\\ref{lemma:complementary-D0} in the last step. Hence $\\pv{p}$ is\n$0$-maximizing, and by Lemma~\\ref{lemma:max}, $\\pv{p}$ is maximizing.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\passage{Positive definite similarity matrices}\nThe solution to the maximum diversity problem turns out to be simpler when the\nsimilarity matrix is positive definite and satisfies certain further\nconditions. Here are some preparatory results. They are not\nneeded for the proof of the main theorem~(\\ref{thm:main}) itself, but \nwill be used for the corollaries in Section~\\ref{sec:cors}.\n\n\n\\begin{lemma} \\lbl{lemma:pos-def-basic}\nLet $Z$ be a positive definite similarity matrix. Then $Z$ has a unique\nweighting and $\\magn{Z} > 0$.\n\\end{lemma}\n\n\\begin{proof}\nA positive definite matrix is invertible, so $Z$ has a unique weighting\n$\\pv{w}$. By the definitions of magnitude and weighting,\n\\[\n\\magn{Z}\n=\n\\sum_{i = 1}^n w_i\n=\n\\pv{w}^\\mathrm{t} Z \\pv{w}.\n\\]\nBut $n \\geq 1$, so $\\pv{0}$ is not a weighting, so $\\pv{w} \\neq \\pv{0}$; then\nsince $Z$ is positive definite, $\\pv{w}^\\mathrm{t} Z \\pv{w} > 0$.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\begin{lemma} \\lbl{lemma:pos-def-sup}\nLet $Z$ be a positive definite similarity matrix. Then \n\\[\n\\magn{Z}\n=\n\\sup_{\\pv{x}}\n\\frac{(\\sum_{i = 1}^n x_i)^2}{\\pv{x}^\\mathrm{t} Z \\pv{x}}\n\\]\nwhere the supremum is over all column vectors $\\pv{x} \\neq \\pv{0}$. The\npoints at which the supremum is attained are exactly the nonzero scalar\nmultiples of the unique weighting on $Z$.\n\\end{lemma}\n\n\\begin{proof}\nSince $Z$ is positive definite, there is an inner product\n$\\ip{-}{-}$ on $\\mathbb{R}^n$ defined by\n\\[\n\\ip{\\pv{x}}{\\pv{y}} = \\pv{x}^\\mathrm{t} Z \\pv{y}\n\\]\n($\\pv{x}, \\pv{y} \\in \\mathbb{R}^n$). The Cauchy--Schwarz inequality states that\nfor all $\\pv{x}, \\pv{y} \\in \\mathbb{R}^n$,\n\\[\n\\ip{\\pv{x}}{\\pv{x}} \\cdot \\ip{\\pv{y}}{\\pv{y}}\n\\geq\n\\ip{\\pv{x}}{\\pv{y}}^2\n\\]\nwith equality if and only if one of $\\pv{x}$ and $\\pv{y}$ is a scalar multiple\nof the other. Let $\\pv{y}$ be the unique weighting on $Z$. 
Then the\ninequality states that for all $\\pv{x} \\in \\mathbb{R}^n$,\n\\[\n\\pv{x}^\\mathrm{t} Z \\pv{x} \\cdot \\magn{Z}\n\\geq\n\\left(\\sum_{i = 1}^n x_i \\right)^2.\n\\]\nSince $\\pv{y} \\neq \\pv{0}$, equality holds if and only if $\\pv{x}$ is a scalar\nmultiple of $\\pv{y}$. The result follows. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\nA vector $\\pv{x}$ is \\demph{nowhere zero} if $x_i \\neq 0$ for all $i$.\n\n\\begin{propn} \\lbl{propn:pos-def-sub}\nLet $Z$ be a positive definite similarity matrix and $B \\subseteq \\{1, \\ldots,\nn\\}$. Then $Z_B$ is positive definite and $\\magn{Z_B} \\leq \\magn{Z}$.\nThe inequality is strict if $B$ is a proper subset and the unique weighting on\n$Z$ is nowhere zero.\n\\end{propn}\n\n\\begin{proof}\nSuppose without loss of generality that $B = \\{1, \\ldots, m\\}$, where $0 \\leq m\n\\leq n$. \nLet $\\pv{y}$ be an $m$-dimensional column vector and write \n\\[\n\\pv{x} \n=\n(y_1, \\ldots, y_m, 0, \\ldots, 0)^\\mathrm{t}.\n\\]\nThen\n\\begin{equation} \\label{eq:quad-form}\n\\pv{y}^\\mathrm{t} Z_B \\pv{y}\n=\n\\sum_{i, j = 1}^m y_i (Z_B)_{ij} y_j\n=\n\\sum_{i, j = 1}^n x_i Z_{ij} x_j\n= \n\\pv{x}^\\mathrm{t} Z \\pv{x}\n\\end{equation}\nand\n\\begin{equation} \\label{eq:square}\n\\left( \\sum_{i = 1}^m y_i \\right)^2\n=\n\\left( \\sum_{i = 1}^n x_i \\right)^2.\n\\end{equation}\nBy~(\\ref{eq:quad-form}) and positive definiteness of $Z$, we have\n$\\pv{y}^\\mathrm{t} Z_B \\pv{y} \\geq 0$, with equality if and only if $\\pv{x} = 0$,\nif and only if $\\pv{y} = 0$. So $Z_B$ is positive definite. Then\nby~(\\ref{eq:quad-form}), (\\ref{eq:square}) and Lemma~\\ref{lemma:pos-def-sup},\n$\\magn{Z_B} \\leq \\magn{Z}$. \n\nNow suppose that $m < n$ and the weighting $\\pv{w}$ on $Z$ is nowhere zero.\nThe supremum in Lemma~\\ref{lemma:pos-def-sup} is attained only at nonzero\nscalar multiples of $\\pv{w}$; in particular, any vector $\\pv{x}$ at which it\nis attained satisfies $x_n \\neq 0$. Let $\\pv{y}$ be the unique weighting\non $Z_B$ and let $\\pv{x}$ be the corresponding $n$-dimensional column vector,\nas above. Since $x_n = 0$, we have\n\\[\n\\magn{Z_B}\n=\n\\frac{(\\sum_{i = 1}^m y_i)^2}{\\pv{y}^\\mathrm{t} Z_B \\pv{y}}\n=\n\\frac{(\\sum_{i = 1}^n x_i)^2}{\\pv{x}^\\mathrm{t} Z \\pv{x}}\n<\n\\magn{Z},\n\\]\nas required. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{lemma} \\lbl{lemma:scattered}\nLet $Z$ be a similarity matrix with $Z_{ij} < 1\/(n - 1)$ for all $i \\neq j$.\nThen $Z$ is positive definite, and the unique weighting on $Z$ is positive.\n\\end{lemma}\n\n\\begin{proof}\nTheorem~2 of~\\cite{AMSES} shows that $Z$ is positive definite. Now,\nfor each $i \\in \\{1, \\ldots, n\\}$ and $r \\geq 0$, put\n\\[\nc_{i, r}\n= \n\\sum_{i = i_0 \\neq \\cdots \\neq i_r} \nZ_{i_0 i_1} Z_{i_1 i_2} \\cdots Z_{i_{r - 1} i_r},\n\\]\nwhere the sum is over all $i_0, \\ldots, i_r \\in \\{1, \\ldots, n\\}$ such\nthat $i_0 = i$ and $i_{s - 1} \\neq i_s$ whenever $1 \\leq s \\leq r$. In\nparticular, $c_{i, 0} = 1$. Write $\\gamma = \\max_{j \\neq k} Z_{jk}$. 
Then\nfor all $r \\geq 0$,\n\\begin{equation} \\label{eq:wt-alt}\nc_{i, r + 1} \n\\leq \n\\sum_{i = i_0 \\neq \\cdots \\neq i_r \\neq i_{r + 1}}\nZ_{i_0 i_1} Z_{i_1 i_2} \\cdots Z_{i_{r - 1} i_r} \\gamma\n=\n(n - 1)\\gamma \\cdot c_{i, r}.\n\\end{equation}\nHence $c_{i, r} \\leq ((n - 1)\\gamma)^r$ for all $r \\geq 0$; and $(n - 1)\\gamma\n< 1$, so the sum $w_i := \\sum_{r = 0}^\\infty (-1)^r c_{i, r}$ converges.\nAgain using~(\\ref{eq:wt-alt}), we have $c_{i, r + 1} < c_{i, r}$ for all $r$,\nso $w_i > 0$.\n\nIt remains to show that $\\pv{w} = (w_1, \\ldots, w_n)^\\mathrm{t}$ is a weighting.\nLet $i \\in \\{1, \\ldots, n\\}$. Then\n\\begin{eqnarray*}\n(Z\\pv{w})_i &= &\nw_i + \\sum_{j \\neq i} Z_{ij} w_j \\\\\n &= &\nw_i + \n\\sum_{j \\neq i} Z_{ij} \n\\sum_{r = 0}^\\infty (-1)^r\n\\sum_{j = j_0 \\neq \\cdots \\neq j_r} Z_{j_0 j_1} \\cdots Z_{j_{r-1} j_r} \\\\\n &= &\nw_i - \n\\sum_{r = 0}^\\infty (-1)^{r + 1} c_{i, r + 1} \\\\\n &= &\nw_i - (w_i - c_{i, 0}) \n\\\\\n &= &\n1,\n\\end{eqnarray*}\nas required.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\n\\pagebreak\n\n\n\\section{The main theorem}\n\n\n\\passage{Solution to the maximum diversity problem}\n\n\\begin{thm}[Main Theorem] \\lbl{thm:main}\nLet $Z$ be a similarity matrix. Then:\n\\begin{enumerate}\n\\item \\lbl{part:main-value}\nFor all $q \\in [0, \\infty]$,\n\\begin{equation} \\label{eq:sup-max}\n\\sup_{\\pv{p}} D_q^Z(\\pv{p})\n=\n\\max_B \\magn{Z_B}\n\\end{equation}\nwhere the supremum is over all distributions $\\pv{p}$ and the\nmaximum is over all subsets $B \\subseteq \\{1, \\ldots, n\\}$ such that $Z_B$\nadmits a non-negative weighting.\n\\item \\lbl{part:main-place}\nThe maximizing distributions are precisely those of the form\n$\\pd{\\pv{w}}$, where $\\pv{w}$ is a non-negative weighting on a subset\n$B \\subseteq \\{1, \\ldots, n\\}$ such that $\\magn{Z_B}$ attains the\nmaximum~(\\ref{eq:sup-max}). \n\\end{enumerate}\nIn particular, there exists a maximizing distribution, and the maximum\ndiversity of order $q$ is the same for all $q \\in [0, \\infty]$.\n\\end{thm}\n\nFor the definitions, including that of `maximizing distribution', see\nSection~\\ref{sec:prep}.\nThe proof is given later in this section. First we make some remarks on\ncomputation and on maximum entropy. \n\nThe \\demph{maximum diversity} of a similarity matrix $Z$ is $\\Dmax{Z} :=\n\\sup_\\pv{p} D_q^Z(\\pv{p})$, which by Theorem~\\ref{thm:main} is independent of\nthe value of $q \\in [0, \\infty]$.\n\n\n\\passage{Remarks on computation}\nSuppose that we are given a similarity matrix $Z$ and want to compute its\nmaximizing distribution(s) and maximum diversity. The theorem gives the\nfollowing algorithm. For each of the $2^n$ subsets $B$ of $\\{1, \\ldots, n\\}$:\n\\begin{itemize}\n\\item perform some simple linear algebra to decide whether $Z_B$ admits a\nnon-negative weighting\n\\item if it does, tag $B$ as `good' and record the magnitude $\\magn{Z_B}$\n(the sum of the entries of any weighting).\n\\end{itemize}\nThe maximum of all the recorded magnitudes is the maximum\ndiversity $\\Dmax{Z}$. For each good $B$ such that $\\magn{Z_B} =\n\\Dmax{Z}$, find all non-negative weightings $\\pv{w}$ on $Z_B$; the\ncorresponding distributions $\\pd{\\pv{w}}$ are the maximizing distributions.\n\nThis algorithm takes exponentially many steps. However, each step is fast, so\nit might be possible to handle reasonably large values of $n$ in a reasonable\nlength of time. 
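
To make the procedure concrete, here is a minimal Python sketch of this brute-force search (an illustration added for this exposition, not code from the original analysis; it assumes \\texttt{numpy} and, for simplicity, tests only the least-squares solution of $Z_B \\pv{w} = (1, \\ldots, 1)^\\mathrm{t}$, so for singular submatrices it may overlook other non-negative weightings):

\\begin{verbatim}
import numpy as np
from itertools import combinations

def maximum_diversity(Z, tol=1e-9):
    # Search all nonempty subsets B; keep those whose submatrix Z_B
    # admits a non-negative weighting and record the magnitude |Z_B|.
    n = Z.shape[0]
    best_mag, best_dists = 0.0, []
    for r in range(1, n + 1):
        for B in combinations(range(n), r):
            ZB = Z[np.ix_(B, B)]
            w, *_ = np.linalg.lstsq(ZB, np.ones(r), rcond=None)
            if not (np.allclose(ZB @ w, 1.0, atol=1e-8) and np.all(w >= -tol)):
                continue                      # no non-negative weighting found
            mag = float(w.sum())              # magnitude = sum of the weights
            if mag > best_mag + tol:
                best_mag, best_dists = mag, []
            if abs(mag - best_mag) <= tol:
                p = np.zeros(n)
                p[list(B)] = w / mag          # maximizing distribution w / |Z_B|
                best_dists.append(p)
    return best_mag, best_dists
\\end{verbatim}

As a sanity check, \\texttt{maximum\\_diversity(np.eye(3))} returns magnitude $3$ together with the uniform distribution, in agreement with the identity-matrix example discussed in Section~\\ref{sec:cors}.
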
Moreover, the results of Section~\\ref{sec:cors} may allow\nthe speed of the algorithm to be improved.\n\n\\passage{Solution to the maximum entropy problem}\nWe can translate the solution to the maximum \\emph{diversity} problem into a\nsolution to the maximum \\emph{entropy} problem. The first part, giving the\nvalue of the maximum, becomes more complicated. The second part, giving the\nmaximizing distribution(s), is unchanged.\n\n\\begin{thm} \\lbl{thm:main-entropy}\nLet $Z$ be a similarity matrix. Then:\n\\begin{enumerate}\n\\item \nFor all $q \\in [0, \\infty)$,\n\\begin{equation} \\label{eq:sup-max-ent}\n\\sup_{\\pv{p}} H_q^Z(\\pv{p})\n=\n\\left\\{\n\\begin{array}{ll}\n\\max_B \\frac{1}{q - 1} \\left( 1 - \\magn{Z_B}^{1 - q} \\right) &\n\\textrm{if } q \\neq 1 \\\\\n\\max_B \\log \\magn{Z_B} &\n\\textrm{if } q = 1\n\\end{array}\n\\right.\n\\end{equation}\nwhere the supremum is over all distributions $\\pv{p}$ and the\nmaxima are over all subsets $B \\subseteq \\{1, \\ldots, n\\}$ such that $Z_B$\nadmits a non-negative weighting.\n\\item\nThe maximizing distributions are precisely those of the form\n$\\pd{\\pv{w}}$, where $\\pv{w}$ is a non-negative weighting on a subset\n$B \\subseteq \\{1, \\ldots, n\\}$ such that $\\magn{Z_B}$ is maximal among all subsets\nadmitting a non-negative weighting.\n\\end{enumerate}\nIn particular, there exists a maximizing distribution.\n\\end{thm}\n\n\\begin{proof}\nThis follows almost immediately from Theorem~\\ref{thm:main}, using the\ndefinition of $D_q^Z$ in terms of $H_q^Z$. Note that on the right-hand side\nof~(\\ref{eq:sup-max-ent}), the expressions $\\frac{1}{q - 1}(1 - \\magn{Z_B}^{1\n- q})$ and $\\log \\magn{Z_B}$ are increasing, injective functions of\n$\\magn{Z_B}$, so a subset $B$ maximizes any one of them if and only if it\nmaximizes $\\magn{Z_B}$. \\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThe part of Theorem~\\ref{thm:main} stating that the maximum diversity of order\n$q$ is the same for all values of $q$ has no clean statement in terms of\nentropy.\n\n\n\\passage{Diversity of order zero}\n\nOur proof of Theorem~\\ref{thm:main} will depend on an analysis of the function\n$D_0^Z$, diversity of order zero. The first step is to find its critical\npoints, and for that we need a technical lemma.\n\n\\begin{lemma} \\lbl{lemma:skew}\nLet $m \\geq 1$, let $Y$ be an $m \\times m$ real skew-symmetric matrix, and let\n$\\pv{x} \\in (0, \\infty)^m$. Suppose that $Y_{ij} \\geq 0$ whenever $i \\geq j$\nand that $\\sum_{j=1}^m Y_{ij} x_j$ is independent of $i \\in \\{1, \\ldots, m\\}$.\nThen $Y = 0$.\n\\end{lemma}\n\n\\begin{proof}\nThis is true for $m = 1$; suppose inductively that $m \\geq 2$. We\nhave \n\\[\n\\sum_{j = 1}^m Y_{1j} x_j \n= \n\\sum_{j = 1}^m Y_{mj} x_j\n\\]\nwith $Y_{1j} x_j = -Y_{j1} x_j \\leq 0$ and $Y_{mj} x_j \\geq 0$ for all $j$;\nhence both sides are $0$ and $Y_{mj} x_j = 0$ for all $j$. So for all $j$ we\nhave $Y_{mj} = 0$ (since $x_j > 0$) and $Y_{jm} = 0$ (by skew-symmetry). Let\n$Y'$ be the $(m - 1) \\times (m - 1)$ matrix defined by $Y'_{ij} = Y_{ij}$.\nThen $Y'$ satisfies the conditions of the inductive hypothesis, so $Y' = 0$;\nthat is, $Y_{ij} = 0$ whenever $i, j < m$. But we already have $Y_{ij} = 0$\nwhenever $i = m$ or $j = m$, so $Y = 0$, completing the induction. 
\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nWrite\n\\[\n\\smp{n}\n=\n\\{ \\pv{p} \\in \\mathbb{R}^n \n\\:|\\: \n\\sum p_i = 1,\n\\ p_i \\geq 0 \n\\}\n\\]\nfor the space of distributions, and\n\\[\n\\ismp{n}\n=\n\\{ \\pv{p} \\in \\mathbb{R}^n \n\\:|\\: \n\\sum p_i = 1,\n\\ p_i > 0 \n\\}\n\\]\nfor the space of nowhere-zero distributions. The function $D_0^Z$ on\n$\\smp{n}$ is given by\n\\[\nD_0^Z(\\pv{p})\n=\n\\sum \\frac{p_i}{(Z\\pv{p})_i}.\n\\]\n(Recall the standing convention that unlabelled summations are over all $i \\in\n\\{1, \\ldots, n\\}$ such that $p_i > 0$.) It can be defined, using the same\nformula, for all $\\pv{p} \\in [0, \\infty)^n$. It is then differentiable\non $(0, \\infty)^n$, where the summation is over all $i \\in \\{1, \\ldots, n\\}$.\n\n\\begin{propn}\nLet $Z$ be a similarity matrix and $\\pv{p} \\in \\ismp{n}$. Then $\\pv{p}$ is a\ncritical point of $D_0^Z$ on $\\ismp{n}$ if and only if for all $i, j \\in \\{1,\n\\ldots, n\\}$,\n\\[\nZ_{ij} > 0 \\,\\Rightarrow\\, (Z\\pv{p})_i = (Z\\pv{p})_j.\n\\]\n\\end{propn}\n\n\\begin{proof}\nWe find the critical points of $D_0^Z$ on $\\ismp{n}$ using Lagrange\nmultipliers and the fact that $\\ismp{n}$ is the intersection of $(0,\n\\infty)^n$ with the hyperplane $\\{ \\pv{p} \\in \\mathbb{R}^n \\:|\\: \\sum p_i = 1\n\\}$. Write $h(\\pv{p}) = \\sum p_i - 1$.\n\nFor $k, i \\in \\{1, \\ldots, n\\}$ and $\\pv{p} \\in (0, \\infty)^n$ we have\n$\\ptl{p_k} (Z\\pv{p})_i = Z_{ik}$, giving\n\\[\n\\ptl{p_k} \n\\left(\n\\frac{p_i}{(Z\\pv{p})_i}\n\\right)\n=\n\\left\\{\n\\begin{array}{ll}\n\\displaystyle\n\\frac{1}{(Z\\pv{p})_k^2} \\sum_{j \\neq k} Z_{kj} p_j &\n\\textrm{if } k = i \\\\\n\\displaystyle\n-\\frac{1}{(Z \\pv{p})_i^2} Z_{ik} p_i &\n\\textrm{otherwise.}\n\\end{array}\n\\right.\n\\]\nFrom this and symmetry of $Z$ we deduce that for $k \\in \\{1, \\ldots, n\\}$ and\n$\\pv{p} \\in (0, \\infty)^n$,\n\\[\n\\ptl{p_k} D_0^Z(\\pv{p})\n=\n\\sum_{i = 1}^n Y_{ki} p_i\n\\]\nwhere\n\\[\nY_{ki} \n=\n\\left(\n\\frac{1}{(Z\\pv{p})_k^2} - \\frac{1}{(Z\\pv{p})_i^2} \n\\right)\nZ_{ki}.\n\\]\nOn the other hand, \n\\[\n\\ptl{p_k} h(\\pv{p})\n=\n1\n\\]\nfor all $k$. A point $\\pv{p} \\in \\ismp{n}$ is a critical point of $D_0^Z$ on\n$\\ismp{n}$ if and only if there exists a scalar $\\lambda$ such that $(\\nabla\nD_0^Z)(\\pv{p}) = \\lambda (\\nabla h)(\\pv{p})$. Hence $\\pv{p}$ is critical if\nand only if $\\sum_i Y_{ki} p_i$ is independent of $k \\in \\{1, \\ldots, n\\}$.\nSo the proposition is equivalent to the statement that, for $\\pv{p} \\in\n\\ismp{n}$, the sum $\\sum Y_{ki} p_i$ is independent of $k$ if and only if the\nmatrix $Y$ is $0$.\n\nThe `if' direction is trivial. Conversely, suppose that $\\sum_i Y_{ki} p_i$\nis independent of $k$. Assume without loss of generality that $(Z \\pv{p})_1\n\\geq \\cdots \\geq (Z\\pv{p})_n$. Then Lemma~\\ref{lemma:skew} applies (taking\n$\\pv{x} = \\pv{p}$), and $Y = 0$. \\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nLet $\\sim$ be the equivalence relation on\n$\\{1, \\ldots, n\\}$ generated by $i \\sim j$ whenever $Z_{ij} > 0$. Thus, $i\n\\sim j$ if and only if there is a chain $i = i_1, i_2, \\ldots, i_{r-1}, i_r =\nj$ with $Z_{i_t, i_{t+1}} > 0$ for all $t$. Call $Z$ \\demph{connected} if $i\n\\sim j$ for all $i, j$.\n\n\\begin{cor} \\lbl{cor:conn-crit}\nLet $Z$ be a connected similarity matrix. Then every critical point of $D_0^Z$\nin $\\ismp{n}$ is a weight distribution. 
\n\\hfill\\ensuremath{\\Box}\n\\end{cor}\n\nWe are aiming to show, among other things, that there is a $0$-maximizing\ndistribution: the function $D_0^Z$ on $\\smp{n}$ attains its supremum. Its\nsupremum is finite, since by Lemma~\\ref{lemma:positive}, $D_0^Z(\\pv{p}) \\leq\nn$ for all distributions $\\pv{p}$. If $D_0^Z$ is continuous on $\\smp{n}$ then\nit certainly does attain its supremum. But in general, it is not. For\nexample, if $Z = I$ then $D_0^Z(\\pv{p})$ is the cardinality of the support of\n$\\pv{p}$, which is not continuous in $\\pv{p}$. We must therefore use another\nargument to establish the existence of a $0$-maximizing distribution. The\nfollowing lemma will help us.\n\n\\begin{lemma} \\lbl{lemma:seqs}\nLet $Z$ be a connected similarity matrix and let $(\\pv{p}^{k})_{k \\in \\nat}$\nbe a sequence in $\\smp{n}$. Then $(\\pv{p}^k)_{k \\in \\nat}$ has a subsequence\n$(\\pv{p}^k)_{k \\in S}$ satisfying at least one of the following conditions:\n\\begin{enumerate}\n\\item \\lbl{part:seqs-boundary}\nthere is some $i \\in \\{1, \\ldots, n\\}$ such that $p^k_i = 0$ for all $k\n\\in S$\n\\item \\lbl{part:seqs-interior}\nthe subsequence $(\\pv{p}^k)_{k \\in S}$ lies in $\\ismp{n}$ and converges\nto some point of $\\ismp{n}$\n\\item \\lbl{part:seqs-quotient}\nthe subsequence $(\\pv{p}^k)_{k \\in S}$ lies in $\\ismp{n}$, and there is\nsome $i \\in \\{1, \\ldots, n\\}$ such that\n$\\lim_{k \\in S} \\left(p^k_i\/(Z\\pv{p}^k)_i\\right) = 0$.\n\\end{enumerate}\n\\end{lemma}\nHere and in what follows, we treat sequences as families $(x_k)_{k \\in T}$\nindexed over some infinite subset $T$ of $\\nat$. A subsequence of such a\nsequence therefore amounts to an infinite subset of $T$.\n\n\\begin{proof}\nIf there exist infinitely many pairs $(k, i) \\in \\nat \\times \\{1, \\ldots, n\\}$\nsuch that $p^k_i = 0$ then there is some $i \\in \\{1, \\ldots, n\\}$ such that\n$\\{k \\in \\nat \\:|\\: p^k_i = 0\\}$ is infinite. Taking $S = \\{k \\in \\nat\n\\:|\\: p^k_i = 0\\}$ then gives condition~(\\ref{part:seqs-boundary}). \n\nSuppose, then, that there are only finitely many such pairs. We may choose a\nsubsequence $(\\pv{p}^k)_{k \\in Q}$ of $(\\pv{p}^k)_{k \\in \\nat}$ lying in\n$\\ismp{n}$. Further, since $\\smp{n}$ is compact, we may choose a subsequence\n$(\\pv{p}^k)_{k \\in R}$ of $(\\pv{p}^k)_{k \\in Q}$ converging to some point\n$\\pv{p} \\in \\smp{n}$. If $\\pv{p} \\in \\ismp{n}$\nthen $(\\pv{p}^k)_{k \\in R}$ satisfies~(\\ref{part:seqs-interior}).\n\nSuppose, then, that $\\pv{p} \\not\\in \\ismp{n}$; say $p_\\ell = 0$ where $\\ell\n\\in \\{1, \\ldots, n\\}$. \n\nDefine a binary relation $\\trianglelefteq$ on $\\{1, \\ldots, n\\}$ by $i \\trianglelefteq j$ if\nand only if $(p^k_i\/p^k_j)_{k \\in R}$ is bounded (that is, bounded above).\nThen $\\trianglelefteq$ is reflexive and transitive, and if $i \\trianglelefteq j$ and $p_j = 0$\nthen $p_i = 0$. Write $i \\approx j$ for $i \\trianglelefteq j \\trianglelefteq i$; then\n$\\approx$ is an equivalence relation.\n\nI claim that there exist $i, j \\in \\{1, \\ldots, n\\}$ with $Z_{ij} > 0$ and $i\n\\not\\trianglelefteq j$. For if not, the equivalence relation $\\approx$ satisfies\n$Z_{ij} > 0 \\,\\Rightarrow\\, i \\approx j$, and since $Z$ is connected, $i \\approx j$\nfor all $i, j$. But then $i \\trianglelefteq \\ell$ for all $i$, and $p_\\ell = 0$, so\n$p_i = 0$ for all $i$. This contradicts $\\pv{p}$ being a distribution,\nproving the claim.\n\nNow without loss of generality, $Z_{1n} > 0$ and $1 \\not\\trianglelefteq n$. 
So\n$(p^k_1\/p^k_n)_{k \\in R}$ is unbounded. We may choose an infinite subset $S\n\\subseteq R$ such that $\\lim_{k \\in S} (p^k_1\/p^k_n) = \\infty$. For all $k\n\\in S$ we have\n\\[\n\\frac{(Z\\pv{p}^k)_n}{p^k_n}\n\\geq\n\\frac{Z_{n1} p^k_1}{p^k_n}\n\\]\nwith $Z_{n1} = Z_{1n} > 0$, so\n\\[\n\\lim_{k \\in S}\n\\left(\n\\frac{(Z\\pv{p}^k)_n}{p^k_n}\n\\right)\n=\n\\infty,\n\\]\nand condition~(\\ref{part:seqs-quotient}) follows.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\passage{Existence of a maximizing distribution} At the heart of\nTheorem~\\ref{thm:main} is the following result, from which we will deduce the\ntheorem itself.\n\n\\begin{propn} \\lbl{propn:heart}\nEvery similarity matrix has a maximizing distribution, and every maximizing\ndistribution is invariant. \n\\end{propn}\n\n\\begin{proof}\nLet $Z$ be a similarity matrix. It is enough to prove that $Z$ admits an\ninvariant maximizing distribution: for if $\\pv{p}$ and $\\pv{p}'$ are both\nmaximizing then $D_q^Z(\\pv{p}) = D_q^Z(\\pv{p}')$ for all $q$, so $\\pv{p}$ is\ninvariant if and only if $\\pv{p}'$ is.\n\nThe result holds for $n = 1$. Suppose inductively that $n \\geq 2$.\n\n\\emph{Case 1: $Z$ is not connected.} We may partition $\\{1, \\ldots, n\\}$ into\ntwo nonempty subsets, $B$ and $B'$, each of which is a union of\n$\\sim$-equivalence classes (where $\\sim$ is as defined before\nCorollary~\\ref{cor:conn-crit}). Then $B$ and $B'$ are complementary, and by\ninductive hypothesis, $Z_B$ and $Z_{B'}$ each admit an invariant maximizing\ndistribution. So by Proposition~\\ref{propn:comp-invt-max}, $Z$ admits one\ntoo. \n\n\\emph{Case 2: $Z$ is connected.} Write $\\sigma = \\sup_{\\pv{p}}\nD_0^Z(\\pv{p})$. We may choose a sequence $(\\pv{p}^k)_{k \\in \\nat}$ in\n$\\smp{n}$ with $\\lim_{k \\to \\infty} D_0^Z(\\pv{p}^k) = \\sigma$. By\nLemma~\\ref{lemma:seqs}, at least one of the following three conditions holds.\n\\begin{enumerate}\n\\item There is a subsequence $(\\pv{p}^k)_{k \\in S}$ such that (without loss of\ngenerality) $p^k_n = 0$ for all $k \\in S$. Write $B = \\{1, \\ldots, n - 1\\}$.\nDefine a sequence $(\\pv{r}^k)_{k \\in S}$ in $\\smp{n - 1}$ by \n\\[\n\\pv{r}^k = (p^k_1, \\ldots, p^k_{n - 1}).\n\\]\nThen for all $k \\in S$ we have $D_0^{Z_B}(\\pv{r}^k) = D_0^Z(\\pv{p}^k)$ (by\nLemma~\\ref{lemma:absent}), so $\\sup_{k \\in S} D_0^{Z_B}(\\pv{r}^k) = \\sigma$.\nThen by Lemma~\\ref{lemma:smaller-max} and inductive hypothesis, $Z$ admits an\ninvariant maximizing distribution.\n\n\\item There is a subsequence $(\\pv{p^k})_{k \\in S}$ in $\\ismp{n}$ convergent\nto some point $\\pv{p} \\in \\ismp{n}$. Since $D_0^Z$ is continuous on\n$\\ismp{n}$,\n\\[\nD_0^Z(\\pv{p})\n=\n\\lim_{k \\in S} D_0^Z(\\pv{p}^k)\n=\n\\sigma.\n\\]\nSo $\\pv{p}$ is $0$-maximizing. Now $\\pv{p}$ is a critical point of $D_0^Z$ on\n$\\ismp{n}$, and $Z$ is connected, so by Corollary~\\ref{cor:conn-crit},\n$\\pv{p}$ is a weight distribution. By Lemma~\\ref{lemma:mag-div},\n$\\pv{p}$ is invariant; then by Lemma~\\ref{lemma:max}, $\\pv{p}$ is maximizing.\n\n\\item There is a subsequence $(\\pv{p}^k)_{k \\in S}$ in $\\ismp{n}$ such that\n(without loss of generality) $\\lim_{k \\in S} \\left( p^k_n \/ (Z\\pv{p}^k)_n\n\\right) = 0$. Write $B = \\{1, \\ldots, n - 1\\}$. Define a sequence\n$(\\pv{r}^k)_{k \\in S}$ in $\\ismp{n - 1}$ by $\\pv{r}^k = \\rstr{\\pv{p}^k}{B}$\n(which is possible because $\\pv{p}^k \\in \\ismp{n}$ and $n \\geq 2$). 
Then for\nall $k \\in S$ and $i \\in B$,\n\\[\n\\frac{r^k_i}{(Z_B \\pv{r}^k)_i}\n=\n\\frac{p^k_i}{\\sum_{j = 1}^{n - 1} Z_{ij} p^k_j}\n=\n\\frac{p^k_i}{(Z\\pv{p}^k)_i - Z_{in} p^k_n}\n\\geq\n\\frac{p^k_i}{(Z\\pv{p}^k)_i}.\n\\]\nHence for all $k \\in S$,\n\\[\nD_0^{Z_B}(\\pv{r}^k)\n=\n\\sum_{i = 1}^{n - 1}\n\\frac{r^k_i}{(Z_B \\pv{r}^k)_i}\n\\geq\n\\sum_{i = 1}^{n - 1}\n\\frac{p^k_i}{(Z\\pv{p}^k)_i}\n=\nD_0^Z(\\pv{p}^k) - \\frac{p^k_n}{(Z\\pv{p}^k)_n}.\n\\]\nBut \n\\[\n\\lim_{k \\in S} \n\\left(\nD_0^Z(\\pv{p}^k) - \\frac{p^k_n}{(Z\\pv{p}^k)_n}\n\\right)\n=\n\\sigma - 0\n=\n\\sigma,\n\\]\nso $\\sup_{k \\in S} D_0^{Z_B}(\\pv{r}^k) \\geq \\sigma$. Then by\nLemma~\\ref{lemma:smaller-max} and inductive hypothesis, $Z$ admits an\ninvariant maximizing distribution.\n\\end{enumerate}\nSo in all cases there is an invariant maximizing distribution, completing\nthe induction.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\passage{Proof of the Main Theorem,~\\ref{thm:main}}\n\\begin{enumerate}\n\\item Let $q \\in [0, \\infty]$. By Proposition~\\ref{propn:heart}, the supremum\n$\\sup_\\pv{p} D_q^Z(\\pv{p})$ is unchanged if $\\pv{p}$ is taken to run over only\nthe invariant distributions. By Proposition~\\ref{propn:invt}, any invariant\ndistribution is of the form $\\pd{\\pv{w}}$ for some non-negative weighting\n$\\pv{w}$ on some nonempty subset $B \\subseteq \\{1, \\ldots, n\\}$. Hence \n\\[\n\\sup_{\\pv{p}} D_q^Z(\\pv{p})\n=\n\\max_{B, \\pv{w}} D_q^Z(\\pd{\\pv{w}})\n\\]\nwhere the maximum is over all nonempty $B$ and non-negative weightings\n$\\pv{w}$ on $B$. But for any such $B$ and $\\pv{w}$ we have \n\\[\nD_q^Z(\\pd{\\pv{w}}) \n= \nD_q^{Z_B}(\\pv{w}\/\\magn{Z_B})\n=\n\\magn{Z_B}\n\\]\nby Lemmas~\\ref{lemma:absent} and~\\ref{lemma:mag-div} respectively. Hence \n\\[\n\\sup_{\\pv{p}} D_q^Z(\\pv{p})\n=\n\\max_B \\magn{Z_B}\n\\]\nwhere the maximum is now over all nonempty $B \\subseteq \\{1, \\ldots, n\\}$ such that\nthere exists a non-negative weighting on $Z_B$. And since $\\magn{\\emptyset} =\n0$, it makes no difference if we allow $B$ to be empty.\n\n\\item Any maximizing distribution is invariant, by\nProposition~\\ref{propn:heart}. The result now follows from\nProposition~\\ref{propn:invt}. \n\\hfill\\ensuremath{\\Box}\n\\end{enumerate}\n\n\n\n\n\\section{Corollaries and examples}\n\\lbl{sec:cors}\n\n\nHere we state some corollaries to the results of the previous section. The\nfirst is a companion to Lemma~\\ref{lemma:max}.\n\n\\begin{cor} \\lbl{cor:some-all}\nLet $Z$ be a similarity matrix and $q \\in (0, \\infty]$. Then a distribution\nis $q$-maximizing if and only if it is maximizing.\n\\end{cor}\n\nIn other words, if a distribution is $q$-maximizing for \\emph{some}\n$q > 0$ then it is $q$-maximizing for \\emph{all} $q \\geq 0$. The proof is\nbelow. \n\nHowever, a $0$-maximizing distribution is not necessarily maximizing. Take $Z\n= I$, for example. Then $D_0^Z(\\pv{p}) = \\sum_{i:\\,p_i > 0} 1 = $ cardinality\nof $\\mr{supp}(\\pv{p})$, so any nowhere-zero distribution is $0$-maximizing. On\nthe other hand, only the uniform distribution $(1\/n, \\ldots, 1\/n)$ is\nmaximizing. So the restriction $q \\neq 0$ cannot be dropped from\nCorollary~\\ref{cor:some-all}, nor can the word `invariant' be dropped\nfrom Lemma~\\ref{lemma:max}.\n\n\\begin{proof}\nLet $\\pv{p}$ be a $q$-maximizing distribution. Then\n\\[\nD_q^Z(\\pv{p})\n=\n\\Dmax{Z}\n\\geq\nD_0^Z(\\pv{p})\n\\geq\nD_q^Z(\\pv{p}),\n\\]\nwhere the second inequality is by Lemma~\\ref{lemma:means}. 
So we have\nequality throughout, and in particular $D_0^Z(\\pv{p}) = D_q^Z(\\pv{p})$. But\n$q > 0$, so by Lemma~\\ref{lemma:means}, $\\pv{p}$ is invariant. Hence for all\n$q' \\in [0, \\infty]$,\n\\[\nD_{q'}^Z(\\pv{p}) = D_q^Z(\\pv{p}) = \\Dmax{Z},\n\\]\nand therefore $\\pv{p}$ is maximizing.\n\\hfill\\ensuremath{\\Box} \n\\end{proof}\n\nThe importance of Corollary~\\ref{cor:some-all} is that if one has solved the\nproblem of maximizing entropy or diversity of any particular order $q > 0$,\nthen one has solved the problem of maximizing entropy and diversity of all\norders. In the following example, we observe that for a certain class of\nsimilarity matrices, the problem of maximizing the entropy of order $2$ has\nalready been solved in the literature; we can immediately deduce a\nmore general maximum entropy theorem.\n\n\\begin{example} \\lbl{eg:graphs}\nLet $G$ be a finite reflexive graph. Thus, $G$ consists of a finite set $\\{1,\n\\ldots, n\\}$ of vertices equipped with a reflexive symmetric binary relation\n$E$. Such graphs correspond to similarity matrices $Z$ whose entries are all\n$0$ or $1$, taking $Z_{ij} = 1$ if $(i, j) \\in E$ and $Z_{ij} = 0$\notherwise.\n\nFor each $q \\in [0, \\infty]$ and distribution $\\pv{p}$, define\n$D_q^G(\\pv{p}) = D_q^Z(\\pv{p})$, the \\demph{diversity of order $q$} of\n$\\pv{p}$ with respect to $G$. Thus,\n\\[\nD_q^G(\\pv{p})\n=\n\\left\\{\n\\renewcommand{\\arraystretch}{3}\n\\begin{array}{ll}\n\\displaystyle\n\\left( \\sum_{i:\\,p_i > 0} p_i\n\\left( \\sum_{j:\\,(i, j) \\in E} p_j \\right)^{q - 1}\n\\right)^\\frac{1}{1 - q} &\n\\textrm{if } q \\neq 1, \\infty \\\\\n\\displaystyle\n\\prod_{i:\\,p_i > 0} \n\\left( \\sum_{j:\\,(i, j) \\in E} p_j \\right)^{-p_i} &\n\\textrm{if } q = 1 \\\\\n\\displaystyle\n1\/\\max_{i:\\,p_i > 0} \\sum_{j:\\,(i, j) \\in E} p_j &\n\\textrm{if } q = \\infty.\n\\end{array}\n\\renewcommand{\\arraystretch}{1}\n\\right.\n\\]\n\nA set $K \\subseteq \\{1, \\ldots, n\\}$ of vertices of $G$ is \\demph{discrete} if $(i,\nj) \\not\\in E$ whenever $i, j \\in K$ with $i \\neq j$. Write $\\mr{d}(G)$\nfor the largest integer $d$ such that there exists a discrete set in $G$\nof cardinality $d$. Also, given any\nnonempty set $B \\subseteq \\{1, \\ldots, n\\}$, write $\\pv{p}^B$ for the distribution\n\\[\n\\pv{p}^B_i\n=\n\\left\\{\n\\begin{array}{ll}\n1\/|B| &\\textrm{if } i \\in B \\\\\n0 &\\textrm{otherwise}.\n\\end{array}\n\\right.\n\\]\n\n\\emph{Claim:} For all $q \\in [0, \\infty]$,\n\\[\n\\sup_{\\pv{p}} D_q^G(\\pv{p}) = \\mr{d}(G),\n\\]\nand the supremum is attained at $\\pv{p}^K$ for any discrete set $K$ of\ncardinality $\\mr{d}(G)$.\n\n\\emph{Proof:} We use the following result of Berarducci, Majer and\nNovaga~\\cite{BMN}. Let $G'$ be a finite \\emph{ir}reflexive graph with $n$\nvertices, that is, an irreflexive symmetric binary relation $E'$ on $\\{1,\n\\ldots, n\\}$. A set $K$ of vertices of $G'$ is a \\demph{clique} (or complete\nsubgraph) if $(i, j) \\in E'$ whenever $i, j \\in K$ with $i \\neq j$. Write\n$\\mr{c}(G')$ for the largest integer $c$ such that there exists a clique in\n$G'$ of cardinality $c$. Their Proposition~4.1 states that\n\\[\n\\sup_{\\pv{p}} \\sum_{(i, j) \\in E'} p_i p_j\n=\n1 - \\frac{1}{\\mr{c}(G')}\n\\]\n(which they call the `capacity' of $G'$). Their proof shows that the supremum\nis attained at $\\pv{p}^K$ for any clique $K$ of cardinality\n$\\mr{c}(G')$. \n\nWe are given a graph $G$. 
Let $G'$ be its dual graph, with the same vertex-set\nand with edge-relation $E'$ defined by $(i, j) \\in E'$ if and only if $(i, j)\n\\not\\in E$. Then $G'$ is irreflexive, a clique in $G'$ is the same as a\ndiscrete set in $G$, and $\\mr{c}(G') = \\mr{d}(G)$. For any distribution\n$\\pv{p}$,\n\\[\n\\sum_{(i, j) \\in E'} p_i p_j \n=\n1 - \\sum_{(i, j) \\in E} p_i p_j\n=\nH_2^Z(\\pv{p}).\n\\]\nLet $K$ be a clique in $G'$ of maximal cardinality, that is, a discrete set in\n$G$ of maximal cardinality. Then by~\\cite{BMN}, $\\pv{p}^K$ is 2-maximizing and\n\\[\nH_2^Z(\\pv{p}^K) \n= \n1 - \\frac{1}{\\mr{c}(G')} \n=\n1 - \\frac{1}{\\mr{d}(G)}.\n\\]\nBut it is a completely general fact that \n\\[\nH_2^Z(\\pv{p}) \n= \n1 - \\frac{1}{D_2^Z(\\pv{p})}\n\\]\nfor all $\\pv{p}$, directly from the definitions in\nSection~\\ref{sec:statement}. Hence $\\mr{d}(G) = D_2^Z(\\pv{p}^K) =\nD_2^G(\\pv{p}^K)$, and the claim holds for $q = 2$.\nCorollary~\\ref{cor:some-all} now tells us that the claim holds for all $q \\in\n[0, \\infty]$.\n\nThis class of examples tells us that a similarity matrix may have several\ndifferent maximizing distributions, and that a maximizing distribution\n$\\pv{p}$ may have $p_i = 0$ for some values of $i$. These phenomena have\nbeen observed in the ecological literature in the case $q = 2$ (Rao's\nquadratic entropy): see Pavoine and Bonsall~\\cite{PB} and references therein. \n\\end{example}\n\nComputing the maximum diversity is potentially slow, because in principle\none has to go through all $2^n$ subsets of $\\{1, \\ldots, n\\}$. But if the\nsimilarity matrix satisfies some further conditions, a maximizing distribution\ncan be found very quickly:\n\n\\begin{cor} \\lbl{cor:pos-def-div}\nLet $Z$ be a positive definite similarity matrix whose unique weighting\n$\\pv{w}$ is non-negative. Then $\\Dmax{Z} = \\magn{Z}$. Moreover,\n$\\pv{w}\/\\magn{Z}$ is a maximizing distribution, and if $\\pv{w}$ is\npositive then it is the unique such.\n\\end{cor}\n\n\\begin{proof}\nFollows immediately from Proposition~\\ref{propn:pos-def-sub} and\nTheorem~\\ref{thm:main}. \n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{example} \\lbl{eg:ultra}\nA similarity matrix $Z$ is \\demph{ultrametric} if $\\min\\{Z_{ij}, Z_{jk}\\} \\leq\nZ_{ik}$ for all $i, j, k$ and $Z_{ij} < 1$ for all $i \\neq j$. As shown\nbelow, every ultrametric matrix is positive definite and its weighting\nis positive. Hence its maximum diversity is its magnitude, it has a unique\nmaximizing distribution, and that distribution is nowhere zero.\n\nUltrametric matrices are closely related to \\demph{ultrametric spaces}, that\nis, metric spaces satisfying a stronger version of the triangle inequality:\n\\[\n\\max\\{ d(a, b), d(b, c) \\} \\geq d(a, c)\n\\]\nfor all points $a, b, c$. Any finite metric space $A = \\{a_1, \\ldots, a_n\\}$\ngives rise to a similarity matrix $Z$ by putting $Z_{ij} = e^{-d(a_i, a_j)}$,\nand if the space $A$ is ultrametric then so is the matrix $Z$.\n\nUltrametric matrices also arise in the quantification of biodiversity. 
Take a\ncollection of $n$ species, and suppose, for example, that we choose a\ntaxonomic measure of species similarity:\n\\[\nZ_{ij} =\n\\left\\{\n\\begin{array}{ll}\n1 &\\textrm{if } i = j \\\\\n0.8 &\\textrm{if $i \\neq j$ but the $i$th and $j$th species are of the same\ngenus} \\\\\n0.6 &\\textrm{if the $i$th and $j$th species are of different genera but\nthe same family} \\\\\n0 &\\textrm{otherwise.}\n\\end{array}\n\\right.\n\\]\nThis is an ultrametric matrix, so is guaranteed to have a unique maximizing\ndistribution. That distribution is nowhere zero: maximizing diversity\ndoes not eradicate any species. The same conclusion for general ultrametric\nmatrices was reached, in the case $q = 2$, by Pavoine, Ollier and\nPontier~\\cite{POP}. \n\nWe now prove that any ultrametric matrix $Z$ is positive definite with\npositive weighting. That $Z$ is positive definite was also proved by Varga\nand Nabben~\\cite{VN}, and that the weighting is positive was also proved\nin~\\cite{POP}. The following proof, which is probably not new either, seems\nmore direct.\n\nIf $n = 1$ then certainly $Z$ is positive definite and its weighting is\npositive. \n\nSuppose inductively that $n \\geq 2$. Write $z = \\min_{i, j} Z_{ij} < 1$. By\nthe ultrametric property, there is an equivalence relation $\\simeq$ on\n$\\{1, \\ldots, n\\}$ defined by $i \\simeq j$ if and only if $Z_{ij} > z$. We\nmay partition $\\{1, \\ldots, n\\}$ into two nonempty subsets, $B$ and $B'$, each\nof which is a union of $\\simeq$-equivalence classes; and without loss of\ngenerality, $B = \\{1, \\ldots, m\\}$ and $B' = \\{m + 1, \\ldots, n\\}$, where $1\n\\leq m < n$. For all $i \\leq m$ and $j \\geq m + 1$ we have $Z_{ij} \\leq z$,\nthat is, $Z_{ij} = z$. Hence \n\\[\nZ \n=\n\\left(\\!\\!\n\\begin{array}{cc}\nY &z\\um{m}{n - m} \\\\\nz\\um{n - m}{m} &Y'\n\\end{array}\n\\!\\!\\right)\n\\]\nwhere $Y$ is some $m \\times m$ matrix, $Y'$ is some $(n - m) \\times (n - m)$\nmatrix, and $\\um{k}{\\ell}$ denotes the $k \\times \\ell$ matrix all of whose\nentries are $1$. Now $Y$ and $Y'$ are ultrametric with entries in $[z, 1]$,\nso the matrices\n\\[\nX = \\frac{1}{1 - z} (Y - z\\um{m}{m}),\n\\qquad\nX' = \\frac{1}{1 - z}(Y' - z\\um{n - m}{n - m})\n\\]\nare also ultrametric. By inductive hypothesis, $X$ and $X'$ are\npositive definite and their respective weightings are positive.\n\nWe have\n\\begin{equation} \\lbl{eq:ultra-inductive}\nZ \n= \nz \\um{n}{n} + \n(1 - z)\n\\left(\\!\\!\n\\begin{array}{cc}\nX &0 \\\\\n0 &X'\n\\end{array}\n\\!\\!\\right).\n\\end{equation}\nThe matrix $\\um{n}{n}$ is positive-semidefinite, since $\\pv{x}^\\mathrm{t}\n\\um{n}{n} \\pv{x} = (x_1 + \\cdots + x_n)^2$ for all $\\pv{x} \\in \\mathbb{R}^n$.\nAlso $\\left(\\!\\!\n\\begin{array}{cc}\nX &0 \\\\\n0 &X'\n\\end{array}\n\\!\\!\\right)$ is positive definite, since $X$ and $X'$ are. Finally, $z \\geq 0$\nand $1 - z > 0$. It follows that $Z$ is positive definite.\n\nWrite $\\pv{v}$ and $\\pv{v}'$ for the weightings on $X$ and $X'$. Put\n\\[\n\\pv{w}\n=\n\\frac{1}{z\\left(\\sum_{i = 1}^m v_i + \\sum_{j = 1}^{n - m} v'_j\\right) \n+ (1 - z)}\n\\left(\\!\\!\n\\begin{array}{c}\nv_1 \\\\ \n\\vdots \\\\\nv_m \\\\\nv'_1 \\\\\n\\vdots \\\\\nv'_{n - m}\n\\end{array}\n\\!\\!\\right).\n\\]\nThe weightings $\\pv{v}$ and $\\pv{v'}$ are positive and $0 \\leq z \\leq 1$, so\n$\\pv{w}$ is positive. 
And it is routine to verify,\nusing~(\\ref{eq:ultra-inductive}), that $\\pv{w}$ is the weighting on $Z$.\n\\end{example}\n\n\\begin{example} \\lbl{eg:metric}\nTake a metric space with three points, $a_1, a_2, a_3$, and put $Z_{ij}\n= e^{-d(a_i, a_j)}$. This defines a $3 \\times 3$ similarity matrix $Z$ with\n$Z_{ij} < 1$ for all $i \\neq j$ and $Z_{ij} Z_{jk} \\leq Z_{ik}$ for all $i, j,\nk$. We will show that $Z$ is positive definite and that its unique weighting\nis positive. It follows that there is a unique maximizing distribution and\nthat the maximum diversity is $\\magn{Z}$. We give explicit expressions for\nboth.\n\nFirst, Sylvester's Criterion states that a symmetric real $n\\times n$ matrix\nis positive definite if and only if for all $m \\in \\{1, \\ldots, n\\}$, the\nupper-left $m \\times m$ submatrix has positive determinant. In this case:\n\\begin{itemize}\n\\item the upper-left $1 \\times 1$ matrix is $(1)$, which has determinant $1$\n\\item the upper-left $2 \\times 2$ matrix is $\\left(\\!\\! \\begin{array}{cc} 1\n&Z_{12}\\\\ Z_{12} &1 \\end{array} \\!\\!\\right)$, which has determinant $1 -\nZ_{12}^2 > 0$\n\\item the upper-left $3 \\times 3$ matrix is $Z$ itself, and\n\\begin{eqnarray*}\n\\det Z &= &\n1 - (Z_{12}^2 + Z_{23}^2 + Z_{31}^2) + 2 Z_{12} Z_{23} Z_{31} \\\\\n &= &\n(1 - Z_{12})(1 - Z_{23})(1 - Z_{31}) \n+\n(1 - Z_{12})(Z_{12} - Z_{13} Z_{32}) \\\\\n & &\n{}+\n(1 - Z_{23})(Z_{23} - Z_{21} Z_{13})\n+\n(1 - Z_{31})(Z_{31} - Z_{32} Z_{21}) \\\\\n &> &\n0.\n\\end{eqnarray*}\n\\end{itemize}\nHence $Z$ is positive definite. Next, it is easily checked that the\nunique weighting $\\pv{w}$ is given by $\\pv{w} = \\pv{v}\/\\det Z$, where, for\ninstance, \n\\begin{eqnarray*}\nv_1 &= &\n1 - (Z_{12} + Z_{13}) + (Z_{13}Z_{32} + Z_{12}Z_{23}) - Z_{23}^2 \\\\\n &= &\n(1 - Z_{12})(1 - Z_{23})(1 - Z_{31})\n+\n(1 - Z_{23})(Z_{23} - Z_{21}Z_{13}) \\\\\n &> &\n0.\n\\end{eqnarray*}\nSince $\\det Z > 0$, the weighting $\\pv{w}$ is positive. \n\nThe maximum diversity is $\\magn{Z} = w_1 + w_2 + w_3$, which is\n\\[\n1 +\n\\frac{2(1 - Z_{12})(1 - Z_{23})(1 - Z_{31})}%\n{1 - (Z_{12}^2 + Z_{23}^2 + Z_{31}^2) + 2 Z_{12}Z_{23}Z_{31}}.\n\\]\n(This expression was pointed out to me by Simon Willerton.) The unique\nmaximizing distribution $\\pv{p}$ is given by $\\pv{p} = \\pd{\\pv{w}} =\n\\pv{w}\/\\magn{Z} = \\pv{v}\/(\\magn{Z}\\det Z)$, so\n\\[\np_1\n=\n\\frac{1 - (Z_{12} + Z_{13}) + (Z_{13}Z_{32} + Z_{12}Z_{23}) - Z_{23}^2}%\n{1 - (Z_{12}^2 + Z_{23}^2 + Z_{31}^2) + 2 Z_{12}Z_{23}Z_{31}\n+ 2(1 - Z_{12})(1 - Z_{23})(1 - Z_{31})}\n\\]\nand similarly for $p_2$ and $p_3$.\n\\end{example}\n\nExample~\\ref{eg:graphs} (graphs) shows that maximizing distributions\nsometimes contain some zero entries. In ecological terms this means that\ndiversity is sometimes maximized by completely eradicating certain species,\nwhich may be contrary to acceptable practice. For this and other reasons, we\nmight seek conditions under which some or all of the maximizing distributions\n$\\pv{p}$ satisfy $p_i > 0$ for all $i$.\n\n\\begin{cor} \\lbl{cor:close-to-id}\nLet $Z$ be a similarity matrix such that $Z_{ij} < 1\/(n - 1)$ for all $i \\neq\nj$. Then $\\Dmax{Z} = \\magn{Z}$. Moreover, $Z$ has a unique weighting\n$\\pv{w}$, the unique maximizing distribution is $\\pv{p} = \\pv{w}\/\\magn{Z}$,\nand $p_i > 0$ for all $i$.\n\\end{cor}\n\n\\begin{proof}\nBy Lemma~\\ref{lemma:scattered}, $Z$ is positive definite and its unique\nweighting is positive. 
Then apply Corollary~\\ref{cor:pos-def-div}.\n\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThe extra hypothesis on $Z$ is strong, possibly too strong for the corollary\nto be of any use in ecology: when $n$ is large, it forces $Z$ to be very close\nto the identity matrix. On the other hand, the ecological interpretation of\nCorollary~\\ref{cor:close-to-id} is clear: if we treat every species as highly\ndissimilar to every other, the distribution that\nmaximizes diversity conserves all of them.\n\n\n\n\\passage{Acknowledgements} Parts of this work were done during visits to the\nCentre de Recerca Matem\\`atica, Barcelona, and the School of Mathematics and\nStatistics at the University of Sheffield. I am grateful to both institutions\nfor their hospitality, and to Eugenia Cheng, David Jordan and Joachim Kock for\nmaking my visits both possible and pleasant. I thank Christina Cobbold,\nAndr\\'e Joyal and Simon Willerton for useful conversations, and Ji\\v{r}\\'\\i\\\nVelebil for tracking down a reference.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Non-ideal fabrication in fixed frequency\nqubits}\n\nLattices of coupled qubits are proposed to enable error-correction algorithms such as the `surface code' \\cite{Gambetta2017Build_s,fowler2012surface_s}. Qubits are arranged into a square grid with alternate qubits serving either data or error-checking functions. Bus-couplers provide interaction among adjacent qubits, with up to four qubits attached to each bus. A seven qubit-lattice thereby comprises 12 qubit pairs and a seventeen-qubit lattice comprises 34 pairs. However, single junction transmon qubits are challenging to fabricate at precisely set frequencies. Among dozens of identically-fabricated qubits, the frequencies typically have a spread of $\\sigma_f \\sim 200$ MHz \\cite{privcommsrosenblatt_s}. Such imprecision will inhibit functioning of qubit lattices. Considering a lattice of tansmon qubits of frequency $\\sim 5$ GHz and anharmonicity $\\delta\/2\\pi = -340$ MHz, and considering cross-resonance gate operations, we can estimate the number of undesired interactions among these pairs. Studies of the cross-resonance gate \\citep{divincenzo2013quantum_s} indicate that these gates will be dominated by undesirable interactions if the frequency separation $|\\Delta|$ between adjacent qubits is equal to zero, a degeneracy between $f_{01}$ of the qubits; equal to $-\\delta\/2\\pi$, a degeneracy between $f_{01}$ of one qubit and $f_{12}$ of the next; or if $|\\Delta| > -\\delta\/2\\pi$ (weak interaction leading to very slow gate operation). In a simple Monte Carlo model, we assign to all points in the lattice a random qubit frequency from a gaussian distribution around 5 GHz, and count the number of degenerate or weak-interaction pairs, taking a range of $\\pm (\\delta\/2\\pi)\/20$, or $\\pm 17$ MHz around each degeneracy. 
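
As an illustration of this counting, a simplified Monte Carlo sketch in Python is given below (added for this exposition and not the code behind Table~\\ref{table:MCModelCollisions}; it draws the two frequencies of each pair independently, which by linearity of expectation leaves the mean collision count unchanged, and it is meant to reproduce the qualitative trend rather than the exact tabulated values):

\\begin{verbatim}
import numpy as np

def mean_collisions(n_pairs, sigma_f_ghz, delta_ghz=0.340,
                    window_ghz=0.017, f_mean_ghz=5.0,
                    n_trials=20000, seed=0):
    # Mean number of 'collided' pairs for i.i.d. Gaussian qubit frequencies.
    rng = np.random.default_rng(seed)
    f = rng.normal(f_mean_ghz, sigma_f_ghz, size=(n_trials, n_pairs, 2))
    d = np.abs(f[..., 0] - f[..., 1])                 # |Delta| for every pair
    collided = d < window_ghz                         # f01-f01 degeneracy
    collided |= np.abs(d - delta_ghz) < window_ghz    # f01-f12 degeneracy
    collided |= d > delta_ghz                         # too far detuned: weak gate
    return collided.sum(axis=1).mean()

# 7-qubit lattice (12 pairs) with sigma_f = |delta/2pi|/2:
# mean_collisions(12, 0.170)
\\end{verbatim}
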
The results appearing in Table \\ref{table:MCModelCollisions} make it evident that the likelihood of frequency collisions increases as the lattice grows.\n\n\\begin{table}[h]\n\t\\centering\n\t\\begin{tabular}{c|c|c}\n\t\tNumber & & Mean Number \\\\\n\t\tof QBs & $\\sigma_f$ & of Collisions \\\\ \\hline\n\t\t7 & $\\frac{1}{2}|\\delta\/2\\pi|$ & 2.3 \\\\ \n\t\t7 & $\\frac{3}{4}|\\delta\/2\\pi|$ & 3.6 \\\\ \n\t\t17 & $\\frac{1}{2}|\\delta\/2\\pi|$ & 6.6 \\\\ \n\t\t17 & $\\frac{3}{4}|\\delta\/2\\pi|$ & 10.6 \n\t\\end{tabular} \n\\caption{\\label{table:MCModelCollisions} Frequency-collision modeling in lattices of transmon qubits employing cross-resonance gates. Predicted number of bad gate pairs (`frequency collisions') in two different lattice sizes. 7-qubit lattice has 12 pairs and 17-qubit lattice has 34 pairs. Mean of distribution is 5 GHz and two different distribution widths $\\sigma_f$ are considered.}\n\\end{table}\n\n\n\\section{Device design and fabrication}\n\nThe device for sample A, shown in Fig. \\ref{fig:1}, has all\neight qubit\/cavities capacitively coupled to a common feedline through\nwhich individual qubit readout was achieved via a single microwave\ndrive and output line. Sample B, shown in Fig. \\ref{fig:1}, employs a design where all qubits have separate drive and readout microwave lines. As in Ref. \\cite{Takita2016Dem_s} and \\cite{ibmquantumexp_s}, this sample is designed as a lattice of coupled qubits for use in multi-qubit gate operations, although no such operations are presented in this paper. Coplanar-waveguide buses, half-wave resonant at $\\sim$6 GHz, span the space between the qubits. Each bus resonator couples together three adjacent qubits. As compared to Ref. \\cite{Takita2016Dem_s}, here the lattice comprises eight qubits and four buses instead of the seven qubits and two buses found in Ref. \\cite{Takita2016Dem_s}.\n\nBoth samples were fabricated using standard lithographic processing\nto pattern the coplanar waveguides, ground plane, and qubit capacitors\nfrom a sputtered Nb film on a Si substrate. In sample A the Nb films are 100 nm thick. In sample B they are 200 nm. The qubits were similar\nin design to \\citep{Sheldon_Procedure_2016_s,chow2014implementing_s,corcoles2015demonstration_s,Takita2016Dem_s} \nwith large transmon capacitor pads bridged by electron-beam patterned Al traces used to\ncreate Josephson junctions. Conventional shadow-evaporated double-angle Al-AlOx-Al was used to fabricate the junctions. Transmon capacitor pads in samples A and B have different size and separation, necessitating different SQUID loop geometries, as shown in Fig. \\ref{fig:1}. The SQUID loops for qubits on sample A were created by bridging the transmon capacitor pads with\ntwo separate $0.6-\\mu{\\rm m}$ wide Al traces and Josephson junctions, with the\nasymmetry in the junctions fabricated by increasing the width of one\njunction with respect to the other, while keeping the overlap fixed at $0.2\\, \\mu{\\rm m}$. The sum of the large and small junction areas was designed to be constant, independent of $\\alpha$.\nQubits on sample A had capacitor pads separated by $20\\, \\mu{\\rm m}$ and\nthe Al electrodes separated such that the SQUID loop area was roughly\n$400\\, \\mu{\\rm m^{2}}$. In sample B, the Nb capacitor pads were separated by $70\\, \\mu{\\rm m}$. The SQUID comprises a $\\sim 20 \\times 20\\, \\mu{\\rm m}^2$ Al loop of 2 $\\mu$m trace width, placed midway between the capacitor pads and joined to Nb leads extending from the pads. 
In sample B, the large and small junction differ in both width and overlap. In this sample, all SQUIDs of a given $\\alpha$ were fabricated identically but SQUIDs of different $\\alpha$ had different total junction area. \n\n\\begin{figure}[!b]\n\\includegraphics[width=1.0\\columnwidth]{Device_Image_Supp2}\n\n\\caption{(color online) Optical micrographs of samples including higher magnification images of qubits and SQUID loops. Sample B image is a chip of identical design to the ones used for measurements. In sample B image, labels indicate each qubit and its individual readout resonators, while unlabeled resonators are bus resonators. \n\\label{fig:1}}\n\\end{figure}\n\n\\section{Measurement setup}\n\nMeasurements of sample A were completed in a dilution\nrefrigerator (DR) at Syracuse University (SU), while sample B was measured in a DR at the IBM TJ Watson Research Center. Both samples were wire-bonded into holders designed to suppress microwave\nchip modes. Each sample was mounted to the mixing chamber of its respective DR and placed inside a cryoperm magnetic shield, thermally anchored at the mixing chamber. Both SU and IBM DRs had room-temperature $\\mu$-metal shields. Measurements for both samples were performed using standard cQED readout techniques \\citep{Reed_High_2010_s}.\n\nFor sample A, room-temperature microwave signals were supplied through attenuated coaxial\nlines, thermalized at each stage of the DR and filtered using 10\nGHz low pass filters (K\\&L) thermalized at the mixing chamber. We used\na total of 70 dB of attenuation on the drive-lines: 20 dB at $4\\, {\\rm K}$, 20\ndB at $0.7\\, {\\rm K}$ and 30 dB at the mixing chamber, with a base temperature of $30\\,{\\rm mK}$. Output measurement signals\nfrom the sample pass through another 10 GHz low-pass filter, a microwave\nswitch, and two magnetically shielded cryogenic isolators, all thermally anchored\nto the mixing chamber. In the case of sample A, the signal was amplified\nby a low-noise HEMT at $4\\, {\\rm K}$, passing through a Nb\/Nb superconducting\ncoaxial cable between the mixing chamber and $4\\, {\\rm K}$ stage. The signal was amplified further\nat room temperature before being mixed down to 10 MHz and digitized. The eight resonators, coupled to each qubit on sample A, had measured frequencies that ranged from $6.975 - 7.136\\, {\\rm GHz}$, separated by $20 - 25\\, {\\rm MHz}$. $\\kappa\/{2\\pi}$ linewidths for these resonators were on the order of a few hundreds of kHz. \n\nFigure \\ref{fig:1} shows the layout of the sample B chip. The $\\alpha = 15$ asymmetric-SQUID transmon reported in the paper was located at position $Q_7$. It was read out through a coplanar waveguide resonator of frequency 6.559 GHz and linewidth $\\sim$ 300 kHz, and was found to have $f_{01}^{max} = 5.387$ GHz. The fixed-frequency transmon (5.346 GHz) at position $Q_2$ was read out through a 6.418 GHz resonator having linewidth $\\sim$ 300 kHz. Sample B qubits were measured via signal wiring similar to that presented in Refs. \\cite{Takita2016Dem_s,chow2014implementing_s,corcoles2015demonstration_s,sheldon2016characterizing_s}. Drive wiring included 10 dB of attenuation at 50 K, 10 dB at 4K, 6 dB at 0.7 K, 10 dB at 100 mK, and at the mixing-chamber plate 30 dB of attenuation plus a homemade `Eccosorb' low-pass filter. Drive signals entered a microwave circulator at the mixing plate. On one set of signal wiring, the 2nd port of the circulator passed directly to qubit $Q_7$. 
In another set of signal wiring, the second port of the circulator passed to several different qubits via a microwave switch. Signals reflected from the device passed back through the circulator to output and amplifier circuitry. Output circuitry comprised a low-pass Cu powder filter, followed by two cryogenic isolators in series, followed by an additional low-pass filter, followed by superconducting NbTi coaxial cable, followed by a low-noise HEMT amplifier at 4K and an additional low-noise amplifier at room temperature. Low-pass filters were intended to block signals above $\\sim$ 10 GHz. In the case of $Q_7$, additional amplification was afforded by a SLUG amplifier \\cite{HoverAPL2014_104_152601_s} mounted at the mixing stage, biased via two bias-tee networks and isolated from the sample by an additional cryogenic isolator. Output signals were mixed down to 5 MHz before being digitized and averaged. Mixing-plate thermometer indicated a temperature of $\\sim$ 15 to 20 mK during measurements. \n\nMagnetic flux was supplied to sample A via a 6-mm inner diameter superconducting\nwire coil placed $2\\,{\\rm mm}$ above the sample. A Stanford SRS SIM928 dc voltage source with a room-temperature $2\\,{\\rm k}\\Omega$ resistor in series supplied the bias current to the coil. The flux bias current passed through brass coaxial lines that were thermally anchored\nat each stage of the DR, with a $80~{\\rm MHz}$ $\\pi$-filter at 4K and a copper powder filter on the mixing chamber. In sample B, a similar wire-wound superconducting coil was mounted about 3 mm above the qubit chip and likewise driven from a SIM928 voltage source through a room-temperature $5\\,{\\rm k}\\Omega$ bias resistor. DC pair wiring (Cu above 4K within the fridge, NbTi below) was used to drive the coil. The coil had a self-inductance of 3.9 mH and mutual inductance to the SQUID loop of $\\sim$ 1 pH. The flux coil applied a dc flux through\nall qubits with the flux level being set just prior to qubit measurement\nand maintained at a constant level throughout the measurement. For each qubit, we measured $f_{01}$ as a function of coil current and fit this against Eq. (1) of our paper to enable scaling of $\\Phi_0$ and subtract any offset flux, as well as to determine $f_{01}^{max}$ and asymmetry $d$. We treat the sign of flux as arbitrary. \n\n\\section{Qubit Coherence}\n\nCoherence data for both samples was collected using an automated measurement algorithm. After applying a prescribed fixed flux, the system determined the qubit frequency from Ramsey fringe fitting, optimized $\\pi$ and $\\pi\/2$ pulses at this frequency, and measured coherence. $T_{2}^{*}$ measurements were completed at a frequency detuned from the qubit frequency, with the level of detuning optimized to provide a reasonable number of fringes for fitting. All raw coherence data was visually checked to confirm that a good quality measurement was achieved. If the automated tuning routine failed to find the frequency or properly scale the $\\pi$ and $\\pi\/2$ pulses, this point was omitted from the dataset.\nFor sample A, three $T_1$ measurements were made at each flux point followed by three $T_2^*$ measurements. At each flux point, the reported $T_1$ and $T_2^*$ values and error bars comprise the mean and standard deviation of the three measurements. 
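
The per-point dephasing rate and its uncertainty, obtained as described in the remainder of this paragraph, can be computed with a short sketch along the following lines (illustrative only, not the analysis code used for the paper; the inputs are the per-flux-point means and standard deviations of $T_1$ and $T_2^*$ just described):

\\begin{verbatim}
import numpy as np

def dephasing_rate(t1, t1_err, t2s, t2s_err):
    # Gamma_phi = 1/T2* - 1/(2 T1), uncertainty propagated in quadrature
    # using d(Gamma)/dT2* = -1/T2*^2 and d(Gamma)/dT1 = 1/(2 T1^2).
    gamma_phi = 1.0 / t2s - 1.0 / (2.0 * t1)
    gamma_err = np.sqrt((t2s_err / t2s**2) ** 2 + (t1_err / (2.0 * t1**2)) ** 2)
    return gamma_phi, gamma_err

# e.g. dephasing_rate(40.0, 2.0, 30.0, 3.0) with times in microseconds
# returns the rate and its error bar in 1/microseconds.
\\end{verbatim}
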
The corresponding $\\Gamma_{\\phi}$ value is found from these mean values and its error bar is found by propagating the errors in $T_1$ and $T_2^*$ through via partial derivative and combining these in a quadrature sum. For sample B, at each flux point first $T_1$ was measured, then $T_2^*$, three times in succession. For this device the reported $T_1$ and $T_2^*$ values comprise the mean of the three measurements and the error bars are their standard deviation. Here the reported dephasing rate $\\Gamma_{\\phi}$ comprises the mean of the three values of $\\Gamma_{\\phi}=1\/T_{2}^{*}-1\/2T_{1}$ found from the three $T_1$, $T_2^*$ pairs, and the error bar is the standard deviation. \n\n\\begin{figure}\n\\includegraphics[width=0.75\\columnwidth]{T1_vs_Freq_SYR_IBM2}\n\n\\caption{$T_{1}$ vs. frequency measured for all qubits discussed in the main paper. Single points included for $T_{1}$ values measured for the fixed-frequency qubits. \n\\label{fig:2}}\n\n\\end{figure}\n\nFigure \\ref{fig:2} shows $T_{1}$ plotted versus qubit frequency, measured for the qubits discussed in our paper. We observe a trend\nof increasing $T_{1}$ with decreasing qubit frequency. In sample A, each qubit's quality factor $\\omega T_{1}$ is roughly constant, consistent with dielectric loss and a frequency-independent loss tangent, as observed in other tunable superconducting qubits \\citep{barends2013coherent_s}. On sample B, $T_{1}$ decreases by about 10 $\\mu$s from the low to high end of the frequency range, consistent with Purcell loss to the readout resonator. In addition, fine structure is occasionally observed in Fig.\n\\ref{fig:2} where $T_{1}$ drops sharply at specific frequencies. These localized features in the $T_{1}$ frequency dependence are observed\nfor all tunable qubits that we have measured. These features, similar to those observed by \\citep{barends2013coherent_s}, are attributed to frequencies where a qubit transition is resonant with a two-level system defect on or near the qubit. Additionally, on sample B, at a few frequency points inter-qubit coupling affects relaxation. Where the $Q_7$ qubit is nearly degenerate to $Q_6$ (at $\\sim$5.33 GHz) and to $Q_8$ (at $\\sim$5.22 GHz), coupling via the adjacent buses produces an avoided crossing in the energy spectrum. This effect is barely noticeable in both the frequency curve of Fig. 2 of our paper as well as the relaxation data in Fig. \\ref{eq:2} here.\n\n\\begin{figure}\n\\includegraphics[width=0.75\\columnwidth]{Ramsey_vs_Phi_All2}\n\n\\caption{$T_{2}^{*}$ vs. flux measured for the qubits discussed in the main paper. $T_{2}^{*}$ measured for the fixed-frequency qubits on both samples is included with dashed lines to help guide the eye.\n\\label{fig:3}}\n\n\\end{figure}\n\nFigure \\ref{fig:3} shows $T_{2}^{*}$ plotted versus flux, measured\nfor the qubits discussed in our paper.\nFor the tunable qubits on\nsample A, $T_{2}^{*}$ is greatest at the qubit sweet-spots and decreases\naway from these sweet spots as $D_{\\Phi}$ increases.\nIn the $\\alpha = 15$ tunable qubit on sample B, $T_{2}^{*}$ is nearly constant over the measured half flux quantum range. The small frequency dependence observed in $T_{2}^{*}$ in sample B is consistent with the observed variation of $T_{1}$ with frequency, leading to the frequency-independent dephasing rate observed for this qubit\nin Fig. 
3 of our paper.\n\n\\section{Relaxation Due to coupling to Flux Bias Line}\n\nWhile using two Josephson junctions to form a dc SQUID for the inductive element of a transmon allows its frequency to be tuned via magnetic flux, this opens up an additional channel for\nenergy relaxation via emission into the dissipative environment across the bias coil that is coupled to the qubit through a mutual inductance. This was first discussed by Koch et al \\citep{koch2007charge_s}. regarding\na near symmetrical split-junction transmon. We apply the same analysis\nhere to study the effect of increasing junction asymmetry on the qubit $T_{1}$ through this loss mechanism. For an asymmetric transmon, Koch et al. show in Eq. {(}2.17{)} of Ref. \\citep{koch2007charge_s} that the Josephson portion of the qubit Hamiltonian can be written in terms of a single phase variable with a shifted minimum that depends\nupon the qubit's asymmetry and the applied flux bias. \nBy linearizing this Hamiltonian about the static flux bias point for small noise amplitudes, Koch et al. compute the relaxation rate for a particular current noise power from the bias impedance coupled to the SQUID loop through a mutual inductance $M$.\nWe followed this same analysis for our qubit parameters, assuming harmonic oscillator wavefunctions for the qubit ground and excited state, and obtained the dependence of $T_{1}$ due to this mechanism as a function of bias flux.\nUsing our typical device\nparameters ($E_{J} = 20\\,{\\rm GHz}$, $E_{c} = 350\\,{\\rm MHz}$, $M = 2\\,{\\rm pH}$, $R =\n50~\\Omega$) we obtain the intrinsic loss for the asymmetries\ndiscussed in our paper, shown in Fig. \\ref{fig:4}. This analysis\nagrees with the results described in Ref. {[}4{]}. For a 10\\% junction asymmetry, this contribution results in a $T_{1}$ that varies between $25\\,{\\rm ms}$ and a few seconds. As the junction asymmetry is increased,\nthe minimum $T_{1}$ value, obtained at odd half-integer multiples of $\\Phi_{0}$, decreases slightly. However, even for our $\\alpha = 15$ qubit, the calculated value of $T_{1}$ due to this mechanism never falls below $10\\,{\\rm ms}$. Therefore, although increasing\njunction asymmetry does place an upper bound on $T_{1}$ of an asymmetric transmon, this level is two orders of magnitude larger than the measured $T_{1}$ in current state-of-the-art superconducting qubits due to other mechanisms.\n\n\\begin{figure}\n\n\\includegraphics[width=0.75\\columnwidth]{T1max_vs_phi}\n\n\\caption{Dependence of $T_{1}$ with flux for asymmetric transmons, calculated for the asymmetries discussed in the main paper, due to coupling to an external flux bias following the analysis of Koch et al \\citep{koch2007charge_s}. Though in the main paper our symmetric qubit was an $\\alpha = 1$, in this calculation we used $\\alpha = 1.1$ so that $T_{1}$ did not diverge at $\\Phi = 0$. \n\\label{fig:4}}\n\n\\end{figure}\n\nAlso in Ref. [4], Koch et al. described a second loss channel for a transmon related to coupling to the flux-bias line. In this case, the relaxation occurs due to the oscillatory current through the inductive element of the qubit -- independent of the presence of a SQUID loop -- coupling to the flux-bias line, described by an effective mutual inductance $M'$. This mutual vanishes when the Josephson element of the qubit and the bias line are arranged symmetrically. With a moderate coupling asymmetry for an on-chip bias line, Koch et al. 
estimate that the $T_{1}$ corresponding to this loss mechanism would be of the order of 70 ms. Because this mechanism does not directly involve the presence or absence of a SQUID loop for the inductive element, the asymmetry between junctions that we employ in our asymmetric transmons will not play any role here and this particular limit on $T_{1}$ should be no different from that for a conventional transmon. An additional potential relaxation channel may arise due to capacitive coupling to the flux-bias line, as discussed in Ref. \\cite{JohnsonDisserationYale2011_s}. However, this is expected to be negligible where a bobbin coil is used as in our experiments.\n\n\\section{Ramsey Decay Fitting}\n\nAs described in the main paper, our analysis of qubit dephasing rates used a purely exponential fit\nto all of the measured Ramsey decays. Here we discuss why this fitting approach is appropriate for all asymmetric qubits and a large portion of\nthe coherence data measured for the symmetric qubit.\n\nOf all the qubits measured in this study, the symmetric $\\alpha = 1$ qubit\nwas most impacted by flux noise away from the qubit sweet spot because of its large energy-band gradient. Therefore,\nto illustrate the impact that flux noise has upon the Ramsey decay\nenvelope we will consider the Ramsey measurements for this qubit on\nand off the sweet spot. Example measurements are shown at flux values of 0 and 0.3~$\\Phi_0$ in Fig. \\ref{fig:5}a and b, respectively. At each flux point,\nwe fit the Ramsey decay with both a purely exponential (Fig. \\ref{fig:5}a\nI) and a purely Gaussian form (Fig. \\ref{fig:5}a II); the residuals\nof each fit are included to compare the quality of fit in each case.\nAs has been discussed in the main paper, at the upper sweet-spot, where\n$D_{\\Phi} = 0$, non-flux dependent background-dephasing\nshould dominate and the Ramsey decay should be more readily fit using\nan exponential. Figure \\ref{fig:5}a shows that this is indeed the\ncase: the purely exponential fit provides a more precise fit to the\nRamsey decay, with the residuals to this fit being smaller over the entire range compared to those corresponding to the Gaussian fit. The Ramsey decay\nshown in Fig. \\ref{fig:5}b was measured at a point where $D_{\\Phi}$\nwas the maximum measured for the $\\alpha = 1$ qubit. Here, it is clear that\na purely Gaussian form results in a better fit with smaller residuals than an exponential envelope. This indicates that, at this flux point, the $\\alpha = 1$ qubit is heavily impacted by low-frequency flux noise, as a purely $1\/f$ dephasing source would result in a Gaussian envelope for the decay \\citep{ithier2005decoherence_s}. Although a purely Gaussian fit form is useful\nfor illustrating the impact that flux noise has upon the Ramsey decay\nform, it is not an optimal quantitative approach for investigating dephasing in these qubits. This is because tunable transmons dephase not only due to flux noise with a roughly $1\/f$ power spectrum, but also due to other noise sources with different non-$1\/f$ power spectra \\citep{sears2012photon_s,schuster2005ac_s,gambetta2006qubit_s}. These other noise sources generally result in an exponential dephasing envelope. Also, the Ramsey decay has an intrinsic loss component that is always exponential in nature. 
Therefore, to accurately fit decay due to dephasing in these qubits, we must account for these exponential decay envelopes in any fitting approach that is not purely exponential.\n\n\\begin{figure}\n\n\\includegraphics[width=1\\columnwidth]{Ramsey_Decay_Fit}\n\\caption{Ramsey decay envelopes measured for the $\\alpha = 1$ qubit at a) the sweet-spot $\\Phi=0$ and b) $\\Phi=0.3 \\Phi_{0}$ where $D_{\\Phi}$ was the largest value measured for this qubit. At each flux point, the Ramsey decay envelopes are fit with both a purely exponential {(}I{)} and Gaussian {(}II{)} fit form. Functions fitted to the measured data {(}blue open circles{)} are plotted as solid red lines.\n\\label{fig:5}}\n\n\\end{figure}\n\nTo account for the $T_{1}$ contribution to the Ramsey decay envelope in our non-exponential fitting, we take the average $T_{1}$ measured at each flux point\nand separate this from $T_{2}^{*}$ in the Ramsey fit function using\n$1\/T_{2}^{*} = 1\/T_{\\phi} + 1\/2T_{1}$. Therefore, instead\nof fitting a $T_{2}^{*}$ time, we fit $T_{\\phi}$\ndirectly. To fit the Ramsey using a Gaussian fit form, we square the\ndephasing exponent within the fitting function {[}Eq. {(}\\ref{eq:1}{)}{]}. We can go one step\nfurther by not forcing an explicit fit form to the dephasing exponent, but instead adding another fit parameter $\\gamma$ {[}Eq. {(}\\ref{eq:2}{)}{]}, which would be 1 for a pure exponential and 2 for a pure Gaussian. Although a fit that is not explicitly exponential or Gaussian is not motivated directly by a particular theoretical model, by fitting Ramsey decays with this free exponent\n$\\gamma$, we gain insight into the transition from flux-noise dominated dephasing at large $D_{\\Phi}$ to background dephasing near the sweet-spots. The two separate fit forms described above are given by the following decay functions:\n\n\\begin{equation}\nf_{Ramsey}(t)=A+B\\{\\cos{(\\omega t+\\delta)}\\exp{(-\\Gamma_{1}t\/2)}\\exp{[-(\\Gamma_{\\phi}t)^2]}\\}\\label{eq:1},\n\\end{equation}\n\\begin{equation}\nf_{Ramsey}(t)=A+B\\{\\cos{(\\omega t+\\delta)}\\exp{(-\\Gamma_{1}t\/2)}\\exp{[-(\\Gamma_{\\phi}t)^\\gamma]}\\}\\label{eq:2},\n\\end{equation}\nwhere A is an offset and B a magnitude constant used to adjust the arbitrarily scaled measured signal, $\\omega$ is the detuning from the qubit frequency with a phase offset $\\delta$, $\\Gamma_{1}$ is the intrinsic loss rate {(}$1\/T_{1}${)} and $\\Gamma_{\\phi}$ is the dephasing rate. Here, A, B, $\\omega$, $\\delta$, $\\Gamma_{\\phi}$, and $\\gamma$ are fit parameters. All other components are fixed with values determined using the methods discussed above.\n\n\\begin{figure}\n\n\\includegraphics[width=0.75\\columnwidth]{1to1_gamma_vs_Phi}\n\\caption{$\\gamma$ vs flux extracted from fits to the Ramsey measurements on the $\\alpha = 1$ qubit using Eq. \\ref{eq:2}.\n\\label{fig:6}}\n\n\\end{figure}\n\nThis transition is illustrated in Fig. \\ref{fig:6}, where we plot $\\gamma$\nvs. flux extracted from fits to the Ramsey measurements on the $\\alpha = 1$\nqubit using Eq. {(}\\ref{eq:2}{)}. In the flux region between $\\pm 0.1\\,\\Phi_{0}$, $\\gamma \\approx 1$,\nindicating that the dephasing envelope is primarily exponential, and thus the dominant dephasing noise affecting the qubits here does not have a $1\/f$ spectrum. At flux bias points further away from the sweet-spot, $\\gamma$ shifts towards 2 as $D_{\\Phi}$ increases and appears\nto level off close to this value at flux biases above $\\sim 0.2\\,\\Phi_{0}$. 
Thus, in this bias regime, the dephasing envelope is primarily Gaussian and the dephasing noise influencing the qubits is predominantly low-frequency in nature with a $1\/f$-like spectrum \\citep{ithier2005decoherence_s,yoshihara2006decoherence_s}. \n\nWe can also visualize this variable-exponent fit by plotting $\\gamma$ vs. $D_{\\Phi}$ rather than $\\Phi$, again, for the $\\alpha = 1$ qubit {(}Fig. \\ref{fig:7}{)}.\nIn this plot, $\\gamma$ approaches 2 for $D_{\\Phi}$ values around $6~\\rm GHz\/\\Phi_{0}$. We have also included vertical dashed lines on Fig. \\ref{fig:7} indicating the maximum $D_{\\Phi}$ values reached by the less tunable $\\alpha = 4$ and 7 qubits on sample A. Below these $D_{\\Phi}$ levels, $\\gamma$ is close to 1, implying that the decay envelope is nearly exponential, and thus justifying our use of an exponential decay for fitting the asymmetric qubits in the main paper.\n\n\\begin{figure}\n\\includegraphics[width=0.75\\columnwidth]{1to1_gamma_vs_Grad}\n\n\\caption{$\\gamma$ vs $D_{\\Phi}$ extracted from fits to the Ramsey measurements on the $\\alpha = 1$ qubit using Eq. \\ref{eq:2}. Dashed lines included to indicate the maximum $D_{\\Phi}$ reached by the $\\alpha = 7$ {(}black dashed line{)} and $\\alpha = 4$ {(}blue dot-dashed line{)} qubits measured on sample A.\n\\label{fig:7}}\n\n\\end{figure}\n\nAs yet another approach to fitting the Ramsey decay envelopes, we can employ a function that separates the exponential decay due to background dephasing from the Gaussian form due to dephasing from noise with a low-frequency tail.\nFor this fit, along with separating\nout the $T_{1}$ contribution to the Ramsey decay envelope, we also determine the\nnon-flux dependent background-dephasing rate at the sweet-spot,\nthen use this rate as a fixed parameter in the fitting of our Ramsey measurements\nat any given flux point. We now have a composite Ramsey fit form that\nhas three components: a $T_{1}$ contribution and background dephasing component that are purely exponential and fixed by the fitting of separate measurements, plus a Gaussian component to capture the dephasing due to noise with a $1\/f$ spectrum. This leads to a composite fitting function of the form:\n\n\\begin{equation}\nf_{Ramsey}(t)=A+B\\{\\cos{(\\omega t+\\delta)}\\exp{(-\\Gamma_{1}t\/2)}\\exp{(-\\Gamma_{\\phi ,bkg}t)}\\exp{[-(\\Gamma_{\\phi}t)^2]}\\}\\label{eq:3},\n\\end{equation}\nwhere A is an offset and B a magnitude constant used to adjust the arbitrarily scaled measured signal, $\\omega$ is the detuning from the qubit frequency with a phase offset $\\delta$, $\\Gamma_{1}$ is the intrinsic loss rate {(}$1\/T_{1}${)}, $\\Gamma_{\\phi ,bkg}$ is the background dephasing rate measured at $D_{\\Phi}=0$ and $\\Gamma_{\\phi}$ is the fitted dephasing rate. Here, A, B, $\\omega$, $\\delta$, and $\\Gamma_{\\phi}$ are fit parameters. All other components are fixed with values determined using methods discussed above. Though this fit form cleanly separates the different components of the dephasing decay, it has one key deficiency: it assumes that the background dephasing rate is frequency independent, which is not necessarily justified, as the background dephasing mechanism may also vary with frequency. To calculate the total dephasing rate using this fit form, we add the constant background dephasing to the fitted $\\Gamma_{\\phi}$. \n\n\\begin{figure}\n\n\\includegraphics[width=0.75\\columnwidth]{1to1_RatevsGrad_4type_MAIN}\n\\caption{$\\Gamma_{\\phi}$ vs. 
$D_{\\Phi}$ calculated for the $\\alpha = 1$ qubit using the exponential,\nGaussian {[}Eq. {(}\\ref{eq:1}{)}{]}, $\\gamma$-exponent {[}Eq. {(}\\ref{eq:2}{)}{]}, and composite {[}Eq. {(}\\ref{eq:3}{)}{]} fitting forms.\n\\label{fig:8}}\n\\end{figure}\n\nTo understand how the explicit fitting form impacts the dephasing rate, in Fig. \\ref{fig:8} we plot $\\Gamma_{\\phi}$ vs. $D_{\\Phi}$ calculated for\nthe $\\alpha = 1$ qubit using the four different fitting forms: exponential, Gaussian {[}Eq. {(}\\ref{eq:1}{)}{]}, $\\gamma$-exponent {[}Eq. {(}\\ref{eq:2}{)}{]}, and composite {[}Eq. {(}\\ref{eq:3}{)}{]}. We first note that any differences in the rate of dephasing calculated at each point using the various fit methods\nare subtle and the fits are reasonably consistent with one another within the fit error bars and scatter. We do observe, though, that a purely exponential fit results in\na dephasing rate that is slightly higher than the values from the Gaussian fits for all flux points, resulting in the largest slope and thus the highest effective flux-noise level. Therefore, we conclude that forcing a purely exponential fit to the Ramsey decay envelopes measured for qubits that are strongly influenced\nby $1\/f$ flux noise simply puts an upper bound on the absolute flux\nnoise strength. The $\\gamma$-exponent fitting approach provides a dephasing\nrate that agrees well with that extracted from the exponential fit\nform at low $D_{\\Phi}$ values where background-dephasing\nprocesses dominate. However, at higher $D_{\\Phi}$ values\nwhere the qubit is heavily impacted by $1\/f$ flux noise, the $\\gamma$-exponent\nfit provides better agreement with the Gaussian-fitted dephasing rate.\n\nThe composite fit is rigidly fixed along the $\\Gamma_{\\phi}$ axis by the value chosen\nto match the background dephasing rate, in this case chosen to match\nthe rate observed at the lowest $D_{\\Phi}$ for the pure\nexponential fit. For this reason, direct comparisons between this\nfit and the others at individual flux points are more difficult. Despite all of these potential issues, the slope of $\\Gamma_{\\phi}$ vs. $D_{\\Phi}$ is independent of the chosen background-dephasing\nrate. Therefore, this composite fit can be used to calculate a flux-noise level for this $\\alpha = 1$ qubit that takes into account both the exponential\nnature of non-flux dependent dephasing and the Gaussian nature of\n$1\/f$ flux-noise decay. Using the same methods outlined in our paper, where we \nspecified $\\Gamma_{\\phi}=2\\pi\\sqrt{A_{\\Phi}|\\ln{(2\\pi f_{IR}t)}|}D_{\\Phi}$, following the approach described in Ref.~\\citep{ithier2005decoherence_s}, we use the slope of this composite fit\nto extract a $1\/f$ flux noise level of $A_{\\Phi}^{1\/2}=1.3~\\pm~0.2~\\mu \\Phi_0$. \nThis $\\sim10\\%$ reduction in the extracted flux-noise level for the $\\alpha = 1$ qubit compared to the purely exponential fit {(}$A_{\\Phi}^{1\/2}~=~1.4~\\pm~0.2~\\mu \\Phi_0${)} brings it closer to the flux-noise level extracted from the fits to the measurements on the $\\alpha = 7$ and 4 qubits: $1.3~\\pm~0.2~\\mu \\Phi_0$ and $1.2~\\pm~0.2~\\mu \\Phi_0$, respectively. The Ramsey measurements for these qubits were fit using a purely exponential fit form. 
It is important to note, though, that the $\\sim10\\%$ reduction in the flux-noise level extracted from the composite fit for the $\\alpha = 1$ qubit is within the errors associated with our flux-noise calculations.\n\nTo conclude this fitting study, we have shown that:\n\\begin{enumerate}\n\\item The $\\alpha = 1$ qubit in this study has a Ramsey decay envelope that is more Gaussian\nin nature at high $D_{\\Phi}$ values where the dephasing of this qubit is strongly influenced by low-frequency flux noise.\n\\item Though we have discussed different fitting approaches that better\nmodel the Ramsey decay envelope of qubits influenced by $1\/f$ flux-noise,\nusing a purely exponential decay form for the Ramsey decay simply\nputs an upper bound on the extracted flux noise strength. Also, the value of the flux-noise level and the dephasing rates are comparable to those we obtained with the various other fitting approaches.\n\\item Using a Ramsey fit function that takes into account both the exponential\nnature of the $T_{1}$ contribution to the decay envelope and non-flux dependent dephasing, as well as the\nGaussian nature of dephasing due to $1\/f$ flux noise, allows us to calculate\na flux noise level for the $\\alpha = 1$ qubit that agrees well with the other,\nasymmetric qubits on the same sample. This is expected, as qubits of the same geometry on the same chip should experience similar flux noise \\citep{sendelbach2008magnetism_s}.\n\\end{enumerate}\n\n\\section{Dephasing Rate Discussion}\n\nIn Fig. \\ref{fig:9} we present dephasing rates for several additional qubits, plotted against $D_{\\Phi}$. These qubits were similar to those in our paper, but were prepared on additional chips and measured during additional cooldowns of our cryostats. These data are not included in our paper for reasons of clarity and consistency. However, they are presented here to support the observations found in this study across all qubits measured in both of our labs. \n\n\\begin{figure}[!b]\n\n\\includegraphics[width=1.0\\columnwidth]{RatevsGrad_SUP2}\n\\caption{$\\Gamma_{\\phi}$ vs $D_{\\Phi}$ for qubits measured during this study that were not included in the main paper. $\\Gamma_{\\phi}$ for fixed-frequency qubits included as dashed lines. Type A\/B qubits were similar in design to those on sample A\/B, measured using similar methods and device designs as those described for the corresponding sample type.\n\\label{fig:9}}\n\\end{figure}\n\nThe first observation we make from Fig. \\ref{fig:9} is that a spread in background dephasing rates is measured across both fixed-frequency and tunable qubits. As discussed in our paper, these subtle variations in qubit dephasing rate are not unexpected and are commonly observed in multi-qubit devices \\citep{corcoles2015demonstration_s,chow2014implementing_s,Takita2016Dem_s}. While these variations in dephasing rate make the figure somewhat challenging to interpret, we can still draw the same conclusions for these data as those from our main paper. We still observe that the dephasing rate due to flux-noise increases linearly with $D_{\\Phi}$ for the lower asymmetry qubits. Again, at lower $D_{\\Phi}$ values, below $\\sim 1\\,{\\rm GHz}\/\\Phi_0$, the rate of dephasing is constant within the experimental spread for all qubits. Here, it is important to note that, for several of the qubits shown here and those discussed in our paper, there are specific flux bias points for each qubit where the dephasing rate is anomalously high. 
These points almost always coincide with places where $T_{1}$ drops sharply at specific frequencies, presumably due to localized coupling to defects in these qubits. Again, this sharp frequency dependence in $T_{1}$ is not unusual for tunable superconducting qubits and is consistent with what others have observed \\citep{barends2013coherent_s}. \n\nThe relatively flux-independent dephasing rate at low $D_{\\Phi}$ is particularly apparent in the 9:1 qubits we measured. Several of these qubits exhibited the lowest background dephasing rates we observed in our study, between 20 and 40~kHz. These dephasing rates are comparable to those of current state-of-the-art superconducting qubits \\cite{Takita2016Dem_s}. No fixed-frequency qubits were included on the same chips as these 9:1 asymmetric transmons, which prevents us from making a direct comparison with non-flux-noise-driven background dephasing rates as is done in the main paper. Nonetheless, for these 9:1 qubits, we can clearly see that the dephasing rate is essentially flux independent below $\\sim 1\\,{\\rm GHz}\/\\Phi_0$ even at these low background dephasing levels. This reinforces our statement that asymmetric qubits with a useful level of tunability can be incorporated into future fault-tolerant superconducting qubit devices, significantly aiding scalability in these systems. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nQuantum spin systems with energy gaps have attracted considerable attention\nbecause of their rich variety of interesting phenomena.\nIn these systems, the ground state is a nonmagnetic singlet state,\nand there exists a finite energy-gap between the ground and the excited magnetic states.\nAbove a certain magnetic field $H_{\\rm c}$, the energy of one of the magnetic excited states\nbecomes lower than that of the singlet state,\nand a nonmagnetic-magnetic crossover occurs.\nRecently, special attention has been paid to the magnetic transition\nthat develops just above $H_{\\rm c}$ in these spin-gap systems.~\\cite{Rice}\nA number of novel phenomena have been reported in these fields;\nfor example, magnetic ordering of TlCuCl$_3$ in fields is interpreted as\nthe Bose-Einstein condensation (BEC) of magnons.~\\cite{Nikuni,Oosawa}\nFor the two dimensional dimer system SrCu$_2$(BO$_3$)$_2$,\nsuperlattice formation of localized triplets is observed.~\\cite{Kodama}\n\nThe Haldane systems, i.e., quasi-one-dimensional Heisenberg antiferromagnets with integer spins,\nare also among the most extensively studied spin-gap systems.\nFor this class, unfortunately, field-induced magnetic ordering \nhas hardly been observed experimentally.\nThe archetypal Haldane-system, Ni(C$_2$H$_8$N$_2$)$_2$NO$_2$(ClO$_4$) (NENP),~\\cite{Renard}\nshows no evidence of field-induced ordering down to 0.2 K in fields up to 13 T.~\\cite{Kobayashi92}\nInstead, the existence of an energy gap was revealed even at $H_{\\rm c}$.~\\cite{Kobayashi92}\nThis is explained by the existence of a staggered field on Ni sites,\nwhich arises because the principal axis of the $g$ tensor in NENP tilts\nalternately.~\\cite{Chiba,Fujiwara}\nThis fact results in a slow crossover from the nonmagnetic to the magnetically polarized state in NENP,\npreventing a field-induced phase transition.\n\nSo far, field-induced ordering in Haldane systems has been\nreported only for two cases: in Ni(C$_5$H$_{14}$N$_2$)$_2$N$_3$(PF$_6$)\nand Ni(C$_5$H$_{14}$N$_2$)$_2$N$_3$(ClO$_4$), abbreviated NDMAP and NDMAZ, respectively.\nField-induced transitions in these systems 
were demonstrated \nby specific heat~\\cite{Honda97,Honda98,Honda01,Kobayashi01}\nand neutron diffraction experiments.~\\cite{Chen,Zheludev01E}\nIn the ordered state, interestingly, unusual spin excitations are observed.\nESR~\\cite{Hagiwara03} and inelastic neutron-scattering experiments~\\cite{Zheludev03} on NDMAP\nhave revealed the existence of three distinct excitations in the ordered phase.\nThis feature is quite different from that of a conventional N\\'{e}el state,\nwhere the dominant excitations are the spin-wave modes.~\\cite{Zheludev02A}\nThe field-induced ordered phase in Haldane systems is thus expected to exhibit\nvarious kinds of novel physics, if more examples become available.\n\nHere, the compound PbNi$_2$V$_2$O$_8$ is another candidate Haldane\nsystem in which field-induced ordering can be observed experimentally.\nPbNi$_2$V$_2$O$_8$ has a tetragonal crystal structure with Ni$^{2+}$ ($S=1$) ions \nforming a chain along the $c$-axis.\nMagnetic susceptibility, high-field magnetization, and inelastic neutron scattering\nexperiments were performed and their results consistently suggest that this system\nis a Haldane-gap system.~\\cite{Uchiyama}\nThe spin-gap closes at $H^{\\parallel}_{\\rm c} = 14$ T and $H^{\\perp}_{\\rm c} = 19$ T,~\\cite{Uchiyama}\nwhere $H^{\\parallel}_{\\rm c}$ and $H^{\\perp}_{\\rm c}$ are the critical fields\napplied parallel and perpendicular to the chain ($c$-axis), respectively.\nThese values of $H_{\\rm c}$ are within the experimentally accessible range.\nMoreover, PbNi$_2$V$_2$O$_8$ is reported to exhibit\nan impurity-induced magnetic transition around 3 K.~\\cite{Uchiyama}\nThis transition was found to be a long-range magnetic ordering \nby neutron diffraction~\\cite{Lappas}\nas well as specific heat measurements~\\cite{Masuda}.\nThese facts suggest a relatively large interchain coupling $J_1$.\nIn fact, the $D-J_1$ plot~\\cite{Sakai90} (Sakai-Takahashi diagram) for this compound,\nwhere $D$ is the single-ion anisotropy, \nsuggests that PbNi$_2$V$_2$O$_8$ is in the spin-liquid (disordered) regime\nbut very close to the long-range ordered regime.~\\cite{Zheludev00} \nHence, one can expect that applying fields beyond $H_{\\rm c}$\nwill result in magnetic ordering.\nIn the present paper, we have investigated the magnetic properties of PbNi$_2$V$_2$O$_8$\nat $H > H_{\\rm c}$ using temperature-dependent measurements of magnetization in static fields\nup to 30 T,\nand have observed indications of magnetic ordering above $H_{\\rm c}$.\n\n\\section{Experimental}\nA field-oriented powder sample of PbNi$_2$V$_2$O$_8$ was prepared \nas in the first report,~\\cite{Uchiyama}\nsince single-crystalline samples are not yet available.\nThe powder sample of PbNi$_2$V$_2$O$_8$ was synthesized by a solid-state reaction\nfrom PbO (99.999\\% pure), NiO (99.99\\%) and V$_2$O$_5$ (99.99\\%).\nThey were mixed and heated in air, \nfirstly at 600$^\\circ$C and subsequently at 750$^\\circ$C for several days \nwith intermittent grindings. 
\n\nThe powder X-ray diffraction (XRD) pattern agrees well with the calculated pattern\nbased on the structure refined by the neutron diffraction experiments,~\\cite{Lappas,Comment}\nand no second phase was detected.\nThe powder was aligned by a magnetic field (6 T) in Stycast.\nThe orientation was checked by the (004) XRD peak.\nThe result confirmed that the $c$-axis aligns parallel to the magnetic field,\nas reported previously.~\\cite{Uchiyama}\nIn the following, we refer to magnetization measured under fields\nparallel to the $c$-axis as $M^{\\parallel}$, and\nto that under fields perpendicular to the $c$-axis as $M^{\\perp}$.\n\nMagnetization was measured by an extraction method.\nMagnetic fields up to 15 T were generated by a superconducting magnet.\nFields higher than 15 T were generated by a hybrid magnet at the Tsukuba\nMagnet Laboratory.\nFor the measurements of magnetization as a function of magnetic field,\nthe fields were swept at a rate of about 0.3 T per minute at a temperature of 1.5 K.\nFor the temperature-dependent measurements, magnetization was measured under\nconstant magnetic fields.\n\n\\section{Results and discussion}\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=8cm]{Fig1.eps}\n\\caption{\\label{fig:epsart} Field dependence of the magnetization of the\nPbNi$_2$V$_2$O$_8$ powder samples.}\n\\end{center}\n\\end{figure}\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=14cm]{Fig2.eps}\n\\caption{\\label{fig:epsart} Temperature dependence of the magnetization of \nPbNi$_2$V$_2$O$_8$ for (a) $H \\parallel c$ and (b) $H \\perp c$.\nArrows indicate $T_{\\rm min}$ as defined in the text.\n}\n\\end{center}\n\\end{figure*}\nFig.1 shows the field dependence of the magnetization, $M^{\\parallel}(H)$\nand $M^{\\perp}(H)$, measured at $T$ = 1.5 K.\nBoth the $M^{\\parallel}(H)$ and $M^{\\perp}(H)$ curves steeply increase above \nthe critical fields, $H_{\\rm c}^{\\parallel}$ = 19 T and $H_{\\rm c}^{\\perp}$ = 13.5 T, respectively. 
\nThese values correspond to the critical fields at which the Haldane gap closes,\nand are in good agreement with the previous report based on\npulsed-field experiments.~\\cite{Uchiyama}\n\nHere the $M(H)$ just above $H_{\\rm c}$ increases almost linearly with $H$.\nThis behavior differs from the theoretical predictions,\nwhere $M(H)$ varies in proportion to $\\sqrt{H-H_{\\rm c}}$ for axially symmetric fields\n($H \\parallel c$).~\\cite{Affleck,Takahashi}\nOne possible reason for this discrepancy is the finite-temperature effect.\nThe $\\sqrt{H-H_{\\rm c}}$ dependence is easily obscured by a slight thermal excitation,\nas is observed in NDMAZ.~\\cite{Kobayashi01}\nAnother origin can be imperfect powder orientation,\nwhich can lead to axially asymmetric fields even for $H \\parallel c$.\nHowever, it is also possible that the linear $M(H)$ is an intrinsic behavior of an\nantiferromagnetically ordered system.\nWhen the measured temperature is sufficiently low compared to the energy gap,\nthe system promptly enters the antiferromagnetically ordered regime above $H_{\\rm c}$,\nas is shown below.\n\n\nFig.2(a) shows the temperature dependence of the magnetization measured under\nfields parallel to the $c$-axis, $M^{\\parallel}(T)$.\nFor $H < H^{\\parallel}_{\\rm c} = 19$ T, the $M^{\\parallel}(T)$ curves show no anomalies.\nBelow 5 K, $M^{\\parallel}(T)$ has small but finite values (0.02-0.03$\\mu_{\\rm B}$),\nwhich are attributed to the saturation magnetization of impurities and\/or defects.\nFor $H$ = 22 T, there exists a cusp-like minimum at\naround $T_{\\rm min}$ = 6.4 K. With increasing fields, $T_{\\rm min}$ \nshifts to higher temperatures systematically.\n$T_{\\rm min}$ reaches 11.5 K at $H$ = 30 T, as is seen in the figure.\n\nFig.2(b) shows the temperature dependence of the magnetization \nmeasured under fields perpendicular to the $c$-axis, $M^\\perp(T)$.\nSimilar to the above results, $M^{\\perp}(T)$ also exhibits a cusp-like minimum\nfor $H > H^{\\perp}_{\\rm c}$ = 13.5 T.\n\nIn both Fig.2 (a) and (b), $M(T)$ shows a sharp change in its slope at $T_{\\rm min}$.\nMoreover, below $T_{\\rm min}$, $M(T)$ increases with decreasing $T$ showing a convex curve,\nas is most clearly seen in $M^{\\perp}(T)$ at $H$ = 30 T (Fig.2 (b)).\nSuch $M(T)$ curves closely resemble those of the field-induced magnetic ordering \nof the coupled-dimer system TlCuCl$_3$.~\\cite{Oosawa,Nikuni}\nIn this compound, neutron diffraction~\\cite{Tanaka} and specific heat measurements~\\cite{Oosawa01}\nhave demonstrated that $T_{\\rm min}$, at which $M(T)$ has a minimum,\nis the N\\'{e}el temperature.\nSimilarly, such cusp-like anomalies in $M(T)$ curves are shown to mark \nthe ordering temperature \nin the $S=1\/2$ alternating chain Pb$_2$V$_3$O$_9$ (ref.~\\cite{Waki})\nand the quasi-2-dimensional BaCuSi$_2$O$_6$ (ref.~\\cite{Jaime}) \nfrom specific heat measurements.\nWe hence conclude that the data shown in Fig.2 also demonstrate the occurrence of field-induced\nmagnetic ordering with N\\'{e}el temperatures around $T_{\\rm min}$.\n\nIt is notable that the Haldane system NDMAP exhibits a minimum in $M(T)$ for $H > H_{\\rm c}$\nat temperatures much higher than $T_{\\rm N}$.~\\cite{Honda01,HondaJAP}\nThe origin of the minimum is not yet clear, and is possibly related to a\ncrossover into the low-temperature Tomonaga-Luttinger (TL) liquid regime,\nas is predicted for non-interacting one-dimensional ladders.~\\cite{Wang,Wessel}\nThis is purely a one-dimensional phenomenon,\nand the three-dimensional ordering occurs at much lower 
temperatures.~\\cite{Wessel}\nFor those cases, the $M(T)$ curves around $T_{\\rm min}$ are characterized by a relatively\nbroad minimum and a concave curve.~\\cite{Honda01,HondaJAP,Wessel}\nThis is in clear contrast with the cusp-like anomalies and the convex curve below $T_{\\rm min}$\nin the present study as well as in those reported for TlCuCl$_3$ etc.,\nwhich signal three-dimensional magnetic ordering.\nIt is of course important to perform other experiments in order \nto verify the magnetic ordering at $T_{\\rm min}$.\nThe lack of single-crystalline samples makes it difficult\nto measure the specific heat of this anisotropic compound.\nWe are therefore planning to measure the NMR spectra at high fields.\n\nIt may be interesting to compare the ordered state induced by fields in PbNi$_2$V$_2$O$_8$\nwith that induced by impurity doping in PbNi$_{2-x}$Mg$_x$V$_2$O$_8$.~\\cite{Uchiyama}\nESR~\\cite{Smirnov} and ${\\rm \\mu}$SR experiments~\\cite{Lappas} have shown that\nthe ordered state of the latter has an inhomogeneous distribution of magnetic moments.\nIn addition, the impurity-induced ordered state vanishes at fields higher than $H$ = 4 T,\nwhere the Haldane state with an energy gap recovers.~\\cite{Masuda}\nIn contrast, the ordered state observed in the present experiments appears only\nabove $H_{\\rm c}$.\nThe largest value of $T_{\\rm min}$ in the present study is $\\sim$10 K for $H$ = 30 T,\nwhich is much larger than the maximum value of $T_{\\rm N}$ induced by Mg-doping,\n3.3 K,~\\cite{Smirnov} or the value of $zJ_1 \\sim 0.03J \\simeq 3.1 $K,\nwith $z$ the number of nearest chains, and $J$ the intrachain coupling.~\\cite{Zheludev00}\nThis fact implies that the field-induced ordering occurs via the\nwell-developed antiferromagnetic correlation along the chain,\nand that the field-induced ordered moment is distributed\nuniformly along the chain.\n\nIn Fig.3, the values of $T_{\\rm min}$ are plotted against the applied fields.\nThis corresponds to the magnetic phase diagram for PbNi$_2$V$_2$O$_8$.\nFor both $H \\parallel c$ and $H \\perp c$, $T_{\\rm min}$ increases with fields.\nIt is notable that the phase boundaries for $H \\parallel c$ and $H \\perp c$ \ndo not cross each other, at least within the field range measured.\nThis is in qualitative agreement with the theoretical calculation\nby Sakai,~\\cite{Sakai01} i.e., the $HT$ phase diagram calculated for a Haldane chain with\nnegative $D$.\nIndeed, $D\/J = -0.05$ is estimated for PbNi$_2$V$_2$O$_8$ from inelastic\nneutron scattering experiments.~\\cite{Zheludev00}\nIn contrast, it is reported that crossing of the phase boundaries occurs in the\nphase diagram of NDMAP and NDMAZ,~\\cite{Honda98,Kobayashi01}\nand is well explained by the theoretical calculation for positive $D$.~\\cite{Sakai00}\nThus, PbNi$_2$V$_2$O$_8$ is the first example of field-induced order in \na Haldane system with negative $D$.\n\n\\begin{figure}[tp]\n\\begin{center}\n\\includegraphics[width=8cm]{Fig3.eps}\n\\caption{\\label{fig:epsart} Magnetic phase diagram of PbNi$_2$V$_2$O$_8$\nsuggested from the present experiments (symbols).\n`AFO' and `Haldane-state' represent the antiferromagnetically ordered state\nand the nonmagnetic spin-singlet state with a Haldane-gap, respectively.\nOpen symbols represent the data for $H \\parallel c$, and filled ones are \nfor $H \\perp c$. 
Circles indicate $T_{\\rm min}$ determined from the $M(T)$ curves,\nwhile diamonds indicate $H_{\\rm c}$ estimated from the $M(H)$ curves.\n }\n\\end{center}\n\\end{figure}\nSince the early stages of research on Haldane systems,\ntheir magnetic state at $H > H_{\\rm c}$ has been discussed theoretically\nin terms of the BEC picture.~\\cite{Affleck,Sorensen}\nIn the following, we discuss the possible condensed state in the ordered phase.\nIn Fig.2 (a), one can see that $M^{\\parallel}(T)$ increases \nbelow $T_{\\rm min}$ with decreasing $T$.\nSuch an increase cannot be explained by a conventional mean-field theory,~\\cite{Tachiki}\nwhich predicts an almost flat $M(T)$ below the ordering temperature.\nInstead, this increase is successfully explained by the magnon BEC theory\nas being due to the increase in the magnon number as the condensation sets in.~\\cite{Nikuni}\nTo apply the magnon BEC theory to our case, it is essential that the rotational\nsymmetry around the magnetic field be conserved.~\\cite{Nikuni}\nIn the present case, $M^{\\parallel}(T)$ satisfies this requirement.\n\nIt is rather surprising that the $M^{\\perp}(T)$ also increases below $T_{\\rm min}$\nas is seen in Fig.2(b).\nHere $H$ is applied perpendicular to the anisotropy ($D$) axis,\nso that the rotational symmetry of the Hamiltonian around $H$ is broken.\nIn such cases, the magnon BEC picture is not assured because the number of bosons \nis not conserved.~\\cite{Nikuni}\nIn fact, the $M^{\\perp}(T)$ of NDMAP does not increase below $T_{\\rm N}$\nbut becomes flat against $T$.~\\cite{Honda01,HondaJAP}\nSuch behavior is consistent with the Ising-like antiferromagnet that\nis predicted to develop for $H \\perp D$.~\\cite{Sakai00}\nFor the present system, the similarity of $M^{\\perp}(T)$ and $M^{\\parallel}(T)$\nmay be due to the relatively small $D$ ($D\/J = -0.05$).\nThis point should be studied more carefully.\n\nIt should be remarked, however, that the BEC picture requires some rigorous conditions.\nFirst, the concentration of magnons must be sufficiently dilute.~\\cite{Nikuni}\nIn fact, experiments on KCuCl$_3$, isostructural with TlCuCl$_3$,\nshowed that the $M(T)$ curve becomes flat below $T_{\\rm N}$ for \nfields well above $H_{\\rm c}$.~\\cite{Oosawa02}\nThis behavior implies that for this dense magnon condition, \na mean-field approximation is a better description.\nThe BEC picture should hence be applied only for the region $H - H_{\\rm c} \\sim 0$.\nMoreover, it has recently been argued that anisotropic interactions\narising from spin-orbit coupling, such as the Dzyaloshinsky-Moriya (DM) interaction \nand\/or the staggered $g$ effect, can qualitatively modify the BEC description,\neven if the interactions are very weak.~\\cite{Sirker04,Sirker05}\nRecent ESR measurements on TlCuCl$_3$ have indeed suggested the existence of \nsuch interactions.~\\cite{Glazkov}\nFor the present case, the screw-like crystal structure of PbNi$_2$V$_2$O$_8$\nmay cause the DM interaction, as is suggested to explain the weak ferromagnetism \nin the isostructural SrNi$_2$V$_2$O$_8$.~\\cite{Zheludev00}\n\n\n\n\\section{Conclusions}\nWe have observed a cusp-like anomaly at $T_{\\rm min}$ in the $M(T)$ curves for $H > H_{\\rm c}$.\nThe value of $T_{\\rm min}$ increases with applied fields.\nThese observations suggest the occurrence of field-induced magnetic ordering\nin the Haldane chain system PbNi$_2$V$_2$O$_8$.\nThe magnetic phase diagram of this system up to 30 T is presented.\nThe phase boundaries for $H \\parallel c$ and $H \\perp c$ do not\ncross each other, in qualitative agreement 
with the $HT$ phase diagram calculated theoretically for a Haldane \nsystem with $D<0$.~\\cite{Sakai01}\n\nIn the ordered phase, it is revealed that the magnetization increases with decreasing $T$\nand that the $M(T)$ curves are convex for both directions, $H \\parallel c$ and $H \\perp c$.\nThese features suggest that the magnon Bose-Einstein condensation picture\nmay be applicable as an approximation for Haldane-gap systems, at least for $H \\parallel c$.\nHowever, possible anisotropic effects including the Dzyaloshinsky-Moriya interaction\ncan modify the description of the ordered state significantly.\n\n\\begin{acknowledgments}\nN.T. gratefully acknowledges M. Hagiwara, A. Oosawa, M. Hase, and H. Kageyama for fruitful discussions.\nHe also thanks T. Waki and K. Yoshimura for informing us of their results, and \nK. Hashi and H. Shinagawa for their help with the field-oriented sample preparation.\n\n\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIdentifying and analyzing Cyber Threat Information (CTI) is an important part of validating security alerts and incidents~\\cite{islam2019multi, koyama2015security, menges2019unifying, mittal2019cyber}. Any piece of information that helps organizations identify, assess, and monitor cyber threats is known as CTI~\\cite{johnson2016guide}. To help a Security Operation Centre (SOC) use CTI, existing approaches, such as a unifying threat intelligence platform~\\cite{islam2019multi, koyama2015security, menges2019unifying, MISPJournal}, aim to automatically gather and unify CTI relevant to security alerts and incidents. However, gathering CTI is not enough to perform validation tasks, as security teams need to analyze and understand CTI for defining response actions. Security teams write scripts and define rules to extract necessary information from CTI, and map alerts and incidents to CTI~\\cite{anstee2017great, elmellas2016knowledge, RFID2021, tounsi2018survey}. Whilst techniques such as defining rules and scripts can be automated, they do not help in identifying evolving threats and alerts~\\cite{serketzis2019actionable, ward2017building, zhou2019ensemble}, because rules can only be defined for the behavior of known threats. Thus, human understanding and resolution are required to identify, define and update CTI, rules and scripts for emerging threats to adapt to changing contexts.\n\nThe vast volume of CTI makes it time-consuming for humans to analyze. Thus, to address the shortcoming of defining rules and scripts to use CTI, we present a novel framework, SmartValidator. SmartValidator identifies CTI and validates security alerts and incidents by leveraging Artificial Intelligence (AI) based automation techniques~\\cite{faiella2019enriching, qamar2017data,serketzis2019actionable}. SmartValidator follows a systematic and structured approach for reducing the human cognitive burden of continuously monitoring for changes (e.g., changes in attack patterns and CTI) and defining the automation strategies whenever changes occur. We focus on two aspects of automation: (i)~\\textit{automatic identification} of CTI for different alerts and (ii)~\\textit{automatic validation} of alerts using identified CTI. \n\nBy \\textit{automatic identification} of CTI, we mean identifying CTI from a wide variety of sources. 
The increasing presence and amount of CTI over the internet demands effective techniques to automate the identification of the required CTI for validation tasks~\\cite{faiella2019enriching,menges2019unifying,noor2019machine,qamar2017data}. The sources of CTI vary with differences in alerts and incidents~\\cite{EY2017,MISP2021,RFID2021}. Examples of CTI include Indicators of Compromise (IoC) (system artifacts or observables associated with an attack), Tactics, Techniques and Procedures (TTP) and threat intelligence reports~\\cite{johnson2016guide}. \n\nBy \\textit{automatic validation} of alerts and incidents, we refer to validating (i.e., prioritizing, and assessing the relevance or impact of) different types of alerts and incidents generated and identified by different detectors. In this context, by detector we mean any tool or system used for the detection of malicious activities. An organization deploys or develops different types of detectors that generate alerts upon detection of malicious activities. Examples of such detectors include Intrusion Detection Systems (IDS), vulnerability scanners and spam detectors. Validation of different types of security alerts and incidents requires extracting information from relevant CTI~\\cite{anstee2017great, elmellas2016knowledge, RFID2021, tounsi2018survey, WINKLER2017143}. For example, a network administrator or threat hunter writes scripts to search for CTI (e.g., information about the suspicious incident) and defines rules to validate an alert. \nThere are always cases for which automatic validation would not be suitable. For example, in our scenario, automated validation is not applicable for alerts and incidents that do not have associated CTIs; hence, such a scenario would require a security team to perform manual analysis.\n\nThe massive volume and variations of CTI open the door for automatic identification of patterns and gathering insights about CTI using Natural Language Processing (NLP) and Machine Learning (ML) techniques. For instance, Sonicwall has reported 9.9 billion malware attacks in its 2020 cyber threat report~\\cite{sonicwall2020}. The threat research team of Sonicwall has come across more than~1,200 new malware variants each day. Existing studies~\\cite{ibrahim2020challenges, le2019automated, noor2019machine,RFteam2018, serketzis2019actionable, Struve2017, zahedi2018empirical, zhou2019ensemble} have highlighted the power of AI to monitor, gather and analyze security intelligence. Recent advances have also been noticed in the use of NLP and ML techniques to extract patterns from threat data and gain insights about attacks and threats. The focus of these studies is application-specific, for example, detecting anomalies~\\cite{zhou2019ensemble} or automating vulnerability assessment~\\cite{le2019automated}, and the resulting models need to be updated with changing CTI and organizational needs. These studies require knowledge of NLP and ML to build a model for performing the assessment or detection task. Most existing SOCs are managed SOCs (SOC as a service), which are subscription based~\\cite{Nick2020, ibrahim2020challenges}. They do not have dedicated data science or ML experts to design and update an AI-based system based on their needs. 
Considering this scenario, we ask: \\textit{\"Can we design an efficient system to automate and assist the validation of security incidents and alerts with changing threat data and user needs?\"}\n\nEvolving threat landscapes and changing needs of security teams demand a dynamic AI\/ML-based validation system which can be adapted at runtime.\nFor instance, if a security expert expresses interest in validating the maliciousness of a domain or URL, a prediction model is built by a data science team that classifies the URL as malicious or non-malicious. In an ML context, this task is known as a prediction or classification task. We propose three different layers to differentiate the tasks of threat data collection, validation and prediction model building. The purpose is to hide the implementation complexity of data processing, prediction models and validators from security teams. Each layer is controlled and developed by experts with dedicated capabilities. Changing threat landscapes require SOCs to request new CTI and prediction models. One possible solution to this is to build and save prediction models for all possible attribute sets. However, building all possible models whenever changes occur will incur significant resource consumption (e.g., computation time). Most SOCs have limited resources; hence, instead of pre-building all possible combinations of prediction models, SmartValidator is designed to build models based on a SOC's demands.\n\nWe have implemented a Proof of Concept (PoC) system to evaluate the effectiveness and efficiency of our proposed framework with two feature engineering approaches and eight ML algorithms. We have used an IoC form of CTI~\\cite{johnson2016guide} and collected open source threat intelligence (OSINT) from public websites and a CTI platform, MISP~\\cite{MISP2021, RFID2021}. The input of the developed system is a set of attributes from a security team. For example, a security team may want to investigate the \"domain name\" and \"URL\" to identify the \"maliciousness\" of an incident. The developed system takes these three attributes as input, where \"domain name\" and \"URL\" are the observed attributes~($ob_{attrib}$) and \"maliciousness\" is the unknown attribute~($un_{attrib}$). The prediction models are built for classifying\/predicting~$un_{attrib}$ based on~$ob_{attrib}$. \n\nTo capture changing contexts (i.e., security team requirements), we have considered five $un_{attrib}$: (i)~attack, (ii)~threat type, (iii)~name, (iv)~threat level and (v)~event. Eighteen different sets of~$ob_{attrib}$ (shown in Table~\\ref{tab:tableAttribute}) are provided to validate these five attributes to demonstrate the performance of the PoC with changing requirements. We have designed the PoC to select the suitable feature engineering approaches and ML algorithms at run time. \nSeven $ob_{attrib}$ sets are selected by the PoC to predict \\textit{attack} and 11 $ob_{attrib}$ sets are used to predict each of the remaining four attributes. Hence, the PoC provided a total of 51 optimal prediction models for predicting five $un_{attrib}$ based on the preferred~18~$ob_{attrib}$ sets. The results show that approximately~84\\% of the models have F1-scores above~0.72 and~75\\% of the models have F1-scores above~0.8. These results imply that SmartValidator is able to assist the automatic validation of threat alerts with a high level of confidence. 
Most of the models that were built with data gathered from the MISP platform can effectively predict~$un_{attrib}$ based on~$ob_{attrib}$ with a higher F1-score than the models that were built with CTI gathered from public websites. This demonstrates that trusted threat intelligence is more effective in validating alerts.\n\nThe results also demonstrate the efficiency of SmartValidator with dynamic changes in the preferred set of attributes. We pre-built all possible models, which required us to run~814 experiments. Given a maximum time limit of 48 hours and a memory limit of~100GB to build each prediction model, 20\\%~of the models failed to complete within the given time and memory limits. This shows the difficulties a security team would encounter in manually constructing each model. The results further reveal that building prediction models is a time-consuming process requiring expertise, much of which can be automated by orchestrating the different tasks. Saving the feature engineering approaches and ML algorithms helps SmartValidator to use them for predicting new attributes based on changing CTI and SOC requirements. Thus, constructing prediction models at run time based on a security team's preferred attribute sets reduces the overhead and resource consumption. The key contributions of this work are:\n\n\\vspace{-5pt}\n\n\\begin{itemize}\n\\item A novel AI-based framework, SmartValidator, that consists of three layers to effectively and efficiently identify and classify CTI for validating security alerts with changing CTI and security team requirements.\n\\vspace{-5pt}\n\\item A PoC system that automatically built 51 models to predict five different unknown attributes with~18~observed attribute sets using two sources of OSINT.\n\\vspace{-5pt}\n\\item We demonstrated that SmartValidator can effectively select optimal prediction models to classify CTI, where approximately 75\\% of the optimal models have an F1-score of above~0.8.\n\\vspace{-5pt}\n\\item We showed the efficiency of SmartValidator by building prediction models based on security team demands, which requires approximately 99\\%~fewer models to be built, and thus less resource and time consumption. \n\\vspace{-5pt}\n\\end{itemize}\n\n\n\nPaper organization: Section~\\ref{Sec:Motiv} presents a motivation scenario that highlights the need for SmartValidator. Section~\\ref{Sec:Preli} discusses the background knowledge about CTI. Section~\\ref{Sec:Framework} introduces the proposed framework, SmartValidator. Section~\\ref{section:POC} describes the large-scale experiment that is carried out for the evaluation of SmartValidator. Section~\\ref{Sec:Eval} demonstrates the effectiveness and efficiency of the proposed approach. Section~\\ref{Sec:RelatedWork} discusses related work. 
Finally, Section~\\ref{Sec:Conclu} concludes the paper and outlines future work.\n\\vspace{-10pt}\n\n\\section{Motivation Scenario} \\label{Sec:Motiv}\n\n\\begin{figure*}\n \\centering\n \\subfloat[]{\\includegraphics[width=0.7\\textwidth]{figs\/Figure1a.pdf}\\label{fig:motiva}}\n \\hfil\n \\subfloat[]{\\includegraphics[width=0.8\\textwidth]{figs\/motivB.pdf}\\label{fig:motivb}}\n \\caption{Motivation scenario: (a) detection of malicious activities and validation of alerts with multiple detectors and sources of CTI, respectively; (b) validation of the same alerts for different sets of preferences requires two different validators and different CTI}\n \\label{fig:Motiv}\n \\vspace{-15pt}\n\\end{figure*}\n\nIn this section, we motivate an AI-based solution for alert validation through an example scenario. Figure~\\ref{fig:Motiv} shows a scenario where a SOC of an organization has deployed different types of detectors, validators and CTI to monitor and validate malicious behaviour in its network and business data. Figure~\\ref{fig:motiva} shows that three detectors (intrusion, phishing email, and vulnerability detectors) are deployed to detect suspicious and malicious activities of an organization. The information used by detectors varies with attack types. For example, the information that detectors use to identify an intrusion is different from that used to identify a phishing email\\footnote{https:\/\/github.com\/counteractive\/incident-response-plan-template\/blob\/master\/playbooks\/playbook-phishing.md} (Figure~\\ref{fig:motiva}). These detectors continuously monitor an organization's network and business data\\footnote{https:\/\/github.com\/rosenbet\/demisto\/tree\/master\/Playbooks} (e.g., emails, network traffic and business reports). \n\nMost detectors produce alerts upon detecting malicious activity, which require a security team to act on them. These alerts require validation before being analysed for decision making\\footnote{https:\/\/www.incidentresponse.com\/playbooks\/}. In this paper, we consider that a validator performs tasks related to prioritizing alerts and identifying their relevance or impact. Let us assume that an intrusion detector has detected a list of malicious IP addresses. A SOC has an alert validator to validate the maliciousness of the IPs~\\cite{Siemplify2019}. Figure~\\ref{fig:motiva} shows that validation of different types of alerts requires different forms of CTI. To validate IP maliciousness, blacklisted and whitelisted IP addresses are used. CTI further varies from organization to organization. Each security team has its own set of requirements (different attributes) to validate an alert. In this scenario, the security team relies on three types of CTI for alert validation. Considering that different types of alerts have different attributes and require different sources of CTI, a SOC needs three validators to validate the alerts produced by the three detectors. Figure \\ref{fig:motivb} illustrates the use of CTI to validate alerts.\n\nWe assume that an alert of type $A_i \\in A$ ($A$ is the set of alerts) is produced by detector $D_1$ (e.g., an IDS). Each alert is represented with different attributes (or features). The function $F_{attrib}(A_i)$ provides the attribute list of $A_i$. \n\n$F_{attrib}(A_i) \\rightarrow <f_1,\\ f_2,\\ f_3,\\ f_4,\\ f_5>$,\n\\\\ where $f_1 = \\text{IP}$, $f_2 = domain$, $f_3= \\text{URL}$, $f_4 = attack\\ type$, and $f_5 = threat\\ level$. 
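To make the scenario concrete, the sketch below illustrates how such a preference could be turned into a prediction task over labelled CTI records. It is only an illustrative sketch and not the implementation of SmartValidator; the library choices (pandas, scikit-learn), the column names and the file name are assumptions made for this example.\n\n\\begin{verbatim}\n# Illustrative sketch only: column names ('ip', 'domain', 'url',\n# 'malicious') and the CSV file are hypothetical stand-ins for\n# labelled CTI records; library choices are assumptions.\nimport pandas as pd\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics import f1_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.pipeline import Pipeline\n\ncti = pd.read_csv('cti_records.csv')     # labelled CTI records\nob_attrib = ['ip', 'domain', 'url']      # observed attributes\nun_attrib = 'malicious'                  # unknown attribute\n\n# Textual attributes are vectorised (character n-gram TF-IDF)\n# before a classifier predicts the unknown attribute.\nfeatures = ColumnTransformer(\n    [(c, TfidfVectorizer(analyzer='char_wb',\n                         ngram_range=(2, 4)), c)\n     for c in ob_attrib])\nmodel = Pipeline([('features', features),\n                  ('clf', RandomForestClassifier())])\n\nX_tr, X_te, y_tr, y_te = train_test_split(\n    cti[ob_attrib], cti[un_attrib], test_size=0.2)\nmodel.fit(X_tr, y_tr)\nprint(f1_score(y_te, model.predict(X_te), average='weighted'))\n\\end{verbatim}\n\nA different preference (e.g., predicting a threat level instead of maliciousness) would repeat the same steps with a different choice of observed and unknown attributes; it is exactly this repetition that the proposed framework aims to automate.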
Considering that two different security roles of a SOC have different preferences and use different attributes to validate $A_i$, two validators $V_1$ and $V_2$ are built (Figure \\ref{fig:motivb}). For validator $V_1$, a security team prefers to validate \\textit{IP maliciousness} based on \\textit{\\text{IP}}, \\textit{domain} and \\textit{\\text{URL}}, thus\n\n$ob^1_{attrib} = <\\text{IP},\\ domain,\\ \\text{URL}>$ and \n\n$un^1_{attrib} = <\\text{IP}\\ maliciousness>$. \\\\\n For validator $V_2$, a security team's preference is to validate \\textit{URL threat level} using attributes \\textit{IP}, \\textit{domain}, \\textit{URL} and \\textit{attack type}, thus,\n\n$ob^2_{attrib}= <\\text{IP},\\ domain,\\ \\text{URL},\\ attack\\ type>$ and \n\n$un^2_{attrib}= <\\text{URL}\\ threat\\ level>$.\n\nFor both cases, to perform validation, validators first extract $ob_{attrib}$ and, if available, $un_{attrib}$ from $A_i$ and then identify CTI with these attributes. In most cases, a security team provides CTI sources to a validator. In the next step, CTI that have $ob_{attrib}$ and $un_{attrib}$ are identified. As shown in Figure \\ref{fig:motivb}, validator $V_1$ extracts three attributes to validate \\textit{IP maliciousness} and validator $V_2$ extracts four attributes to validate \\textit{URL threat level}. $\\text{CTI}_1$ has the attributes that are required by validator ${V_1}$. On the other hand, $\\text{CTI}_2$ has the attributes required by $V_2$ to investigate URL threat level. Therefore, threat data is extracted from $\\text{CTI}_1$ and $\\text{CTI}_2$, respectively, for further investigation.\n\nThough $V_1$ and $V_2$ have investigated two different sets of attributes, the key steps (step~3.1 to step~3.4), as shown in Figure~\\ref{fig:motivb}, are the same. The tasks of $V_1$ and $V_2$ can each be formulated as an ML classification problem, where two different prediction models are required to be built. Building a prediction model involves pre-processing of data (e.g., $ob^1_{attrib}$), feature engineering, training and selecting a model, and then predicting an output ($un^1_{attrib}$). Many of the possible $ob_{attrib}$ of Cyber Threat Information (CTI) (e.g., domain, filename, description and comment) are textual features. Traditional categorical feature engineering or transformation approaches are not suitable for encoding these textual features, hence requiring the application of NLP techniques. To perform validation of $un^1_{attrib}$ and $un^2_{attrib}$ using $ob^1_{attrib}$ and $ob^2_{attrib}$, prediction models are required to be built, where the inputs of the prediction models are the security team's preferences. Here, we consider the observed attributes and unknown attributes as SOCs' preferences\/requirements. We assume $AS$ is the set of a SOC's requirements, where \n\n $AS = \\{<ob_{attrib},\\ un_{attrib}>\\}$.\n\\\\ This means for validator $V_1$, $AS_1$ = <$ob^1_{attrib}$, $un^1_{attrib}$> \\& $AS_1 \\in AS$. For $V_2$, $AS_2$ = <$ob^2_{attrib}$, $un^2_{attrib}$> \\& $AS_2 \\in AS$.\n\nConsidering emerging threat patterns, a SOC may deploy new detectors and update the existing rules of intrusion detectors to detect evolving anomalies. To validate the alerts of a new detector, new validators may be required. Thus, several changes can arise in the scenario of Figure~\\ref{fig:Motiv}. In the following, we present the three scenarios that we consider in this work.\n\n\\vspace{-5pt}\n\n\\begin{itemize}\n \\item Change in Alert: With changes in alert types, variation can be seen in the attributes of alerts. 
For example, an alert of type $A_2$ may have a different set of attributes from $A_1$, such as timestamps, date, IP, organization, tools and comments. Depending on the type of attack and detector, the alert attributes change. Building prediction models with changing alerts might require the incorporation of various types of pre-processing and feature engineering approaches. \n \\vspace{-5pt}\n \\item Change in CTI: CTI are continuously changing with changing requirements from SOCs. In most cases, SOCs buy CTI from third parties, where the attributes provided by different vendors vary, or they build their own CTI platform. The validation of alerts relies on the attributes available in CTI.\n \\vspace{-5pt}\n \\item Changes in Preferred Attributes Set: A change in the preferred attributes ($AS$) requires re-designing and re-building prediction models. A SOC does not always have dedicated data scientists or experts to design and build prediction models. Even though the steps of model building are repetitive (e.g., pre-processing, feature engineering, model building and selection), a few changes may still be required to adapt to the variations, as existing solutions are not designed to automatically work with changing attributes.\n\\end{itemize}\n\\vspace{-5pt}\nTo address these changes, we propose SmartValidator to support the flexible design of a validator following a systematic and structured approach. The proposed framework can automatically construct prediction models to validate alerts\nwith changing requirements. \n\n\\vspace{-10pt}\n\n\\section{Preliminaries} \\label{Sec:Preli}\nThis section provides background information about CTI and MISP (an Open Source Threat Intelligence Platform).\n\n\\textbf{Indicator of Compromise: } IOCs provide characteristics of cyberattacks and threats. Based on IOCs, a security team decides whether a system is affected by a particular malware or not~\\cite{ anstee2017great, elmellas2016knowledge, tounsi2018survey}. Examples of IOCs include domain names, IPs, file names and MD5 file hashes. Three common categories of IOCs are network indicators, host-based indicators and email indicators. IP addresses, URLs and domain names are the most popular types of network indicators. The malicious file hash or signature, registry keys, malware name and dynamic link libraries are widely used host-based indicators. An email indicator may consist of a source email address, message objects, attachments, links and source IP addresses. The sources of IOCs range from crowd-sourced to government-sponsored feeds. Just having threat data is not enough to fully understand the context or patterns of a cyberattack. For example, threat data may contain an IP address that is used only once to attack a network. Conversely, an associated URL in threat data might have been used many times. Therefore, threat intelligence must be extracted from the threat data with possible IOCs and their contextual information~\\cite{ anstee2017great, elmellas2016knowledge, ward2017building, tounsi2018survey}. Table~\\ref{tab:CTIsource} shows examples of some of these websites that provide threat feeds and are utilized for gathering OSINT. Table~\\ref{tab:CTIexample} shows examples of CTI publicly available in the malware domain website~\\cite{malwareDomain2021}. 
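As a simple illustration of how such feed entries can be organised for later processing, the sketch below represents one row of an OSINT feed (cf. Table~\\ref{tab:CTIexample}) as a structured IOC record; the class and field names are hypothetical and only mirror the columns of that table.\n\n\\begin{verbatim}\n# Illustrative sketch: one OSINT feed row as a structured IOC\n# record; the field names mirror Table 2 and are not a schema\n# prescribed by any particular platform.\nfrom dataclasses import dataclass\nfrom typing import Optional\n\n@dataclass\nclass IocRecord:\n    date: str                  # e.g. '2017\/12\/04 18:50'\n    domain: str                # e.g. 'textspeier.de'\n    ip: str                    # e.g. '104.27.163.228'\n    description: str           # e.g. 'phishing\/fraud'\n    asn: int                   # e.g. 13335\n    reverse_lookup: Optional[str] = None\n\nrecord = IocRecord('2017\/12\/04 18:50', 'textspeier.de',\n                   '104.27.163.228', 'phishing\/fraud', 13335)\n\\end{verbatim}\n\nCollections of such records, gathered from feeds like those in Table~\\ref{tab:CTIsource}, form the threat data over which the validation tasks in this paper operate.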
\n\n\\begin{table*}[h]\n \\caption{Description of each CTI sources with the Indicator of Compromise (IOCs) they contain}\n \\begin{tabular}{ll}\n \\toprule\n \\textbf{Source} & \\textbf{Description}\\\\\n \\midrule\nC\\&C Tracker \\cite{Ctracker2019} & Contains a list of C\\&C IPs (command and control botnets), date, and a link to a manual which contains \\\\& text description and false positive risk value. \\\\\nFeodo Tracker \\cite{Ftracker2019} & Tracks the Feodo Trojan. Contains IP, port, and date. \\\\\nMalware Domain List \\cite{malwareDomain2021} & Contains a searchable list of malicious domains, IP, date, domain, reverse lookups \\\\ & and lists registrants. Mostly focused on phishing, Trojans, and exploit kits.\\\\\nRansomware Tracker \\cite{Ransomwware2019} & Provides overview of infrastructures used by Ransomware, status of URLs, IP address and \\\\ & domain names associated with Ransomware and various block list of malicious traffic. \\\\\nWHOIS data \\cite{WhoIS2019} & Provides a database of registered users and assignees of internet resources, which is widely used\\\\ & for lookup of domain names.\\\\\nZeus Tracker \\cite{Ztracker2019} & Tracks domains of Zeus Command \\& Control servers.\\\\\nOpenPhish \\cite{OpenPhish2019} & A list of phishing URLs and their targeted brand. \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:CTIsource}\n\\vspace{-10pt}\n\\end{table*}\n\n\\begin{table*}[!hb]\n\\caption{An example of a list of observed malware domains with corresponding Indicator of Compromise (IOCs) from the website malwaredomainlist.com \\cite{malwareDomain2021}}\n \\centering\n \\begin{tabular}{llllll}\n \\toprule\n \\textbf{Date} & \\textbf{Domain} & \\textbf{IP} & \\textbf{Reserve Lookup} & \\textbf{Description} & \\textbf{ASN}\\\\\n\\midrule \n \n2017\/12\/04 18:50 & textspeier.de & 104.27.163.228 & - & phishing\/ fraud & 13335 \\\\\n2017\/10\/26 13:48 & photoscape.ch\/ & 31.148.219.11 & knigazdorovya.com & trojan & 14576 \\\\\n &Setup.exe \\\\\n2017\/06\/02 08:38 & sarahdaniella.com\/swift & 63.247.140.224 & coriandertest. & trojan & 19271\\\\\n & \/SWIFT\\%20\\$.pdf.ace & & hmdnsgroup.com & \\\\\n \n2017\/05\/01 16:22 & amazon-sicherheit.kunden & 63.247.140.224 & hosted-by.blazingfast.io & phishing & 49349\\\\\n& -ueberpruefung.xyz \\\\\n2017\/03\/20 10:13 & alegroup.info\/ntnrrhst & 185.61.138.74 & mccfortwayne.org & Ransom, Fake & 197695\\\\\n&&&&.PCN, Malspam \\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:CTIexample}\n \\vspace{-10pt}\n\\end{table*}\n\n\\textbf{Threat Intelligence}: Threat Intelligence, also known as security threat intelligence, is an essential piece of CTI for a cybersecurity team. According to \\textit{Recorded Future}, \"\\textit{threat intelligence is the output of the analysis based on detection, identification, classification, collection and enrichment of relevant data and information}\"~\\cite{RFID2021}.Threat intelligence helps a security team understand what causes an attack and what needs to be done to defend against it by gathering contextual information about an attack. For example, security teams use threat intelligence to validate security incidents or alerts and enrich threat data to get more insights about a particular security incident~\\cite{anstee2017great, elmellas2016knowledge, tounsi2018survey, RFID2021, WINKLER2017143}. The gathered data is organized in a human-readable and structured form for further analysis~\\cite{anstee2017great, elmellas2016knowledge, faiella2019enriching, RF2019, WINKLER2017143}. 
Open Source Intelligence (OSINT) is gathered from various websites (e.g., Zeus Tracker and Ransomware Tracker) that provide information about malware or blacklisted domains\/IPs. \n\n\n\n\n\n\\textbf{Cyber Threat Intelligence Platform:} Threat intelligence platforms allow security communities to share and collaborate to learn more about existing malware or threats. Using threat intelligence platforms, companies can improve their countermeasures against cyber-attacks and prepare detection and prevention mechanisms. In recent years, the cybersecurity communities have emphasized building common threat intelligence platforms to share threat information in a unified and structured way, and make CTI actionable~\\cite{faiella2019enriching, gao2018graph, menges2019unifying,mittal2019cyber, tounsi2018survey, ward2017building}. Various specifications and protocols such as STIX, TAXII, Cybox, CWE, CAPEC and CVE are widely used to describe and share threat information through common platforms~\\cite{ Stixbarnum2012standardizing, barnum2012cybox, taxiiconnolly2014trusted, RamsdaleCTISurvey2019}. Trusted Automated Exchange of Indicator Information (TAXII)~\\cite{taxiiconnolly2014trusted} was developed as a protocol for exchanging threat intelligence represented in STIX format. Both STIX and TAXII are open source and have collaborative forums~\\cite{Stixbarnum2012standardizing,taxiiconnolly2014trusted}. \n\n\\begin{figure}\n \\includegraphics[scale = 0.35]{figs\/MISP.jpg}\n \\caption{An example of MISP showing the evolution of a multitask Botnet}\n \\label{fig:MISP}\n \\vspace{-15pt}\n\\end{figure}\n\n\\textbf{MISP:} Malware Information Sharing Platform (MISP) is one of the most popular trusted threat intelligence platforms used by different industries to store and share CTI~\\cite{MISP2021, MISPJournal}. MISP is a framework for sharing and storing threat data in a structured way~\\cite{MISPJournal, azevedo2019pure}. MISP enables an organization to store both technical and non-technical information about attacks, threats and malware in a structured way. The relationships between malware and their IOCs are also available in MISP. Rules for network Intrusion Detection Systems (IDS) can also be generated from MISP, which can be imported into an IDS system, and hence improve the detection of anomalies and malicious behavior. A security team queries MISP for relevant data, and it shows the details of the attack. For example, Figure~\\ref{fig:MISP} shows the details of the evolution of a multitask Botnet. Table~\\ref{tab:MISPExample} and Table~\\ref{tab:MISPvaluePercentage} of Appendix~\\ref{app:A} show the key attributes gathered from MISP for this study and the percentage of each attribute.\n\n\\vspace{-10pt}\n\n\\section{Proposed Framework} \\label{Sec:Framework}\nFigure~\\ref{fig:framework} provides an overview of our proposed framework, SmartValidator, that automates the identification and classification of CTI for validation of alerts. It comprises three layers: (i)~threat data collection layer, (ii)~threat data prediction model building layer and (iii)~threat data validation layer.
We consider each layer to have a separation of concerns so that while updating components of one layer, a security team does not need to worry about other layers.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=0.85\\textwidth]{figs\/framework.pdf}\n \\caption{An overview of SmartValidator for automated identification and validation (i.e., classification) of cyber threat data}\n \\label{fig:framework}\n \\vspace{-15pt}\n\\end{figure*}\n\n\\textit{Threat Data Collection layer} (section~\\ref{sub:4.1}): The threat data collection layer consists of a collector that automates the identification of CTI information from various sources (Figure~\\ref{fig:datalayer}). It further transforms the gathered CTI into a unified form (if required) and passes it to the threat data prediction model building layer.\n\n\\textit{Threat Data Prediction Model Building layer} (section~\\ref{sub:4.2}): This layer has a model builder that builds models for validation of alerts based on a SOC's requirements ($ob_{attrib}$, $un_{attrib}$) using the gathered CTI (Figure~\\ref{fig:predictionlayer}). Pre-processing of data, feature engineering, training and evaluation of prediction models are performed in this layer to generate candidate validation models. The candidate models and corresponding feature sets are saved with a SOC's preferences for use by the threat data validation layer at runtime to validate alerts. \n\n\\textit{Threat Data Validation layer} (section~\\ref{sub:4.3}): The threat data validation layer takes alerts and a SOC's requirements as input to choose suitable prediction models for alert validation. SOC requirements are interpreted by an interpreter. Based on a SOC's requirements, attributes are extracted from alerts. The extracted attributes are used to choose a suitable prediction model from the candidate list of saved models for predicting the unknown attributes ($un_{attrib}$) and to perform validation of alerts based on the observed attributes ($ob_{attrib}$).\n\nThe following sections elaborate the core components and functionalities of each layer of SmartValidator. \n\\vspace{-5pt}\n\n\\subsection{Threat Data Collection Layer}\\label{sub:4.1}\nValidation of a security alert requires the identification of relevant CTI for building a threat data prediction model. The purpose of the prediction model is to learn the pattern of CTI for automatic validation of alerts. Here, we have formulated the validation tasks as classification tasks. For instance, to validate IP maliciousness, a system needs to classify IPs as malicious or non-malicious, which can be achieved through a prediction model. Similar to existing studies~\\cite{faiella2019enriching, tounsi2018survey, MISPJournal}, the data collection layer gathers CTI from multiple sources and combines them into a unified format. The data collection layer employs a collector to gather CTI data from an organization's preferred sources. Considering that several types of CTI sources are used by an organization, the deployment of various plugins, APIs and crawlers is required to collect CTI from these sources. Figure~\\ref{fig:datalayer} shows the processing of three types of CTI data - (i)~internet data, (ii)~business data and (iii)~external data. These are the most commonly used types of CTI. Other forms of CTI can also be integrated by following standard data processing strategies.
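To make the unification step concrete, the following minimal sketch (in Python, using pandas) shows how IOC records gathered from two hypothetical feeds could be normalised into a single unified table; the inline records, column names and renaming map are illustrative assumptions rather than the implemented collector.

\\begin{verbatim}
# A minimal sketch of the unification step of the collection layer.
# The two inline feeds and their column names are hypothetical; a real
# collector would obtain them via crawlers, APIs or RSS feeds.
import pandas as pd

feed_a = pd.DataFrame([
    {'date': '2017/12/04', 'domain': 'textspeier.de',
     'ip': '104.27.163.228', 'description': 'phishing/fraud'},
])
feed_b = pd.DataFrame([
    {'Date': '2017/10/26', 'Host': 'photoscape.ch',
     'IPAddress': '31.148.219.11', 'Threat': 'trojan'},
])

# Map feed-specific column names onto one unified schema.
rename_map = {'Date': 'date', 'Host': 'domain',
              'IPAddress': 'ip', 'Threat': 'description'}
feed_b = feed_b.rename(columns=rename_map)

# Combine and de-duplicate so overlapping IOCs appear only once.
unified = pd.concat([feed_a, feed_b], ignore_index=True).drop_duplicates()
print(unified)
\\end{verbatim}

The same idea applies to the crawled, scraped and queried feeds of the PoC before the unified dataset is passed to the prediction model building layer.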
\n\n\\begin{figure}[htb]\n \\includegraphics[width=0.5\\textwidth]{figs\/dataCollectionlayer.pdf}\n \\caption{Threat data collection layer - IOCs are extracted and processed from three types of data and then combined into a unified form}\n \\label{fig:datalayer}\n \\vspace{-15pt}\n\\end{figure}\n\nFor processing internet data, the collector has web crawlers, scrapers and parsers to gather and process CTI data from web pages (Figure~\\ref{fig:datalayer}). A web crawler searches and identifies reliable sites that contain threat information and IOCs of various malware. Considering threat intelligence team provides the relevant list of websites or keywords of their interest to gather CTI~\\cite{listThreatIntel2021}, a crawler crawls through the internet to search the relevant information. We propose a scraper as part of the collector for filtering out unnecessary information from crawled data. Crawling and scraping can be done on a variety of sources, such as RSS feeds, blogs and social media, but require different types of processing. A parser utilizes various information processing techniques to extract information from the output of a scraper and organise the data into a structured and language-agnostic format (e.g., markup language such as ontology). For forums or blog posts written in \\textbf{natural language}, a parser is required to extract threat information from sentences. NLP tools and techniques (e.g., Spacy and NLTK) are used to build a parser based on the structure of a document and information required by security team. \n\nTo get threat feeds from databases and CTI platforms, we design API calls and queries that are parts of the collector. Threat feeds can be gathered from both an organization's internal business data and external data (Figure~\\ref{fig:datalayer}). In this paper, we only gather external threat feeds. The collector can also query external data sources to find out missing information about available threat data. For example, after receiving an IP address, a query can be made to WHOIS query website to search for domain name. In this way, a collector gathers different sets of data, for example, blacklist and white list IP addresses, list of phishing websites and so on, from different types sources. The collected data is further combined into a unified form (e.g., dataset~P, Figure~\\ref{fig:datalayer}). To combine the data into a unified form, we first normalised the data, removed redundant information and then combined them. Examples of normalisation techniques include 1NF, 2NF and so forth. Depending on validation tasks (e.g., validation of IP maliciousness or validation of domain threat level), CTI is extracted and sent to the threat data prediction model building layer to build a validation model.\n\nWe consider organizations (e.g., government or financial) may have a dedicated threat intelligence teams or may use third party services to gather CTI. Any update related to the collection of CTI, such as adding or modifying CTI sources, deploying APIs or parsers to gather and extract information from these CTI, and inclusion or deletion of new data collection and normalisation techniques, are performed in the threat data collection layer by dedicated threat intelligence team.\n\\vspace{-5pt}\n\n\\subsection{Threat Data Prediction Model Building Layer}\\label{sub:4.2}\nThe threat data prediction model building layer is designed to build ML-based classification models using CTI and SOC's preferences. 
For example, if a security team wants to validate the \\textit{maliciousness} of an IP considering the \\textit{IP}, \\textit{domain} and \\textit{URL}, an ML classification model is built that takes IP, domain and URL and predicts IP maliciousness. We consider these attributes (IP, domain, URL and IP maliciousness) as a SOC's preference, where $ob_{attrib} = \\{\\textit{IP}, domain, \\textit{URL}\\}$ and $un_{attrib} = \\{\\textit{IP}\\ maliciousness\\}$. A SOC's preferences derive from organizational security requirements and alerts. \n\nFigure~\\ref{fig:predictionlayer} shows the core components and workflow of the threat data prediction model building layer. It comprises a pre-processor that pre-processes CTI (step~1, Figure~\\ref{fig:predictionlayer}) for extracting features from it. The pre-processing techniques depend on the types of attributes (e.g., categorical or text-based). For example, IP maliciousness can be either malicious or non-malicious, which is categorical. On the other hand, domain name is text based, as shown in Table~\\ref{tab:CTIexample}. Pre-processed data is passed to a feature engineering module where data is transformed into features (numeric form) (step 2, Figure~\\ref{fig:predictionlayer}), which are used as input for the ML algorithms. The reason behind this is that ML algorithms can only work with numerical data~\\cite{Helge2020, RFteam2018, sabir2020machine, Struve2017}. Depending on the type, size and diversity of CTI, the data science team chooses a feature engineering approach. The first two steps of Figure~\\ref{fig:predictionlayer}~leverage simple NLP techniques for pre-processing and feature engineering. Categorical values can be directly transformed into features using label encoding or one-hot encoding and text values are transformed into features using count vectorizer and TFIDF (Term Frequency-Inverse Document Frequency) techniques. The associated text cleaning and pre-processing steps for each text attribute are discussed in Appendix B1. Common pre-processing techniques such as tokenization, stop words removal and lemmatization are performed before transforming text data into features. Hence, based on the alert attribute types, data science teams perform data pre-processing and select a feature engineering approach. \n\n\\begin{figure} [tbh]\n \\includegraphics[scale = 0.5]{figs\/predictionlayer.pdf}\n \\caption{Threat data prediction layer - prediction models are built to predict unknown attributes based on observed attributes}\n \\label{fig:predictionlayer}\n \\vspace{-15pt}\n\\end{figure}\n\nThe transformed data is split into training and testing datasets to build, select and evaluate a prediction model (step~3, Figure~\\ref{fig:predictionlayer}). ML algorithms are applied to train models on CTI, learning patterns in the data to derive a working model. Depending on the nature of the training data, different models are built by a data science team. Traditionally, a set of ML algorithms is applied to find an algorithm suitable for a specific dataset and user requirements~\\cite{AHMED201619, Helge2020, sabir2020machine}. To investigate the effectiveness of prediction models for the validation task, we considered a set of ML algorithms (e.g., Decision Tree, Na\u00efve Bayes, K-Nearest Neighbours and Random Forest). The details of the PoC are discussed in section~\\ref{section:POC}.
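As a concrete illustration of steps~3 to~5, the sketch below trains a few of the considered classifiers on a toy, already-encoded feature matrix and keeps the one with the highest F1-score; the toy data, the restriction to three classifiers and the weighted F1 averaging are simplifying assumptions, and hyperparameter tuning (discussed next) is omitted.

\\begin{verbatim}
# A minimal sketch of training several candidate classifiers on encoded
# CTI features and selecting the one with the best F1-score.
# The toy feature matrix and labels below are illustrative only.
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Toy encoded observed attributes (rows) and labels (e.g., IP maliciousness).
X = [[0, 1], [1, 0], [0, 0], [1, 1], [0, 1], [1, 0], [1, 1], [0, 0]]
y = [1, 0, 0, 1, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

candidates = {
    'DT': DecisionTreeClassifier(random_state=42),
    'RF': RandomForestClassifier(random_state=42),
    'KNN': KNeighborsClassifier(n_neighbors=3),
}

best_name, best_model, best_f1 = None, None, -1.0
for name, model in candidates.items():
    model.fit(X_train, y_train)
    score = f1_score(y_test, model.predict(X_test), average='weighted')
    if score > best_f1:
        best_name, best_model, best_f1 = name, model, score

# The optimal model and its F1-score are kept for the validation layer.
print(best_name, round(best_f1, 3))
\\end{verbatim}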
As most ML algorithms have a list of hyperparameters, validation techniques (e.g., k-fold cross-validation, random cross-validation and Bayesian optimisation) are incorporated to select the hyperparameter\\footnote{Hyperparameters are user-defined values that determine details about the ML classifier before training. For example, a decision tree requires tuning the value of variable depth, and k-nearest neighbours has a variable number of neighbours.} settings and the feature engineering approach for a specific ML algorithm (step~4, Figure~\\ref{fig:predictionlayer}). \n\nThe built model's performance is evaluated using the testing dataset (step~5, Figure~\\ref{fig:predictionlayer}). Different types of performance metrics (e.g., precision, recall, accuracy and F1-score) are used to choose a model (also known as the optimal model) that provides the best performance (details in section~\\ref{sub:evaluationMetrics}). In this work, we mainly consider F1-score for evaluation, which is a score between~0 and~1. A higher F1-score indicates better performance of a model. Figure~\\ref{fig:predictionlayer} shows how the pre-processing techniques, feature engineering approaches and ML algorithms that are used to build threat data prediction models are stored for the future model building process. Once a prediction model is found to have performed best, that model can be rebuilt using both training and testing datasets. The best models are saved with the evaluation score (step~6, Figure~\\ref{fig:predictionlayer}) for use in the threat data validation layer.\n\n\\subsection{Threat Data Validation Layer}\\label{sub:4.3}\nWe design the threat data validation layer to (i)~collect a SOC's needs, (ii)~automatically orchestrate and request CTI and prediction models and (iii)~validate alerts.\nFigure~\\ref{fig:framework} shows that the validation layer comprises an \\textit{interpreter}, \\textit{orchestrator}, \\textit{data processor} and \\textit{predictor}. In this layer, security teams of SOCs provide their preferences, $AS$, as a set of requirements, where $AS = \\{<ob_{attrib}, un_{attrib}>\\}$. Security teams may also provide a minimum threshold for F1-score, which we refer to as the confidence score of prediction models. The reason behind gathering a confidence score is that the performance of prediction models will differ with variation in CTI, alerts and attribute sets. A security team might need a higher F1-score while dealing with safety-critical data and sensitive information. For example, to identify IP maliciousness, a security team may request a validation model with a minimum confidence score of 0.9. While categorizing comments or text messages as spam, a model performance of 0.8 or above may be acceptable. The selection of F1-score values varies from application to application. Thus, instead of setting a fixed value, we consider providing security teams with the flexibility to set the confidence score based on their application needs. \n\nWe design Algorithm~\\ref{alg:Orchestrator} to describe the key steps of the threat data validation layer. These steps are coordinated and orchestrated by the orchestrator. SOC preferences ($AS$ and confidence score) are the input of Algorithm~\\ref{alg:Orchestrator}. The \\textit{interpreter} receives the SOC requirements and extracts observed attributes $ob_{attrib}$, unknown attributes $un_{attrib}$ and confidence scores from them (line~3).
The orchestrator checks the availability of a model with an F1-score above the confidence score for predicting $un_{attrib}$ based on $ob_{attrib}$ (line~4). If a model is available, the attributes are passed to the data processor, where it pre-processes and transforms the data based on the saved pre-processing and feature engineering approaches (lines~5-7). Finally, the pre-processed and transformed data is sent to the predictor, which uses the available model to predict $un_{attrib}$ (line~8).\n\n\\begin{algorithm}[tbh]\n\\footnotesize\n \\caption{Model building with orchestrator in threat data validation layer}\\label{alg:Orchestrator}\n \\begin{algorithmic}[1]\n \\State \\textbf{Input: } \\texttt{AS} <$ob_{attrib}, un_{attrib}$>, \\texttt{confidence score}\n \\State \\textbf{Output: } \\texttt{predictedData}\n \\State Interpret (\\texttt{AS, confidence score})\n \\State \\texttt{IsModels}= CheckModel(\\texttt{AS, confidence score}) \n \\If{\\texttt{IsModels} true}\n \\State \\texttt{model, featureEng} = getModel(\\texttt{AS,confidence score})\n \\State \\texttt{processedData} = transformData(\\texttt{featureEng}, \\texttt{AS})\n \\State \\texttt{predictedData} = predictOutput(\\texttt{model,processedData})\n \\Else\n \\State IsData = CheckData(\\texttt{AS})\n \\If{IsData true}\n \\State \\texttt{CTIData} = RetrieveData(\\texttt{CTI, AS})\n \\State \\texttt{model} = buildModel(\\texttt{CTIData, AS, confidence score})\n \\If{\\texttt{model} is built}\n \\State go to step 6\n \\Else\n \\State go to step 19\n \\EndIf\n \\Else\n \\State RequestData(\\texttt{AS})\n \\State \\textbf{return NotApplicable} \n \\EndIf\n \\EndIf\n \\State \\textbf{return} \\texttt{predictedData}\n \\end{algorithmic}\n\\end{algorithm}\n\nIf a model is unavailable for the requested attribute set, e.g., for predicting~$un_{attrib}$ based on~$ob_{attrib}$, then the orchestrator requests the data collector module to gather the relevant CTI data for the preferred attribute sets (line~9). After identifying the relevant information, the data collector module sends the collected CTI data to the \\textit{model builder} to build a prediction model (lines~12-13). Using the CTI \\textit{data} from the data collection layer, the \\textit{model builder} follows the model building process as discussed in section~\\ref{sub:4.2} (lines~14-15). After building a model, it sends the model availability notification to the orchestrator. Then the same process of data processing and prediction is performed. If the requested data is unavailable, a notification is sent to a threat intelligence team and a SOC team to gather the required CTI and manually analyze the alerts, respectively (lines 18-19). To ensure that alerts are not ignored when models or CTIs are unavailable, SOC teams must be kept informed that manual analysis is required.\n\nOur proposed framework, SmartValidator, streamlines the gathering, identification and classification of CTI. SmartValidator allows a security team to respond swiftly to incoming alerts. As most information is generated in a structured way, it can be easily pre-processed and shared through a CTI platform such as MISP or Collective Intelligence Framework (CIF) to benefit diverse security teams. SmartValidator can be integrated with the existing security orchestration and automation process to validate alerts and thus work together with the existing security tools, such as Security Information and Event Management (SIEM) and Endpoint Detection and Response (EDR).
Microsoft Azure Sentinel\\footnote{https:\/\/azure.microsoft.com\/en-au\/services\/azure-sentinel\/} and Splunk\\footnote{https:\/\/www.splunk.com\/} are examples of SIEM, whereas Limacharlie\\footnote{https:\/\/www.limacharlie.io\/} and Google Chronicle\\footnote{https:\/\/cloud.google.com\/blog\/products\/identity-security\/introducing-chronicle-detect-from-google-cloud} are examples of EDR.\n\n\\vspace{-10pt}\n\\section{Experiment Design and Setup} \\label{section:POC}\n\nWe designed and implemented a Proof of Concept (PoC) system to evaluate SmartValidator. We aimed to demonstrate the effectiveness of prediction models in validating security threat data and the efficiency of building prediction models based on a SOC's requirements. The goal of the PoC is to identify the relevant CTI and build prediction models based on a list of SOC requirements. Hence, we evaluated the PoC system based on the following two Research Questions (RQ). \n\\vspace{-5pt}\n\\begin{itemize}\n \\item RQ1. How effective is machine learning in classifying CTI for SmartValidator?\n \n \\vspace{-5pt}\n \\item RQ2. How efficient is SmartValidator in selecting and building prediction models at runtime over pre-building all possible prediction models?\n\\end{itemize}\n\\vspace{-5pt}\nTo build the PoC system for SmartValidator, we implemented the core components of Figure \\ref{fig:framework}: the data collection layer presented in Figure \\ref{fig:datalayer}, the prediction layer presented in Figure \\ref{fig:predictionlayer} and an orchestrator for the validation layer described in section \\ref{sub:4.3}. \n\n\\subsection{SOC's Requirement}\n\nWe defined a set of attributes (validated attributes and observed attributes) as a SOC's requirements to carry out the experiment. These attributes were mainly given by a team different from the one that implemented the prediction models. Thus, here we considered the team who provided the requirements as part of the SOC and the other team as part of the data science team. This setting gave us the option to evaluate a variety of different SOC requirements, to appropriately assess the PoC system. In a practical scenario, these requirements would usually be defined by a security team. Among the various attributes, commonly validated attributes ($un_{attrib}$) are \\textbf{attack}, \\textbf{threat type} and \\textbf{threat level}. We considered them as the desired unknown attributes. Besides these, we also considered two other attributes, \\textbf{name} and \\textbf{event}, as desired unknown attributes.\nExamples of these attributes are shown in Table~\\ref{tab:MISPExample} in appendix~\\ref{app:A}. As shown in Table \\ref{tab:MISPExample}, an example of an event title (or event) is \"\\textit{OSINT Leviathan: Espionage actor spear phishes maritime and defence targets}\".\n\nAs observed attributes ($ob_{attrib}$) vary from security team to security team, we gathered~18 different sets of observed attributes from the security team to validate the aforementioned $un_{attrib}$. As shown in Table~\\ref{tab:tableAttribute}, \\textit{IP information} (i.e., ASN, IP owner, country, and domain), \\textit{organization}, \\textit{comments about attributes}, \\textit{comments about attacks}, \\textit{event data}, \\textit{timestamp} and \\textit{category} are the attributes that we considered in the set of observed attributes. We selected these attributes from alert data that are also commonly used to validate alerts generated by different IDS.
We also used the metadata of attributes such as URL, domain and filename. \n\\begin{table}[!hb]\n\\caption{List of the observed attribute sets}\n \\centering\n \\begin{tabular}{ll}\n \\toprule\n \\textbf{\\#} & \\textbf{List of attributes}\\\\\n\\midrule \n $ob^1_{attrib}$ & Date \\\\\n $ob^2_{attrib}$ & Domain\\\\\n $ob^3_{attrib}$ & IP, ASN, Owner, Country \\\\\n $ob^4_{attrib}$ & Date, Domain \\\\\n $ob^5_{attrib}$ & IP, ASN, Owner, Country, Domain \\\\\n $ob^6_{attrib}$ & IP, ASN, Owner, Country, Date \\\\\n $ob^7_{attrib}$ & IP, ASN, Owner, Country, Domain, Date \\\\\n $ob^8_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, File hash, Filename\\\\\n $ob^9_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, Description, Comment, File \\\\ & hash, Filename\\\\\n $ob^{10}_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, Description, Comment\\\\\n $ob^{11}_{attrib}$ & IP destination, Port, IP source, ASN, Owner,\\\\ & Country, Domain, Date, Timestamp, File hash, \\\\ & Filename\\\\\n $ob^{12}_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, Date, Timestamp, Description,\\\\ & Comment, File hash, Filename\\\\\n $ob^{13}_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ & Country, Domain, Date, Timestamp, Description,\\\\ & Comment\\\\\n $ob^{14}_{attrib}$ & IP destination, Port, IP source, ASN, Owner, \\\\ &Country, Domain, Date, Timestamp\\\\\n $ob^{15}_{attrib}$ & Description, Comment, File hash, Filename\\\\\n $ob^{16}_{attrib}$ & Date, Timestamp, File hash, Filename\\\\\n $ob^{17}_{attrib}$ & Date, Timestamp, Description, Comment, \\\\ &File hash, Filename\\\\\n $ob^{18}_{attrib}$ & Date, Timestamp, Description, Comment\\\\\n \\bottomrule\n \\end{tabular}\n \\label{tab:tableAttribute}\n \\vspace{-10pt}\n\\end{table}\n\n\\subsection{Collecting CTI}\\label{subsec:5.2}\n\nWe gathered CTI from two types of sources \u2013 publicly available internet data and data from an OSINT platform, MISP. CTI gathered from these two sources is considered as dataset~1~($DS_1$) and dataset~2~($DS_2$), respectively. \n\n\\textbf{\\textit{Gathering CTI from websites}}: We obtained a list of publicly available websites from a GitHub CTI repository~\\cite{listThreatIntel2021} which are shown in Table~\\ref{tab:CTIsource}. We selected these websites because they provided malware RSS feeds and their access was not restricted (e.g., API limit). We built web crawlers and scrapers to gather and extract the key pieces of information from the selected websites. A parser was built to parse the information and store it in a structured format (i.e., a CSV file). The gathered data had consistent tagging and was labelled with the malware used in the attack, for example, Zeus, Citadel or Ice~IX. $DS_1$~contained 4060 events and represented the data available through public CTI feeds from websites. \n\n\\textbf{\\textit{Gathering CTI from MISP}}: We selected the MISP platform as a threat intelligence platform due to its popularity amongst businesses and the abundance of labelled data. We first gathered the MISP default feeds that were written in JSON format and then built a parser to extract the key attributes from that.
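As an illustration of this parsing step, the sketch below flattens one MISP-style event (already loaded from JSON, e.g., with json.load) into per-attribute records; the inline event and the exact field names are simplified assumptions about the feed layout rather than the full MISP schema.

\\begin{verbatim}
# A minimal sketch of flattening a MISP-style event into per-IOC records.
# The inline event and field names are simplified assumptions; real MISP
# default feeds are JSON files with many more fields per event.
event = {
    'info': 'OSINT Leviathan: Espionage actor spear phishes targets',
    'threat_level_id': '3',
    'Attribute': [
        {'type': 'ip-dst', 'category': 'Network activity',
         'value': '104.27.163.228', 'timestamp': '1512413400'},
        {'type': 'domain', 'category': 'Network activity',
         'value': 'textspeier.de', 'timestamp': '1512413400'},
    ],
}

records = []
for attribute in event.get('Attribute', []):
    # Keep the event-level context together with each IOC attribute.
    records.append({
        'event': event['info'],
        'threat_level': event['threat_level_id'],
        'type': attribute['type'],
        'category': attribute['category'],
        'value': attribute['value'],
        'timestamp': attribute['timestamp'],
    })

print(records[0])
\\end{verbatim}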
$DS_2$~contained~213,736 events and represented the data available to an organisation from a dedicated threat intelligence platform.\n\n\\textbf{\\textit{Gathering additional attributes:}} External information was gathered utilizing the parsed attributes of both~$DS_1$ and~$DS_2$. For example, the common features amongst each source were IP, domain and date. Additional attributes were gathered from WhoIS data (e.g., a database query of the RFC~3912 protocol) for each IP. We used the Python cymruwhois\\footnote{https:\/\/pypi.org\/project\/cymruwhois\/} module to search each IP in the WhoIS database, which returned the IP's ASN (i.e., a unique global identifier), owner, and country location. In addition, the AlienVault forum was chosen as an external information source. We scraped the AlienVault forum updates using the Python module BeautifulSoup\\footnote{https:\/\/pypi.org\/project\/beautifulsoup4\/}. The AlienVault data, which consisted of natural language text descriptions, was searched to extract the associated event and threat.\n\n\\subsection{Building Data Processor}\n\nWe built a data processor to clean, pre-process and transform the collected data for building ML-based validators. \n\n\\textbf{\\textit{Cleaning and pre-processing:}} We used the Python sci-kit learn libraries to pre-process the attribute values~\\cite{chen2020deep, scikitlearn2021}. We first cleaned the data by removing null values and removing events with missing information. We observed missing information to be relatively infrequent, resulting in minimal information loss and a more robust model. For text values, we found two types of natural language features from the MISP data: (i)~text attributes, which are short paragraphs that describe an event in natural language and are often taken from blogs, and (ii)~comments. \n\nWe first analyzed the text and comment attributes to find a suitable processing and encoding technique. Simple processing techniques were undertaken to decrease the dimensionality and remove any uninformative words (e.g., articles and prepositions). Each piece of text was stripped of all non-alphabetical characters, as numbers and special characters can rapidly increase dimensionality and rarely contain valuable information. The text was then stripped of any non-noun or non-proper noun tokens, as nouns are the most informative part of the text (e.g., attack names, attack types and organisations). Finally, each word was lemmatized (i.e., changed to the base form of the word), so that similar words can be recognized. One of the key steps we followed was to tokenize the string values of attributes (i.e., domain, filename, hostname, URL) where patterns exist. We removed punctuation and special characters within a string to clean the data. We further split the text into small tokens based on a regular expression that tokenized a string using a given character. This separated each word within a value (e.g., value of domain or URL) and allowed a string to be tokenized. For example, we split the value of the URL in terms of \"\/\/\" and \".\". The tokenized data were then encoded as integers to create a numeric form of a feature vector.\n\n\\textbf{\\textit{Feature engineering:}} We encoded the categorical variables using one-hot encoding and label encoding. One-hot encoding considers each categorical value separately and represents each categorical value as a separate column. Label encoding represents each categorical value as a unique integer.
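To illustrate both encoding paths on a small sample, the sketch below applies label encoding and one-hot encoding to a categorical attribute, and a count vectorizer and a TFIDF vectorizer with a simple dot-based tokenizer to domain strings; the sample values and the tokenizer are assumptions for illustration only.

\\begin{verbatim}
# A minimal sketch of the categorical and text encodings described above.
# The sample attack types and domain values are illustrative only.
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

attacks = ['Phishing', 'DDoS', 'Phishing', 'SQL injection']
domains = ['textspeier.de', 'photoscape.ch', 'malware.photoscape.ch',
           'alegroup.info']

# Categorical attribute: label encoding (one integer per class) and
# one-hot encoding (one column per class).
label_encoded = LabelEncoder().fit_transform(attacks)
one_hot = pd.get_dummies(pd.Series(attacks))

# String attribute: split on '.' so each part of a domain becomes a token,
# then encode with token counts and with TFIDF weights.
def dot_tokenizer(value):
    return value.split('.')

counts = CountVectorizer(tokenizer=dot_tokenizer, token_pattern=None,
                         lowercase=False).fit_transform(domains)
tfidf = TfidfVectorizer(tokenizer=dot_tokenizer, token_pattern=None,
                        lowercase=False).fit_transform(domains)

print(label_encoded, one_hot.shape, counts.shape, tfidf.shape)
\\end{verbatim}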
Table \\ref{tab:exampleEncoding} shows an example of one-hot encoding in the first table and label encoding in the second, for three types of attacks: phishing, DDoS and SQL injection. We used the labelEncoder() method of sci-kit learn to convert the string data into numerical values. An inbuilt function from the sci-kit-learn library, standardScaler(), was used to standardize the data. The function transformed data into a normalized distribution to remove outliers from the data, allowing for building more accurate prediction models. \nThe text variables (i.e., unstructured and structured natural language) did not conform to traditional one-hot or label encoding, as one-hot or label encoding interprets the text as a whole. Hence, we used two techniques: count vectorization and TFIDF as our feature engineering approach to encode text into numerical values. Count vectorization techniques stored each tokenized word as a column with its value being the number of times it appeared in each respective document.\nTable~\\ref{tab:exampleVectorizer} shows the examples of count vectorization of two sentences. The \\textbf{TFIDF} vectorizer\\footnote{https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.feature\\_\\\\extraction.text.TfidfVectorizer.html} worked similar to the count vectorization, except rather than storing counts it stored the TFIDF value of each word. TFIDF provided a metric for how 'important' a word is within a part of the text by comparing the term's frequency in a single document to the inverse of its frequency amongst all documents.\n\n\\begin{table}[h]\n\\caption{An example of one-hot encoding and label encoding for three types of attack}\n \\centering\n \n \\begin{tabular}{|c c c|c |c p{.9cm} c|}\n \\hline\n Phishing & DDoS & SQL & & & Attack & Encoded \\\\\n & & injection & & & & attack\\\\\n \\hline\n0& 1& 0&\t& 1&\tDDoS\t& 2\\\\\n1&\t0&\t0&\t&\t2&\tPhishing&\t1\\\\\n0&\t1&\t0&\t&\t3&\tDDoS&\t2\\\\\n1&\t0&\t0&\t&\t4&\tPhishing&\t1 \\\\\n1&\t0&\t0&\t&\t5&\tPhishing&\t1 \\\\\n0&\t0&\t1&\t&\t6&\tSQL&\t3\\\\\n& & & & & injection & \\\\\n\\hline\n \n \\end{tabular}\n \\label{tab:exampleEncoding}\n \\vspace{-10pt}\n\\end{table}\n\nFor the one-hot encoding setup, we used a simple count vectorizer, and for the label encoding setup we used a TFIDF vectorizer. It should be noted that whilst the dataset included text variables, the vast majority did not follow a natural language convention (e.g., domain or filename). Hence, more advanced NLP techniques, such as word embedding, cannot be accurately applied. The feature engineering schemes were saved for runtime use with the model building and prediction phase. Appendix~\\ref{app:B} summarizes the pre-processing techniques that we followed for different attributes.\n\n\\begin{table*}[!hb]\n\\caption{Count vectors for two sentences, S1: \"Fireball is malware\" and S2: \"Malware is any program that is harmful\"}\n \\centering\n \\begin{tabular}{cccccccc}\n \\hline\n &fireball &is &malware &any &program &that &harmful\\\\\n \\hline\n S1 & 1 & 1 & 1 & 0 & 0 & 0 & 0\\\\\n \\hline\n S2 & 0 & 2 & 1 & 1 & 1 & 1 & 1 \\\\\n \\hline\n \\end{tabular}\n \\label{tab:exampleVectorizer}\n \\vspace{-10pt}\n\\end{table*}\n\n\\subsection{Building the Validation Model} \\label{sub:buildingValidateModel}\nWe built prediction models following the traditional ML pipeline (i.e., selecting ML algorithms, building prediction models, performing hyperparameter tuning and evaluating the built model). 
We designed a model builder to build prediction models for various attribute sets $AS$ ($ob_{attrib}$ and $un_{attrib}$). We selected eight commonly used classification algorithms~\\cite{caruana2006}: Decision Tree (DT), Random Forest (RF), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), Ridge Classifier (RID), Na\u00efve Bayes (BAY) and eXtreme Gradient Boost (XGB)~\\cite{chen2016} to cover a wide range of classifier types. Appendix~\\ref{app:B} summarizes the ML algorithms that we considered to build the PoC system. Bayesian Optimization was used to automatically tune each model~\\cite{snoek2012}. We used a straightforward train-test split for evaluation, with~30\\% of the dataset held out for testing. In a real-world setting, the training data would be selected by the threat intelligence team to ensure data quality. The built model was optimised by performing hyperparameter tuning. The Python module sci-kit learn was used to build the prediction models, as it is one of the most popular and widely used libraries for building prediction models~\\cite{chen2020deep, scikitlearn2021, sabir2020machine}. \n\n\\subsection{Developing the Orchestrator}\n\nWe designed and implemented a Python script to coordinate the data collector and model builder. The script worked as an orchestrator that automated the process from gathering a SOC's requirements to predicting the outputs, that is, validating alerts. For example, we took the SOC's requirements as an attribute set, $<ob_{attrib}, un_{attrib}>$, and the confidence score (a value between 0 and 1). The output of the script was the value of $un_{attrib}$ and the F1-score. In this process, the script first checked whether a model was available to predict $un_{attrib}$ with $ob_{attrib}$. If a model was available, it then called the data processor to process $ob_{attrib}$ and predict the value of $un_{attrib}$. \n\nIf a model was not found, it checked the availability of CTI with attributes $<ob_{attrib}, un_{attrib}>$. If CTI was available, the model builder was invoked and models were built following the process of model building discussed in the previous section. Here, the orchestrator used the saved feature engineering approaches and algorithms to train the model and then selected the model with the best F1-score as the optimal model. The script then checked the value of the F1-score. If the F1-score was lower than the confidence score, it requested the data science team to build the model and informed the security team that no model was available. Otherwise, it notified the orchestrator about model availability, the next steps of data processing and predicting $un_{attrib}$ were followed, and the value of $un_{attrib}$ was returned to the security team.\n\nIf the required CTI data was not available, the orchestrator notified the security team about CTI unavailability. For example, the two CTI datasets we used did not have any vulnerability descriptions or values. If we provided a vulnerability description as input and requested prediction of the severity, the orchestrator would return that no data was available. \n\n\\subsection{Evaluation Metrics} \\label{sub:evaluationMetrics}\n\nEvaluation metrics are needed to measure the success of a prediction model in validating security alerts and the success of building prediction models at runtime. These determine the effectiveness and efficiency of SmartValidator. Accuracy, precision, recall and F1-score are the four commonly accepted evaluation metrics for evaluating a prediction model's performance~\\cite{chen2020deep, sabir2020machine}.
The correct and incorrect predictions are further calculated using the number of (i)~True Positives (TP), which refer to correct predictions of an attribute's label, (ii)~False Positives (FP), which indicate incorrect predictions of an attribute's label, (iii)~True Negatives (TN), which refer to correct predictions that a threat does not have a particular label, and (iv)~False Negatives (FN), which indicate incorrect predictions that a threat does not have a particular label. For example, if a model classifies a non-malicious IP address as malicious, it is counted as a false positive. If it classifies a malicious IP address as malicious, it is counted as a true positive. A true negative is when a non-malicious IP is not classified as malicious. A false negative is when a malicious IP is not classified as such. Equation \\ref{equ:accuracy} to equation \\ref{equ:f1score} provide details of how accuracy, recall, precision and F1-score are calculated using TP, FP, TN and FN.\n\n\\vspace{-10pt}\n\n\\begin{equation}\\label{equ:accuracy}\n \\text{Accuracy} = \\frac{TP + TN}{TP+ TN+ FP+ FN }\n\\end{equation} \n\n\\begin{equation}\\label{equ:recall}\n \\text{Recall}= \\frac{TP}{TP + FN} \n\\end{equation}\n\n \\begin{equation}\\label{equ:precision}\n \\text{Precision}= \\frac{TP}{TP + FP} \n \\end{equation} \n\n\\begin{equation} \\label{equ:f1score}\n \\text{F1-score}= \\frac{2 \\times(Recall \\times Precision )}{Recall + Precision} \n\\end{equation}\n\nWe assessed the effectiveness of the prediction models in validating security alerts with the F1-score because accuracy (equation \\ref{equ:accuracy}) is not always a useful metric on its own. It does not account for class imbalance in the data. The recall (equation \\ref{equ:recall}) is a measure of robustness; it shows whether a model is failing to predict the relevant samples, e.g., failing to classify IP maliciousness correctly. It is important for the PoC system to have high recall to ensure that no malicious events are misinterpreted or ignored. Precision is the model's ability to accurately predict the positive class (malicious events), shown in equation \\ref{equ:precision}. A low value for precision indicates a high number of false positives. Thus, it is important to achieve high precision, as low precision would introduce the need for human validation of the output of SmartValidator. The F1-score can be considered the best metric for an overall evaluation, as it considers both precision and recall (equation \\ref{equ:f1score}) together and evaluates each class separately. The F1-score does not have any unit, as it is the harmonic mean of precision and recall, which are themselves unitless.\n\nWe defined a confidence score between 0 and 1 to be used by the security team as a threshold value for prediction models. In our PoC, we compared the confidence score with the F1-score of the prediction model. If a model had a lower F1-score than the confidence score, the PoC discarded that model.\n\nWe defined computation time, as shown in equation \\ref{equ:comptime}, to evaluate the efficiency of building prediction models based on a SOC's needs. Computation time is the summation of the training time ($train_{time}$) and prediction time ($predict_{time}$).
Training time is the time required to build a model and prediction time is the time required to predict unknown attributes using an optimal model.\n\n\\vspace{-15pt}\n\n\\begin{equation} \\label{equ:comptime}\n computation_{time} = train_{time} + predict_{time}\n\\end{equation}\n\n\\vspace{-10pt}\n\n\\section{Evaluation and Results}\\label{Sec:Eval}\n\nIn this section, we present the results of the developed PoC of SmartValidator to show the effectiveness and efficiency of a dynamic ML-based validator to automate and assist the validation of security alerts with changing threat data and SOC needs.\n\n\\begin{table*}[thb]\n\\caption{Performance (F1-score) of different models for prediction of \\textit{attack} based on $ob^7_{attrib}$ using $DS_1$ and prediction of \\textit{threat type}, \\textit{threat level}, \\textit{name} and \\textit{event} based on $ob^{14}_{attrib}$ using $DS_2$}\n \\centering\n \n \\begin{tabular}{|l|c|c|c|c|c|}\n\n \\hline\n \n \\multirow{2}{*}{Model} & $DS_1\\ ob^7_{attrib}$ & \\multicolumn{4}{|c|}{$DS_2\\ ob^{14}_{attrib}$} \\\\ [1ex]\n \\cline{2-6}\n \n & attack & threat type & threat level & name & event \\\\ \n \n \\hline\nDT+LE & 0.719 & 0.941 & 0.998 & 0.938 & \\textbf{0.995}\\\\\nRF+LE & 0.274 & 0.727 & 0.76 & 0.689 & 0.362\\\\\nKNN+LE & 0.594 & 0.912 & 0.986 & 0.917 & 0.902\\\\\nGBAY+LE & 0.312 & 0.15 & 0.343 & 0.102 & 0.077\\\\\nRID+LE & 0.627 & 0.055 & 0.082 & 0.105 & 0.001\\\\\nSVM+LE & 0.401 & 0.135 & 0.654 & 0.295 & 0.283\\\\\nMLP+LE & 0.546 & 0.762 & - & 0.878 & -\\\\\nXGB+LE & \\textbf{0.787} & \\textbf{0.998} & \\textbf{0.999} & \\textbf{0.997} & -\\\\\n\\hline\nDT+OHE & 0.587 & 0.917 & 0.988 & 0.905 & 0.926\\\\\nRF+OHE & 0.501 & 0.916 & 0.948 & 0.912 & 0.87\\\\\nKNN+OHE & 0.351 & \\textbf{0.998} & \\textbf{0.998} & \\textbf{0.999} & \\textbf{0.996}\\\\\nGBAY+OHE & 0.382 & 0.677 & - & 0.465 & 0.241\\\\\nRID+OHE & 0.763 & 0.121 & - & 0.007 & - \\\\\nSVM+OHE & 0.322 & 0.051 & 0.126 & 0.001 & -\\\\\nMLP+OHE & 0.762 & 0.061 & - & 0.079 & -\\\\\nXGB+OHE & \\textbf{0.784} & - & - & 0.996 & -\\\\\n \\hline\n \\end{tabular}\n \\label{tab:performanceModel}\n \\vspace{-10pt}\n\\end{table*}\n\n\\subsection{Evaluation of Effectiveness}\n\nWe evaluated the effectiveness of prediction models to answer RQ1, which is \\textit{\"How effective is machine learning in classifying CTI for SmartValidator?\"}. \nSpecifically, we used two datasets $DS_1$ and $DS_2$, as described in Section \\ref{subsec:5.2}, for predicting the five unknown attributes ($un_{attrib}$). We collected the datasets such that each dataset was confirmed to have at least one observed value. Finally, all experiments were conducted on the collected datasets and different combinations of attribute sets. Based on self-defined SOC requirements, CTI datasets were selected and models were built. Optimal models were selected based on their effectiveness for a particular attribute set. Effectiveness is measured using the metrics described in Section \\ref{sub:evaluationMetrics}. We investigated the performance of the optimal models for classifying the threat data.\n\nWe found that 51 optimal models were returned by the PoC system based on the given requirements.
Among them, seven of the models were built using $DS_1$ to predict \\textit{attack} that used $ob^1_{attrib}$ to $ob^7_{attrib}$ and the other~44 models were built using $DS_2$ to predict the four other unknown attributes based on $ob^8_{attrib}$ to $ob^{18}_{attrib}$.\n\nThe performance of different ML algorithms and encoding methods is summarized in Table~\\ref{tab:performanceModel} for the two observed attribute sets \u2013 $ob^7_{attrib}$ and $ob^{14}_{attrib}$. The results show that XGBoost (XGB) with Label Encoding (LE) achieved a near perfect F1-score while using $DS_2$. We further observed that while using One Hot Encoding (OHE), K-Nearest Neighbors (KNN) performed better than XGB. However, considering the time and memory constraints, XGB failed to train a model to predict \"\\textit{event}\" when LE was used as an encoding method. Some of the model building processes failed as they could not finish within the allocated memory and time limits, which were 24 hours and 10GB for $DS_1$ and 48 hours and 100GB for $DS_2$. These limits were set and tested to investigate and simulate computational resource limits. \n\nFigure \\ref{fig:Effective} shows the evaluation score (F1-score) for different datasets, labels and encoding methods. Figure \\ref{fig:Effective(a)} and Figure \\ref{fig:Effective(b)} show the comparison of the different classifiers when trained on $DS_1$ and $DS_2$, respectively. We first observe that ML algorithms generally performed better using $DS_2$ data, with the exception of the Ridge classifier. This finding demonstrates the importance of CTI data and information quality. Prediction models require a large number of training examples to properly learn trends and patterns. We recommend utilizing data from CTI platforms such as MISP, as these platforms aggregate a large quantity of verified information from a variety of sources.\n\nFigure \\ref{fig:Effective(c)} shows the comparative classifier performance across both $DS_1$ and $DS_2$. We observe a large range in classifier performance. The variance in best classification algorithms further motivates the need for automated model building and selection. Some ML algorithms were not as effective for this classification task. Hence, if a human were to repetitively use one algorithm to build models for different sets of attributes, the results would not be effective for all models. It would also be time consuming to validate every model. The results demonstrate that, on average, the XGB classifier performed extremely well, but KNN, as well as other tree-based classifiers (DT and RF), also performed well. MLP classifiers also appear to perform well.
As MLP utilized a simplistic artificial neural network, this potentially motivates the investigation of more sophisticated deep learning methods for future work \\cite{FERRAG2020102419}.\n\n\\begin{figure*}\n\\centering\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESDS1.pdf}\\label{fig:Effective(a)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESDS2.pdf}\\label{fig:Effective(b)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/EvaluationScore.pdf}\\label{fig:Effective(c)}}\n \\hfil\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESlabel.pdf}\\label{fig:Effective(d)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESEncoding.pdf}\\label{fig:Effective(e)}}\n \\subfloat[]{\\includegraphics[width=0.3\\textwidth]{figs\/ESOptimalModel.pdf}\\label{fig:Effective(f)}}\n \n \\caption{Comparative analysis of different classifiers' performance (F1-score) while (a) using dataset 1, $DS_1$, (b) using dataset 2, $DS_2$, and (c) using both datasets, (d) predicting different labels and (e) using different encoding methods. (f) number of optimal models for different evaluation scores (F1-score)}\n \\label{fig:Effective}\n \\vspace{-15pt}\n\\end{figure*}\n\n\\begin{table}[]\n\\caption{Optimal classifier and encoding method with evaluation score (F1-score) for observed attributes $ob^1_{attrib}$ to $ob^7_{attrib}$ to predict \\textit{attack} using $DS_1$}\n\n \\centering\n \n \\begin{tabular}{|c|c|c|}\n\n \\hline\n \n \\multirow{2}{*}{Observed attributes} & \n \\multicolumn{2}{|c|}{attack} \\\\\n \\cline{2-3}\n \n & Optimal model & F1-score \\\\ \n \n \\hline\n$ob^1_{attrib}$ & KNN + LE & 0.572 \\\\[1ex]\n$ob^2_{attrib}$ & RID + OHE & 0.537\\\\[1ex]\n$ob^3_{attrib}$ & SVM + OHE & 0.623\\\\[1ex]\n$ob^4_{attrib}$ & XGB + OHE & 0.728\\\\[1ex]\n$ob^5_{attrib}$ & RID + OHE & 0.637\\\\[1ex]\n$ob^6_{attrib}$ & RID + OHE & 0.772\\\\[1ex]\n$ob^7_{attrib}$ & XGB + LE & 0.787\\\\[1ex]\n\n \\hline\n \\end{tabular}\n \\label{tab:optimalModelDs1}\n \\vspace{-10pt}\n\\end{table}\n\nWe further performed comparative analysis for predicting the five unknown attributes in Figure \\ref{fig:Effective(d)}. Models generally performed well for all prediction tasks, but performed best when classifying \\textit{threat\\_level}. This is potentially because \\textit{threat\\_level} has the lowest dimensionality of the unknown attributes, and hence ample training data for each class. Correspondingly, the \\textit{event} attribute had lower performance due to its high dimensionality. $DS_1$ data was used to predict \\textit{attack}. Performance is noticeably worse for this attribute, with a mean evaluation score of approximately 0.5. This further emphasizes the importance of CTI quality.\n\nThe relative effectiveness of the two encoding strategies is analyzed and shown in Figure \\ref{fig:Effective(e)}. These two encoding strategies also provide a comparison of the two NLP techniques that we considered, which are count vectorization and TFIDF vectorization. One-hot encoding appeared to typically outperform label encoding, but the results are relatively similar. This reflects the similar performance of count vectorization and TFIDF, where the count vectorizer performed slightly better, as seen in the one-hot encoding results. Similar to the previous observation, we found that depending on the type of attributes and algorithms, the performance of the count vectorizer and TFIDF varies.
However, classification with $DS_2$, irrespective of the encoding, produced effective machine learning models \\cite{Helge2020, chen2020deep, islam2019multi, sabir2020machine}. \n\nTable \\ref{tab:optimalModelDs1} shows the optimal models for predicting \\textit{attack} using $ob^1_{attrib}$ to $ob^7_{attrib}$, which were built using $DS_1$. Table \\ref{tab:optimalModelDs2} shows the optimal models for predicting \\textit{threat type}, \\textit{threat level}, \\textit{name} and \\textit{event} based on $ob^8_{attrib}$ to $ob^{18}_{attrib}$, which were built using $DS_2$. Table \\ref{tab:optimalModelDs1} and Table \\ref{tab:optimalModelDs2} demonstrate that for different attribute sets different classifiers were seen to perform better. Thus, we cannot rely on a single algorithm. Table \\ref{tab:optimalModelDs1} shows that the IP-extracted features in $ob^3_{attrib}$ (i.e., IP, ASN, owner, country) performed the best of the three singular feature sets. $ob^5_{attrib}$, which combined the domain and IP features, was relatively similar in terms of predictive capability. However, by themselves the IP features, that is $ob^3_{attrib}$, could only achieve an F1-score of 0.623. Using more of the available features increased the effectiveness of the prediction model, with the largest feature set achieving the best F1-score of 0.787. The addition of the domain feature to the IP feature set, which formed $ob^5_{attrib}$, did not significantly increase the effectiveness of the prediction model. This is likely due to the existing correlation between the IP and domain features. However, the date feature did appear to noticeably improve the F1-score. The large difference between the best and worst models' F1-scores highlights the importance of proper feature engineering and model selection.\n\nAnalysing the results of Table \\ref{tab:optimalModelDs2}, we found that the optimal models generally performed very well, obtaining extremely good evaluation scores (F1-score). Several of the optimal models achieved a near perfect score on the testing dataset. The models also performed noticeably better than those trained on $DS_1$, further showcasing the need for good features and a large dataset. The models which used time-based features performed noticeably better than similar feature sets which did not. The time-based features likely had such strong predictive power as only a few MISP events were recorded close together. However, models without the temporal features were still able to achieve an F1-score of over 0.8. The file-based features also appeared to have weak predictive power as their inclusion or exclusion appeared to have very little impact on the evaluation score. The results of Table \\ref{tab:optimalModelDs2} further reflect that the \\textit{event} label was harder to predict than the other three labels. This was likely because \\textit{event} was the label with the highest number of classes. Interestingly, tree-based classifiers seemed to perform better for the \\textit{event} label. However, it should be noted that a lot of the more sophisticated models timed out for the \\textit{event} label, so their results were not recorded. It can also be seen that the \\textit{threat level} was the easiest to predict, likely because it had the lowest number of classes.
\begin{table*}[h]
\caption{Optimal models and encoding methods with evaluation score (F1-score) for attribute sets $ob^8_{attrib}$ to $ob^{18}_{attrib}$ to predict threat type, threat level, name and event}

 \centering

 \begin{tabular}{|p{1.2cm}|l|p{.9cm}|l|p{.9cm}|l|p{.9cm}|l|p{.9cm}|}

 \hline

 \multirow{2}{1.2cm}{Observed attributes} &
 \multicolumn{2}{c|}{threat type} & \multicolumn{2}{c|}{threat level} & \multicolumn{2}{c|}{name} &\multicolumn{2}{c|}{event}\\
 \cline{2-9}

 & Optimal model & F1-score & Optimal model & F1-score & Optimal model & F1-score & Optimal model & F1-score \\
 \hline
$ob^8_{attrib}$ & MLP + OHE & 0.648 & XGB + LE & 0.68 & RID + OHE & 0.593 & DT + LE & 0.275\\[1ex]
$ob^9_{attrib}$ & SVM + OHE & 0.854 & XGB + LE & 0.84 & SVM + OHE & 0.809 & SVM + OHE & 0.735\\[1ex]
$ob^{10}_{attrib}$ & SVM + OHE & 0.859 & SVM + OHE & 0.915 & SVM + OHE & 0.864 & SVM + OHE & 0.763\\[1ex]
$ob^{11}_{attrib}$ & KNN + OHE & 0.998 & XGB + LE & 0.999 & KNN + OHE & 0.999 & DT + LE & 0.898\\[1ex]
$ob^{12}_{attrib}$ & KNN + OHE & 0.997 & KNN + OHE & 0.999 & KNN + OHE & 0.998 & DT + LE & 0.994\\[1ex]
$ob^{13}_{attrib}$ & KNN + OHE & 0.998 & XGB + LE & 0.999 & KNN + OHE & 0.998 & KNN + OHE & 0.995\\[1ex]
$ob^{14}_{attrib}$ & KNN + OHE & 0.998 & XGB + LE & 0.999 & KNN + OHE & 0.999 & KNN + OHE & 0.999\\[1ex]
$ob^{15}_{attrib}$ & SVM + OHE & 0.873 & SVM + OHE & 0.862 & SVM + OHE & 0.813 & SVM + OHE & 0.725\\[1ex]
$ob^{16}_{attrib}$ & XGB + LE & 0.998 & XGB + LE & 1 & KNN + OHE & 0.998 & DT + LE & 0.996\\[1ex]
$ob^{17}_{attrib}$ & XGB + LE & 0.997 & XGB + LE & 0.999 & KNN + OHE & 0.999 & RF + OHE & 0.865\\[1ex]
$ob^{18}_{attrib}$ & XGB + LE & 0.998 & XGB + LE & 1 & KNN + OHE & 0.998 & DT + LE & 0.996\\[1ex]

 \hline
 \end{tabular}
 \label{tab:optimalModelDs2}
 \vspace{-10pt}
\end{table*}



We further analyzed the classification confidence of the models of SmartValidator on the collected data. The evaluation results are visualized in Figure \ref{fig:Effective(f)}. We considered different confidence scores (0.6--0.9), as the preferred confidence score varies with the attribute sets. Figure \ref{fig:Effective(f)} shows that at runtime, with a confidence score of 0.8, 80\% of the models that were built based on $DS_2$ fit the needs of a SOC. These models were built based on the saved feature engineering and ML algorithms. Figure \ref{fig:Effective(f)} gives the security team an overview of whether they can rely on a dataset when its performance is not up to their requirements. It shows that as the confidence score increases, the number of models above that score decreases. It can clearly be seen that the models built on $DS_2$ classified the attributes with a much higher confidence. We found that one of the key reasons behind this is that $DS_1$ had comparatively fewer data elements than $DS_2$. Thus, by capturing the variation in the data and correlating them, $DS_2$ provided better results than $DS_1$. The results show that approximately 84\% of the 51 optimal models had an F1-score (or confidence score) above 0.72 and 75\% of the models had an F1-score above 0.8. Most of the models that were built with data gathered from CTI platforms can effectively predict $un_{attrib}$ based on $ob_{attrib}$ with a higher F1-score than the models that were built with CTI gathered from public websites.
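As an illustration of how such a confidence score could be enforced at runtime, the sketch below filters the optimal models by their evaluation score before they are handed to the validation layer. The scores are a small subset taken from Tables \ref{tab:optimalModelDs1} and \ref{tab:optimalModelDs2}; the dictionary and function names are hypothetical.

\begin{verbatim}
# Illustrative runtime check: only models whose evaluation score meets the
# SOC's confidence threshold are used for automated alert validation.
OPTIMAL_MODEL_SCORES = {          # subset of the optimal-model tables
    ("ob3",  "attack"):       0.623,
    ("ob9",  "event"):        0.735,
    ("ob11", "threat_level"): 0.999,
    ("ob16", "name"):         0.998,
}

def usable_models(scores, confidence):
    """Keep only the models whose F1-score is at or above the threshold."""
    return {key: f1 for key, f1 in scores.items() if f1 >= confidence}

for threshold in (0.6, 0.7, 0.8, 0.9):
    kept = usable_models(OPTIMAL_MODEL_SCORES, threshold)
    print(f"threshold {threshold}: "
          f"{len(kept)}/{len(OPTIMAL_MODEL_SCORES)} models usable")
\end{verbatim}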
In summary, the MISP dataset (i.e., $DS_2$) was found to be a high-quality dataset that worked well with automated classification and thus validation of alerts. Hence, using attributes that are representative of possible threat data, prediction models can be built to effectively validate alerts with a substantial degree of accuracy, precision and recall. The results of $DS_2$ reflect that ML-based validation models can be used to effectively validate alerts given high-quality CTI like $DS_2$. Model choice and alternatives are important steps in finding the optimal models, as the PoC returned different models for different attribute sets.

\vspace{-5pt}
\subsection{Evaluation of Efficiency} \label{subsec:6.2}

To demonstrate the efficiency of SmartValidator, we answer RQ2, that is, ``\textit{How efficient is SmartValidator in selecting and building prediction models on runtime over pre-building prediction models?}''.

In SmartValidator, we propose to build the models at run time based on a SOC's requirements instead of pre-building all possible models. We considered the time to build all possible model combinations of the eight aforementioned classifier algorithms as a baseline to compare the efficiency of SmartValidator. We observed that it was infeasible to pre-build models for every possible combination of features. For $DS_1$, there were 62 possible feature combinations, and for $DS_2$ there were 8190. This would increase the number of experiments for $DS_2$ to 524,160. For predicting the five unknown attributes with the 18 attribute sets, we would only need to run 1440 experiments; only 0.26\% of the total experiments.
Thus, to measure the efficiency, that is, the computation time, of building models based on a SOC's requirements, the PoC ran a total of 816 experiments: 112 experiments for $DS_1$ (8 ML algorithms $\times$ 7 input attribute sets $\times$ 1 output attribute $\times$ 2 encoding methods) and 704 experiments for $DS_2$ (8 ML algorithms $\times$ 11 input attribute sets $\times$ 4 output attributes $\times$ 2 encoding methods) to test all combinations of the 18 observed attribute sets, prediction models, encoding methods and five unknown attributes (i.e., classification labels).

We considered this total time as a baseline to evaluate the efficiency of building the model at runtime. Here we attempted to simulate the resource limitations of model construction in a real-world environment. We considered $DS_1$ a lightweight dataset and assigned it restrictions of 24 hours of runtime and 10GB of memory, whereas $DS_2$ was a heavyweight dataset and was assigned a 48-hour runtime limit and 100GB of memory. Any experiment that exceeded this run time or consumed too much memory was aborted, as it was deemed impractical due to organisations' strict resources and fast response requirements \cite{islam2019multi, sonicwall2020}. In this results section, we report the efficiency in terms of time.

For $DS_1$, all 112 experiments completed successfully. However, for $DS_2$, 169 of the 704 experiments failed to finish whilst enforcing our experimental setup. The classifiers that most commonly timed out were MLP and XGB, as these models had significantly larger training times.
For these failed jobs, 149 out of 169 jobs were either for the \\textit{event} or \\textit{threat level} label, as these datasets had many more valid entries, and thus also took more time and memory to train. Similarly, 116 of the failed jobs used one-hot encoding, as this encoding method was much less efficient than label encoding, due to every possible value adding a dimension to the encoded input. However, 53 of the label encoding experiments also failed due to the size of the text features. These features were encoded with very high dimensionality due to the lack of a natural language convention. To investigate this issue further in our future work, we plan to investigate efficient encoding methods through vocabulary size and dimensionality reduction. \n\nTable \\ref{tab:timeDs1} and Table \\ref{tab:timeDs2} show the training time and prediction time of the optimal models that were built based on $DS_1$ and $DS_2$ respectively. The experimental results show that it is extremely inefficient to pre-build a large number of prediction models. For $DS_1$ the total training time was 61064 seconds (0.7 days), and for $DS_2$ the total training time was 7010279 seconds (81.1 days). The prediction time of a model was significantly faster than the training time, which further encouraged the use of validation models. On average, models made predictions in 0.6\\% of the training time for $DS_1$ (0.09 seconds), and 2.9\\% for $DS_2$ (17.29 seconds). \n\n\\begin{table}[h]\n\\caption{Training time and prediction time of optimal models in \\textbf{seconds} for attribute sets $ob^1_{attrib}$ to $ob^7_{attrib}$ to predict \\textit{attack} using $DS_1$}\n\n \\centering\n \n \\begin{tabular}{|c|l|c|c|}\n\n \\hline\n \n \\multirow{2}{*}{Model} & \n \\multicolumn{3}{|c|}{attack} \\\\\n \\cline{2-4}\n \n & Optimal model & Train time & Predict time \\\\ \n \n \\hline\n$ob^1_{attrib}$ &\tKNN + LE &\t758&\t0.676 \\\\[1ex]\n$ob^2_{attrib}$ &\tRID + OHE &\t0.90&\t0.002\\\\[1ex]\n$ob^3_{attrib}$ &\tSVM + OHE &\t482\t&0.001\\\\[1ex]\n$ob^4_{attrib}$ &\tXGB + OHE &\t2597&\t0.196\\\\[1ex]\n$ob^5_{attrib}$ &\tRID + OHE &\t5.6\t&0.001\\\\[1ex]\n$ob^6_{attrib}$ &\tRID + OHE &\t2.2\t&0.001\\\\[1ex]\n$ob^7_{attrib}$ &\tXGB + LE &\t1422.9&\t0.09\\\\[1ex]\n \\hline\n \\end{tabular}\n \\label{tab:timeDs1}\n \\vspace{-10pt}\n\\end{table}\n\n\\begin{table*}[h]\n\\caption{Training time and prediction time of optimal models in \\textbf{seconds} that were built to predict threat type, threat level, name and event based on attributes $ob^8_{attrib}$ to $ob^{18}_{attrib}$ using $DS_2$ }\n\n \\centering\n \n \\resizebox{\\textwidth}{!}{%\n \n \\begin{tabular}{|p{1.25cm}|p{1.45cm}|p{1cm}|p{.8cm}|p{1.45cm}|p{1cm}|p{.8cm}|p{1.45cm}|p{1cm}|p{.8cm}|p{1.45cm}|p{.8cm}|p{.8cm}|} \n \\hline\n \n \\multirow{2}{1.3cm}{Observed attributes} & \n \\multicolumn{3}{c|}{threat type} & \\multicolumn{3}{c|}{threat level} & \\multicolumn{3}{c|}{name} &\\multicolumn{3}{c|}{event}\\\\\n \\cline{2-13}\n \n & Optimal model &\tTrain time &\tPredict time\t& Optimal model &\ttrain Time&\tPredict time &\tOptimal model &\ttrain time &\tPredict time &\tOptimal model &\tTrain time&\tPredict time \\\\ \n \\hline\n$ob^8_{attrib}$ & \tMLP+OHE& \t166623& \t0.18& \tXGB+LE& \t111788& \t56.86& \tRID+OHE\t& 10959\t& 0.036& \tDT+LE& \t289.87& \t0.199\\\\[1ex]\n$ob^9_{attrib}$& \tSVM+OHE& \t1141.1\t& 0.031& \tXGB+LE& \t144473& \t139.2& \tSVM+OHE& \t925.37& \t0.016& \tSVM+OHE& \t37846\t& 0.686\\\\[1ex]\n$ob^{10}_{attrib}$& \tSVM+OHE &\t1199.66 & \t0.024& \tSVM+OHE& \t804.92& 
\t0.006& \tSVM+OHE\t& 994.43& \t0.014& \tSVM+OHE& \t27004& \t0.657\\\\[1ex]\n$ob^{11}_{attrib}$\t& KNN+OHE\t& 2599.72\t& 23.1& \tXGB+LE& \t19190\t& 3.115& \tKNN+OHE& \t984.94& \t5.98& \tDT+LE\t& 371.38& \t0.204\\\\[1ex]\n$ob^{12}_{attrib}$& \tKNN+OHE& \t2592.79& \t22.11& \tKNN+OHE& \t16871& \t176.6& \tKNN+OHE& \t2795.24\t& 14.56\t& DT+LE& \t534.94& \t0.197\\\\[1ex]\n$ob^{13}_{attrib}$\t& KNN+OHE\t& 3177.3& \t32.04& \tXGB+LE\t& 27067& \t4.296& \tKNN+OHE& \t1178.39& \t9.311& \tKNN+OHE& \t18065& \t259.95\\\\[1ex]\n$ob^{14}_{attrib}$\t& KNN+OHE& \t2535.45& \t22.09& \tXGB+LE& \t16036& \t2.556& \tKNN+OHE& \t951.54& \t6.713& \tKNN+OHE& \t16731& \t224.98\\\\[1ex]\n$ob^{15}_{attrib}$\t& SVM+OHE\t& 1081.05\t& 0.015& \tSVM+OHE& \t739.58& \t0.004& \tSVM+OHE& \t1019.56& \t0.007& \tSVM+OHE& \t21275& \t0.369\\\\[1ex]\n$ob^{16}_{attrib}$\t& XGB+LE& \t10552.3& \t21.38\t& XGB+LE& \t4033.701& \t1.913& \tKNN+OHE& \t899.12&\t5.679&\tDT+LE&\t102.35& \t0.213\\\\[1ex]\n$ob^{17}_{attrib}$ & \tXGB+LE& \t19602.7& \t19.32& \tXGB+LE& \t13950.9& \t3.938& \tKNN+OHE& \t1005.59& \t6.64& \tRF+OHE& \t7036.7& \t15.78\\\\[1ex]\n$ob^{18}_{attrib}$\t& XGB+LE& \t14257.7& \t18.57& \tXGB+LE& \t8569.79\t& 3.175\t& KNN+OHE\t& 949.95& \t5.611& \tDT+LE\t& 142.76& \t0.22\\\\[1ex]\n \n \\hline\n\n \\hline\n \\end{tabular}\n \\label{tab:timeDs2}\n \\vspace{-10pt}\n }\n\\end{table*}\n\n\\begin{figure*}\n\\centering\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/timeDS1.pdf}\\label{fig:efficiency(a)}}\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/timeDS2.pdf}\\label{fig:efficiency(b)}}\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/time.pdf}\\label{fig:efficiency(c)}}\n \\hfil\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/TimeLabel.pdf}\\label{fig:efficiency(d)}}\n \\subfloat[]{\\includegraphics[width=0.32\\textwidth]{figs\/timeEncoding.pdf}\\label{fig:efficiency(e)}}\n \n \\caption{Comparative analysis of time in \\textbf{seconds} required to train different ML based validation models with (a) dataset 1, (b) dataset 2, (c) both datasets, (d) different labels and (e) encoding methods}\n \\label{fig:Efficiency}\n \\vspace{-15pt}\n\\end{figure*}\n\nFigure \\ref{fig:Efficiency} shows the logarithmic distribution of the training time for different datasets, labels and encoding methods. The logarithmic distribution of training times for $DS_1$, $DS_2$ and both $DS_1$ and $DS_2$ is shown in Figure \\ref{fig:efficiency(a)}, Figure \\ref{fig:efficiency(b)} and Figure \\ref{fig:efficiency(c)}, respectively. It should be noted that Figure \\ref{fig:Efficiency} did not consider the run time of experiments which were timed out. As shown in Figure \\ref{fig:Efficiency}, the run times for more intensive models are skewed to the left. Noticeably, the Na\u00efve Bayes (GBAY) classifiers were trained near instantaneously, as these models did not require heavy fitting to the data. DT similarly was trained faster for both datasets, due to the simplicity of this model. Figure \\ref{fig:efficiency(a)} shows for $DS_1$, SVM, KNN and MLP required an average of 10-15 minutes to train. However, the XGB classifier took significantly the longest time to train with a median value of 36 minutes. For $DS_2$, the average overall training time was 217 minutes (shown in Figure \\ref{fig:efficiency(b)}) which was significantly larger than the average overall training time of 9 minutes for $DS_1$, due to the substantial dataset size increase. However, XGB had a significantly larger training time of over 8 hours. 
We observed that the runtime between RID and SVM was quite different even though both are linear classifiers (Figure \ref{fig:efficiency(c)}). The SVM classifier took an average of 30 minutes to train on $DS_2$ due to hyperparameter optimization, whereas the RID classifier took an average of 10 seconds, as it did not require any significant hyperparameter tuning. These observations highlight the importance of SOC requirements, as we can see a trade-off between model performance and training time. XGB is the best performing model on average, but also exhibits the largest training time. Hence, SOC analysts would need to weigh model effectiveness against efficiency.

Figure \ref{fig:efficiency(d)} displays the training time of the completed experiments for each predicted attribute. Models targeted towards predicting the \textit{attack} attribute only took an average training time of 9 minutes, as they were trained using the much smaller $DS_1$ dataset in comparison to the other attributes, which were trained using $DS_2$. Models took an average of 2 hours to train for the \textit{name} attribute, in comparison to \textit{threat\_level}, \textit{threat\_type} and \textit{event}, which took a mean time of around 4.5 hours. This could be because the training set was much smaller for the \textit{name} attribute, as not as many CTI entries were assigned such information. Similarly, it should be noted that a large portion of the \textit{event} and \textit{threat\_level} experiments timed out, as the training set was larger for these attributes due to more valid entries.

Figure \ref{fig:efficiency(e)} reflects that the training time did not significantly differ between encoding methods. This is because the major encoding dimensionality came from the domain and filename features, which were treated as text attributes and were thus only one-hot encoded in our experiments. However, one-hot encoding usually exhibited larger training times, as it has much higher dimensionality and is thus less efficient. For $DS_1$, one-hot experiments took an extra 4.6 minutes on average (11.3 minutes vs 6.8 minutes). For $DS_2$, one-hot experiments took an extra 18.9 minutes on average (227.7 minutes vs 208.8 minutes).

Traditionally, prediction models were built by experts (i.e., data scientists) who have the knowledge of ML technologies and pipelines needed to automatically validate the alerts. We observed that, for changing SOC requirements, interaction or collaboration was required between the security team and the data science team, where the security team specified the requirements and requested the models they needed. If the data required by the data science team was not available, they needed to request it from the threat intelligence team, who gathered the requested information and updated the relevant list of information.
Hence, the model required redesigning, and further actions needed to be performed to achieve the best prediction models.
While using the PoC based on SmartValidator, these interactions could be minimized by managing them through the orchestrator. In this way, the orchestrator requested the model builder to build the required validation model. Constructing the models automatically based on a SOC's needs required less time and was more feasible than constructing models for all possible combinations of attribute sets.

In evaluating the efficiency of SmartValidator, we found that it successfully identified and classified the threat data required for alert validation.
The same framework can be used to automate the validation of newly listed alerts with new data sources. The data science team only needs to map suitable algorithms to suitable attribute sets and define the required data sources.

\vspace{-5pt}
\subsection{Discussion}
We consider the same attribute sets and CTI sources that are used for security incident and alert validation across three validation approaches: (i) manual validation, (ii) pre-building prediction models and (iii) automatic construction of prediction models based on a SOC's requirements.

In manual validation, for each attribute or set of attributes, a security team first searched for the attribute types and then looked for the availability of relevant CTI. The security team used their previous experience to select the CTI to perform the validation. For example, to validate a malicious IP, a security team collected the blacklisted IP addresses and then looked for the IP on the list. They further used the WhoIS database to identify the relevant information about suspected IPs. The security team needed to manually write queries or call APIs to find and extract information from CTI, which required knowledge about the underlying CTI sources. For similar types of alerts or a changing context (i.e., a change in CTI, alerts or the SOC's requirements), the same sequence of actions was repeated, which cost significant man-hours and required knowledge about the underlying plugins, APIs, CTI sources and so on.

For a changing context while following approach 2 (pre-building prediction models), the security team needed to request the data science team to train and build the possible prediction models for the new context. Section \ref{subsec:6.2} reveals that building prediction models each time a change occurs is not feasible. With automatic construction of prediction models, each time a SOC requested a validation task, the models were built automatically. With a changing context, the orchestrator coordinated the data collection and model building process, which freed the security team from coordinating and communicating with the data science team and reduced the delay incurred due to communication. In this work, we have experimentally evaluated the performance of SmartValidator. We did not measure the amount of time required by the security team to perform the validation activities or the time required for communication between the security team and the data science team, which would also include the time gap between a request being made and the security team receiving the model. In future work, we plan to evaluate SmartValidator in a real SOC environment to further demonstrate how it can be beneficial there, for example, the effort required to manage the PoC system versus the cost saving from automation. Further, we want to evaluate the maximum upfront cost of incorporating the PoC system into a real SOC environment.

For automating the construction of ML-based validation models, the PoC followed three major steps: i) collecting and processing the data, ii) training the classifier and iii) running the prediction model. Step 1 required a large amount of time (hours), as there were hundreds of thousands of data points to download and process. Step 2 took a reasonable amount of time (minutes), as the data needed to be encoded and the chosen classifier needed to be trained and its hyperparameters optimised. Step 3 was reasonably fast, as it only needed seconds to apply a pre-trained model. These steps were bundled into an installable Python package which could be made publicly available. We designed the PoC in a modular fashion so that it can be integrated into other network-enabled services to gain more information about network security, and the system can easily be extended with future improvements.
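To make the division of work between these three steps concrete, the following minimal sketch shows one way such a pipeline could be organised. The class and method names are hypothetical and do not reflect the actual interface of our package.

\begin{verbatim}
# Hypothetical sketch of the three PoC steps (not the package's real API).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline

class ValidationModelBuilder:
    def __init__(self):
        self.model = None

    def collect_and_process(self, raw_records, label_field):
        """Step 1 (hours): turn raw CTI records into text features + labels."""
        texts = [" ".join(str(v) for k, v in rec.items() if k != label_field)
                 for rec in raw_records]
        labels = [rec[label_field] for rec in raw_records]
        return texts, labels

    def train(self, texts, labels):
        """Step 2 (minutes): encode the features and fit a chosen classifier."""
        self.model = make_pipeline(CountVectorizer(), RidgeClassifier())
        self.model.fit(texts, labels)

    def predict(self, alert_records):
        """Step 3 (seconds): apply the pre-trained model to incoming alerts."""
        texts = [" ".join(str(v) for v in rec.values())
                 for rec in alert_records]
        return self.model.predict(texts)
\end{verbatim}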
To validate alerts coming from an IDS, the developed PoC system can be extended to first receive the IDS alerts over a network. After parsing the alert attributes (which would be similar to the attribute sets used in our PoC), the next steps are to transform the alert attributes into features and then look for potential CTI to correlate the alert information into patterns by building prediction models that predict suspicious behavior. Furthermore, the alerts can be validated using the models, and the validated output can display the security context of the network in a graphical user interface that is easy to understand. The PoC system can be enhanced to provide an API that can be integrated as part of a SOC's existing security system, such as middleware for EDR or SIEM.

In our experiment, we selected two types of CTI: one gathered from public websites, and another from the OSINT platform MISP, which is widely used by industry as it contains high-quality data with enriched IOCs. We consider the confidence score of users to ensure that models with low evaluation scores are not selected. We also merged multiple data sources to enrich the CTI. The experimental results show that SmartValidator performs better when using MISP than when using web data. We assert that this is due to the high quality of MISP. The lower-level CTI used in our experiment can be replaced with higher-quality CTI such as TTPs, in which case SmartValidator will perform the same steps, from identifying CTI to building models and performing the validation. The prediction models can be enriched with advanced IOCs and TTPs that carry more details about the threats.

The key steps identified to improve the models automatically built through SmartValidator are effective feature encoding, hyperparameter optimization, data distribution, feature extraction, dimensionality reduction and classifier selection. Increasing the size of the dataset and the number of features increases the F1-score. The dimensionality of the categorical variables needs to be decreased. It is worth noting that our investigation is by no means exhaustive; we adopt basic ML principles and NLP techniques to develop a simplistic PoC. We plan to investigate more ML techniques such as data balancing, normalization and feature selection with diverse types of CTI. Similarly, we intend to investigate more sophisticated NLP techniques, such as word embeddings, that can capture semantic information of the natural language attributes.

We found that CTI datasets contain highly multi-variate categorical variables. Such high-dimensional problems are likely to be linearly separable, as any $d+1$ points in general position in a $d$-dimensional space can be separated by a linear classifier, regardless of how the points are labelled.
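The snippet below is a quick numerical illustration of this claim rather than a proof: it draws $d+1$ random points (which are in general position almost surely), labels them arbitrarily, and recovers a separating hyperplane by solving a linear system.

\begin{verbatim}
# Numerical check: d+1 random points in d dimensions with arbitrary +/-1
# labels can be separated exactly by a hyperplane (bias term included).
import numpy as np

rng = np.random.default_rng(0)
d = 50                                   # feature dimensionality
X = rng.normal(size=(d + 1, d))          # d+1 points, general position a.s.
y = rng.choice([-1.0, 1.0], size=d + 1)  # arbitrary labelling

A = np.hstack([X, np.ones((d + 1, 1))])  # augment with a bias column
wb = np.linalg.solve(A, y)               # hyperplane hitting each label exactly
print("perfectly separated:", bool(np.all(np.sign(A @ wb) == y)))  # True
\end{verbatim}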
We further found that, overall, ensemble classifiers such as XGB performed better than the other seven selected algorithms.

As shown in Figure \ref{fig:framework}, we consider that dedicated expertise (e.g., a data scientist) is required to build prediction models, which in most cases is different from the SOC team using the models for the validation task. The prediction models are built by experts (e.g., data scientists, ML experts, or developers) who are knowledgeable about ML libraries, feature engineering and algorithms. Considering a SOC security team's capability, the model building process can also be replaced with an Automated Machine Learning (AutoML)\footnote{https://www.automl.org/automl/} framework such as Google Cloud AutoML\footnote{https://cloud.google.com/automl}. AutoML frameworks are designed to provide ML as a service, where a security team is required to provide the pre-processed data and, in some cases, the transformed data. Such a framework considers multiple ML algorithms in a pipeline, evaluates their performance, and performs hyperparameter tuning and validation in an attempt to improve the performance. An AutoML framework provides a list of optimal models. For example, TPOTClassifier\footnote{http://epistasislab.github.io/tpot/api/} is an automated ML classifier that is developed in an attempt to automate the ML pipeline in Python. It explores prediction model configurations that a data analyst or security analyst may not consider and attempts to fine-tune the model to achieve the most optimized model. Hence, the model builder of our proposed SmartValidator can be developed following the process discussed in Figure~\ref{fig:predictionlayer} or using an AutoML framework. Thus, depending on an organization's SOC capabilities, it may use an AutoML framework instead of building the prediction models with the assistance of data scientists.

SOC teams overwhelmed with a massive volume of alerts have failed to respond to security incidents even when they had the alerts and the corresponding information in their CTI\footnote{\url{https://www.trendmicro.com/explore/en_gb_soc-research}}. Hence, we assert that manual and repetitive validation tasks can be automated through SmartValidator, whereas more critical or unknown alerts and incidents would still require human involvement. In future work, we plan to extend the PoC to provide more explainable output so that a SOC can make decisions based on the validated output, where the prediction model choice and alternatives would be captured with an explanation.

\subsection{Limitations of SmartValidator}


The experimental results show the effectiveness of SmartValidator. However, we observed several cases in which SmartValidator is unable to perform the validation.
Furthermore, using CTI to validate alerts for unknown or zero-day attacks might not always be practical. To investigate this empirically, we ran a separate experiment to explore the ability of SmartValidator to detect unknown alerts. For this experiment, we used the alert data of Snort and selected the IP information, that is $ob_{attrib}^3$, to predict \textit{threat level}. To validate such information, we used SmartValidator to build a model trained exclusively with the MISP data ($DS_2$), so that the Snort alert data is almost entirely unseen. The Snort alert data contained 16317 distinct IPs, of which only 78 were seen in our MISP training dataset.
The prediction model achieved an F1-score of 0.307, which implies that SmartValidator has some capability for predicting unknown alerts, albeit limited. Even though the alerts and IPs were unseen, the prediction model was still able to detect some patterns inferred from the ASN, IP owner and country attributes. We assert that, due to their intelligent nature and ability to learn the underlying semantic patterns, the models built with SmartValidator have the potential to validate some unseen values of unknown attributes (i.e., the unknown attributes for which the model is built based on the observed attributes). If the alert data is entirely unseen, i.e., the IP, ASN, owner and country are all absent from the training data, then SmartValidator will predict an output based on the most common value in the training data.

To validate any unknown attribute (i.e., $un_{attrib}$) with specific values, SmartValidator always needs the observed attributes as input to build a model. Therefore, even if the given value of an unknown attribute is not seen in the training data, it is still possible to make a correct prediction by learning the patterns from the model building phase. For example, malicious IPs can share the same domain and threat actors. Here, an IP that has not been seen before can be identified as malicious by observing the threat actors and the domain. However, it is always possible that a trained model fails to correctly predict IPs, which is a limitation of our proposed approach. We observed that there are cases in which the observed attributes are not representative enough to capture and learn patterns about the unknown attributes. SmartValidator will not be applicable for automating the validation tasks in these scenarios. A quantifiable investigation of this limitation is out of the scope of this work, but it is an exciting area for future research.

CTI is time-sensitive. Hence, SOC teams will need to update the models whenever new CTI is available. The framework can further be extended to capture the timeliness of the CTI used for building the models and to keep track of the models built with up-to-date CTI. However, not all the CTI will be updated simultaneously; thus, changing all the models whenever there is an update in the CTI will not be feasible. SmartValidator can be extended to handle this situation by retraining the model when new requests come in, and by re-evaluating and retraining the available models built based on old CTI.

There can also be low-quality CTI sources, which may affect the performance of SmartValidator. For example, it is possible that the models infer wrong patterns from low-quality data and consider legitimate or benign IPs as malicious. There is a need for empirical studies that focus on ensuring the quality of CTI. However, this is not within the scope of this study.


\subsection{Threats to validity}

\textit{Construct Validity:} Our choice of data for the evaluation setup may not be suitable. We have considered CTI data to also be representative of alert attributes, as CTI is often generated from existing security alerts of external organizations. However, the classifiers have not yet been tested thoroughly with real-world internal business data. This evaluation will be attempted in future work.

\textit{Internal Validity:} A potential concern is that our models are not properly optimised.
The hyperparameter tuning was performed for a specific set of configurations, as testing all possible combinations of hyperparameters would take an amount of time that is hard to justify here; moreover, enumerating all possible hyperparameter combinations is practically impossible. Similarly, the features that we selected to train our models are non-exhaustive. The attribute sets were chosen based on the attributes used for validating alerts through security orchestration. The purpose was to show that SmartValidator can automate the construction of prediction models by identifying the CTI, and that the constructed prediction models can effectively validate alerts based on a SOC's requirements. The attribute list might not reflect a complete list of attributes for validating certain alerts, but our system can be easily extended to several other attack scenarios.

\textit{External Validity:} Our experiments may not generalize to other datasets. The built classifiers and prediction models were evaluated based on simulated alert attribute sets and publicly available CTI such as MISP.

\vspace{-10pt}

\section{Related Work} \label{Sec:RelatedWork}

There is a clear research trend in the use of Machine Learning (ML) and Deep Learning (DL) in the cybersecurity domain for the detection and classification of cyberattacks. Most of the existing literature focuses on using AI techniques such as NLP, ML and DL tools and techniques to identify and detect cyber attacks such as malware, network intrusion, data exploitation and vulnerabilities \cite{AHMED201619, Helge2020,FERRAG2020102419, GAMAGE2020102767, GIBERT2020102526, sabir2020machine, softVul9108283}. ML algorithms are used to extract knowledge from open source public repositories, which is later used to analyze attacks or validate alerts. Although automation has been achieved in the detection and analysis of attacks, the validation of alerts and incidents still requires a SOC's involvement \cite{islam2019multi}.

CTI is used by security experts of SOCs to analyze and validate alerts. To ease the use of CTI, researchers have been trying to come up with a unified structure for sharing CTI \cite{menges2019unifying, tounsi2018survey}. STIX \cite{Stixbarnum2012standardizing}, TAXII \cite{taxiiconnolly2014trusted}, CyBox \cite{barnum2012cybox} and UCO \cite{menges2019unifying} are popular among them. The use of Artificial Intelligence (AI) is encouraged for identifying, gathering and extracting CTI objects \cite{RF2019, qamar2017data, Struve2017}. Various AI tools and techniques are used for knowledge extraction, representation and analytics of CTI \cite{brazhuk2019semantic, tounsi2018survey}. For example, Zahedi et al. \cite{zahedi2018empirical} have applied topic modelling techniques such as LDA to find security-relevant topics in open source repositories such as GitHub. Another example is a system used by EY \cite{EY2017}, which mines previous threat data and then analyzes it to give information on threats. Using this information, they can respond to attacks and continuously monitor a system. They place data collectors at points of high traffic in a network, such as a server, where the system can continuously analyze data and keep the system safe. Detected attacks can be used to harvest IOCs and analyzed to discover security issues within the network. Recorded Future, a widely known CTI service provider \cite{RFID2021}, also elaborated on the fact that threat data can be found in a large variety of places such as Tweets, Facebook posts and emails.
They also use AI to recognize patterns in emails so that phishing emails can be identified based on information about the sender or the attached file.

Recent advances in the CTI domain have drawn attention to the use of existing knowledge to automate the manual analysis performed by human experts and to enrich the quality of CTI \cite{azevedo2019pure,edwards2017panning, noor2019machine, zhou2019ensemble}. For example, vulnerability descriptions from NVD-like databases are being used to predict the severity, confidentiality and availability impact of threats. Le et al. \cite{le2019automated} have used NLP and traditional ML algorithms to perform automated vulnerability assessment using the vulnerability descriptions of open source public repositories. Noor et al. \cite{noor2019machine} have used data provided by STIX and the MITRE Corporation to identify documents related to attacks. Azevedo et al. have proposed a platform, Pure, to improve the quality of CTI in the form of enriched IoCs by automatically correlating and aggregating the IoCs \cite{azevedo2019pure}. They have evaluated the performance of the proposed platform with 34 OSINT feeds gathered from MISP.

One recent study by Recorded Future has laid out four ways of using AI techniques to extract CTI from a detected attack \cite{Struve2017}. They have defined risk score metrics to identify malicious network activity. This extends the classification from being just about whether an attack has occurred and provides more in-depth information on the threat \cite{RFteam2018, Struve2017}. Recorded Future has used NLP to increase the range of possible data sources by removing the restriction to structured information \cite{Struve2017}. They utilize extracted text and perform classification for the language, topic and company. They have applied ML and NLP techniques to rank documents in order to identify malware attacks. Their model also considers the different classifications needed, such as scoring a risk value. They do not always use ML for scoring a risk value, as they often have a rule-based system for the classifier to follow.


Unlike the above-mentioned work, we propose SmartValidator to utilize NLP and ML techniques to assist in automating the validation of security alerts and incidents. To the best of our knowledge, this is the first attempt to use CTI, such as MISP data, to automate the classification and validation of security threat data based on a SOC's preferences. Here, we have investigated how effective ML algorithms are at classifying CTI to assist in alert validation. Unlike the existing works, where possible prediction models are pre-built, we propose to build the models on demand. We have demonstrated the efficiency of constructing prediction models dynamically.
\vspace{-10pt}

\section{Conclusion} \label{Sec:Conclu}

Many organizations are facing difficulty keeping pace with the changing threat landscape, as security experts need to identify and analyze threat data in most circumstances. Without automation techniques, it is impossible to reduce the burden of analyzing CTI to make timely decisions.
In this work, we propose a novel framework, SmartValidator, to build an effective and efficient validation tool using CTI that automates the validation of security alerts and incidents based on a SOC's preferences. Different from the manual approaches, SmartValidator is designed in a way that SOCs can add their requirements without worrying about collecting CTI and using CTI to build a validation model.
SmartValidator consists of three layers: threat data collection, threat data prediction model building and threat data validation. Different teams are responsible for updating the components of the different layers, thus freeing security teams from learning data processing and model building techniques. The validation task is designed as a classification problem that leverages existing NLP and ML techniques to extract features from CTI and learn patterns for alert validation. We developed a Proof of Concept (PoC) system to automatically extract features from CTI and build prediction models based on the preferences of SOCs. A SOC's preferences are collected as a set of attribute sets:~observed and unknown attributes, where the task of the PoC is to predict the unknown attributes based on the observed attributes.

We have demonstrated the effectiveness of SmartValidator by predicting \textit{attack}, \textit{event}, \textit{threat type}, \textit{threat level} and \textit{name}. It collected and processed data from public websites and MISP. Next, CTI with the preferred attribute sets was selected to build prediction models. Eight ML algorithms were run to build and select the models with the highest F1-score. The best model was used to predict the unknown attributes and thus validate alerts. The developed PoC constructed validation models, and can be used to validate alerts generated by threat detection tools and to find the missing information needed to store the data in a structured format. The results show that prediction models are effective in validating security alerts. Building prediction models at run time is more efficient than building prediction models for all possible attribute sets and~CTI.

In future work, we plan to extend the PoC system to reduce the amount of data that is sent to the SIEM tool, thus reducing the cost of data analysis. The system can also be extended to reduce the organizational dependence on human expertise to take actions against security threats, such as blocking ports or identifying the maliciousness of an incident. The proposed framework can assist an organization's security team to focus on decision making, rather than manually extracting and validating security alerts and incidents. The framework can also be leveraged to let an organization choose CTI suitable for its application rather than using generalized CTI.
\vspace{5pt}

\textbf{Acknowledgements}

This work is supported by the Cyber Security Cooperative Research Centre (CSCRC), Australia.

\vspace{-10pt}

\bibliographystyle{cas-model2-names}