diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkqdt" "b/data_all_eng_slimpj/shuffled/split2/finalzzkqdt" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkqdt" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nAchieving consensus of states is a well-known important feature for networked system, see for example \\citep{ART:OM04,BOO:RB07}. Many distributed control\/optimization problems over a network require a consensus algorithm as a key component.\nThe most common consensus algorithm is the dynamical system defined by the Laplacian matrix for continuous time system or the Perron matrix for discrete-time system. Past works in the general direction of speeding up convergence of these algorithms exist. For example, the work of \\citep{ART:XB04} proposes a semi-definite programming approach to minimize the algebraic connectivity over the family of symmetric matrices that are consistent with the topology of the network. Their approach, however, results in asymptotic convergence towards the consensus value and is most suitable for larger networks. More recent works focus on finite-time convergence consensus algorithm \\citep{INP:SH07,INP:YSSG09, ART:WX10,ART:YSSBG13,ART:HJOV14,ART:HSJ15} which is generally preferred for small to moderate size networks. One important area in finite-time convergence literature is the determination of the asymptotic value of a consensus network using a finite number of state measurement. Typically, the approach adopted is based on the z-transform final-value theorem and on the finite-time convergence for individual node \\citep{INP:SH07,INP:YSSG09,ART:YSSBG13}. Other works in finite-time consensus include the design of a short sequence of stochastic matrices $A_k, \\cdots, A_0$ such that $x(k)= \\Pi_{j=1}^{k} A_j x(0)$ reaches consensus after $k$ steps \\citep{INP:KS09,ART:HSJ15}.\n\nThis work proposes an approach to speed up finite-time convergence consensus algorithm for a network of agents via the weights associated with the edges of the graph. It is an offline method where the network is assumed known. Hence, it is similar in spirit to the work of \\cite{ART:XB04} except that the intention is to find a low-order minimal polynomial. Ideally, the lowest-order minimal polynomial should be used. However, the lowest minimal polynomial achievable for a given graph with variable weights is an open research problem \\citep{ART:FH07}. They are only known for some special classes of graphs (full connected, star-shaped, strongly regular and others), \\citep{ART:VH98,ART:VKT14}. For this reason, this paper adopts a computational approach towards finding a low-order minimal polynomial. The proposed approach achieves the lowest order minimal polynomial in many of the special classes of graphs and almost always yields minimal polynomial of order lower than those obtained from standard Perron matrices of general graphs. These are demonstrated by several numerical examples.\n\nThe choice of the weights to be chosen is obtained via a rank minimization problem. In general, rank minimization is a well-known difficult problem \\citep{INP:FHB04,ART:RFP10}. Various approaches have been proposed in the literature including the nuclear norm relaxation, bilinear projection methods and others. This work uses a unique two-step procedure in the rank minimization problem. 
The first is an optimization problem using the nuclear norm and the second, which uses the results of the first, is a correction step based on a low-rank approximation.\nWhile both steps of the procedure have appeared in the literature, their combined use in a two-step procedure is novel, to the best of the authors' knowledge. Hence, the approach towards the rank minimization problem can be of independent interest. The same can be said of the expression of the consensus value, which is obtained without recourse to the z-transform machinery.\n\nThe rest of this paper is organized as follows. This section ends with a description of the notations used. Section \\ref{sec:pre} reviews some standard results of the standard Laplacian and Perron matrix with unity edge weights, as well as the minimal polynomial and its properties. Section \\ref{sec:main} presents the procedure of obtaining the consensus value from the minimal polynomial and discusses, in detail, the key subalgorithms used in the overall algorithm. The overall algorithm is described in Section \\ref{sec:algo} and the performance of the approach is illustrated via several numerical examples in Section \\ref{sec:num}. Conclusions are given in Section \\ref{sec:con}.\n\nThe notations used in this paper are standard. Non-negative and positive integer sets are indicated by $\\mathbb{Z}^+_0$ and $\\mathbb{Z}^+$ respectively. Let $M, L \\in \\mathbb{Z}^+_0$ with $M \\ge L \\ge 1$. Then $\\mathbb{Z}^M:=\\{1,2,\\cdots,M\\}$ and $\\mathbb{Z}_L^M:=\\{L,L+1,\\cdots,M\\}$. Similarly, $\\mathbb{R}^+_0$ and $\\mathbb{R}^+$ refer respectively to the sets of non-negative and positive real numbers. $I_n$ is the $n\\times n$ identity matrix, $1_n$ is the $n$-dimensional column vector of all ones (subscript omitted when the dimension is clear). Given a set $C$, $|C|$ denotes its cardinality. The transposes of a matrix $M$ and a vector $v$ are indicated by $M'$ and $v'$ respectively. For a square matrix $Q$, $Q \\succ (\\succeq) 0$ means $Q$ is positive definite (semi-definite), $spec(Q)$ refers to the set of its eigenvalues, $vec(Q)$ is the representation of elements of $Q$ as a vector and $(\\lambda, v)$ is an eigen-pair of $Q$ if $Qv=\\lambda v$. The cones of symmetric, symmetric positive semi-definite and symmetric positive definite matrices are denoted by $\\mathcal{S}^n=\\{M \\in \\mathbb{R}^{n\\times n} | M=M'\\}$, $\\mathcal{S}^n_{0+}=\\{M \\in \\mathbb{R}^{n\\times n} | M=M', M \\succeq 0\\}$ and $\\mathcal{S}^n_+=\\{M \\in \\mathbb{R}^{n\\times n} | M=M', M \\succ 0\\}$ respectively. The $\\ell_p$-norm of $x\\in \\mathbb{R}^{n}$ is $\\|x\\|_p$ for $p=1,2,\\infty$ while $\\|M\\|_*, \\|M\\|_2, \\|M\\|_F$ are the nuclear, operator (induced) and Frobenius norms of matrix $M$. A diagonal matrix with diagonal elements $d_i$ is denoted by $diag\\{d_1,\\cdots,d_n\\}$. Additional notations are introduced when required in the text.\n\n\n\\section{Preliminaries and Problem formulation}\\label{sec:pre}\nThis section begins with a review of the standard consensus algorithm and sets up the notations needed for the sequel.\nThe network of $n$ nodes is described by an undirected graph $G = (\\mathcal{V},\\mathcal{E})$ with vertex set $\\mathcal{V} = \\{1,2,\\cdots,n\\}$ and edge set $\\mathcal{E} \\subseteq \\mathcal{V} \\times \\mathcal{V}$. The pair $(i,j)\\in \\mathcal{E}$ if $i$ is a neighbor of $j$ and vice versa since $G$ is undirected. The set of neighbors of node $i$ is $N_i:=\\{j\\in \\mathcal{V}: (i,j) \\in \\mathcal{E}, i \\neq j\\}$. 
The standard adjacency matrix $\\mathcal{A}_s$ of $G$ is the $n \\times n$ matrix whose $(i,j)$ entry is $1$ if $(i,j)\\in \\mathcal{E}$, and $0$ otherwise.\n\nThe implementation of the proposed consensus algorithm is a discrete-time system of the form $z(k+1)=\\mathcal{P} z(k)$ where $\\mathcal{P}$ is the Perron matrix. However, for computational expediency, the working algorithm uses the weighted Laplacian matrix $\\mathcal{L} \\in \\mathcal{S}^n_{0+}$. The conversion of $\\mathcal{L}$ to $\\mathcal{P}$ is standard and is discussed later, together with desirable properties of $\\mathcal{P}$ and $\\mathcal{L}$.\nThe properties of the standard (non-weighted) $\\mathcal{L}$ are first reviewed.\n\nThe standard Laplacian matrix, denoted by $\\mathcal{L}_s$, of a given $G$ is\n\\begin{align} \\label{eqn:Lii}\n[\\mathcal{L}_s]_{i,j}=\\left\\{\n  \\begin{array}{ll}\n    -1, & \\hbox{if $j \\in N_i$;} \\\\\n    |N_i|, & \\hbox{if $i=j$ ;} \\\\\n    0, & \\hbox{otherwise.}\n  \\end{array}\n\\right.\n\\end{align}\nIn this form, it is easy to verify that (i) eigenvalues of $\\mathcal{L}_s$ are real and non-negative, (ii) eigenvectors corresponding to different eigenvalues are orthogonal, (iii) $\\mathcal{L}_s$ has at least one eigenvalue $0$ with eigenvector $1_n$. Properties (i) and (ii) follow from the fact that $\\mathcal{L}_s$ is symmetric and positive semi-definite while property (iii) is a result of all row sums being $0$.\nSuppose the assumption \\\\\n$(A1)$: $G$ is connected \\\\\nis made. Then, it is easy to show that the eigenvalue $0$ is simple with eigenvector $1_n$. Consequently, the consensus algorithm of\n$\\dot{x}(t)=-\\mathcal{L}_s x(t)$ converges to $\\frac{1}{n} 1_n (1_n' x(0))$.\n\nUnlike the standard Laplacian, this work uses the weighted Laplacian\n\\begin{align}\n\\mathcal{L}(W,G)= \\mathcal{D}(G)- \\mathcal{A}(G,W)\n\\end{align}\nwhere $\\mathcal{A}(G,W)$ is the weighted adjacency matrix with $[\\mathcal{A}(G,W)]_{ij}=w_{ij}$ when $(i,j) \\in \\mathcal{E}$,\n$\\mathcal{D}(G)=diag\\{d_1,d_2,\\cdots, d_n\\}$ with $d_i:=\\sum_{j\\in N_i} w_{ij}$ and $W:=\\{w_{ij} \\in \\mathbb{R} | (i,j) \\in \\mathcal{E}\\}$.\nThe intention of this work is to compute algorithmically the minimal polynomial of $\\mathcal{L}(W,G)$ over variable $W$ for a given $G$.\nHowever, since determining the lowest-order minimal polynomial attainable for a given network $G$ is a well-known difficult problem \\citep{ART:FH07},\nthe output of the algorithm can be seen as an upper bound on the order of the achievable minimal polynomials of $\\mathcal{L}(W,G)$ over all $W$.\nNote that the value of $w_{ij}$ is arbitrary, including the possibilities that $w_{ij}=0$ or $w_{ij}<0$ for $(i,j) \\in \\mathcal{E}$.\nThis relaxation allows for a larger $W$ search space but it brings about the possibility of losing connectedness of $\\mathcal{L}(W,G)$ even when $G$ is connected. Additional conditions are therefore needed to preserve connectedness, as discussed in the sequel. 
Since the graph $G$ is fixed, its dependency in $\\mathcal{L}(\\cdot), \\mathcal{D}(\\cdot)$ and $\\mathcal{A}(\\cdot)$ is dropped for notational convenience hereafter unless required.\n\nThe desirable properties of $\\mathcal{L}(W)$ are\n\\begin{itemize}\n  \\item[(L1)] All eigenvalues are non-negative.\n  \\item[(L2)] 0 is a simple eigenvalue with eigenvector $1_n$.\n  \\item[(L3)] $[\\mathcal{L}(W)]_{ij}=0$ when $(i,j) \\notin \\mathcal{E}$.\n\\end{itemize}\nProperties (L1) and (L2) are needed for $x(t)$ of $\\dot x(t)=-\\mathcal{L} x(t)$ to reach consensus while property (L3) is a hard constraint imposed by the structure of $G$. In addition, $W$ should be chosen such that\n\\begin{itemize}\n  \\item[(L4)] $\\mathcal{L}(W)$ has a low-order minimal polynomial.\n\\end{itemize}\n\nGiven $\\mathcal{L}(W)$ having properties (L1)-(L4), the corresponding Perron matrix $\\mathcal{P}$ can be obtained from $\\mathcal{P}:=e^{-\\epsilon\\mathcal{L}}$ or\n$\\mathcal{P} :=I_n - \\epsilon \\mathcal{L}(W)$, for some $0<\\epsilon < \\frac{1}{\\max_i \\{d_{i}\\}}$. Then, it is easy to verify that $\\mathcal{P}$ inherits\nfrom (L1)-(L4) the following properties:\n\\begin{description}\n  \\item (P1) All eigenvalues of $\\mathcal{P}$ are real and lie within the interval $(-1,1]$.\n  \\item (P2) $1$ is a simple eigenvalue of $\\mathcal{P}$ with eigenvector $1_n$.\n  \\item (P3) $[\\mathcal{P}]_{ij}=0$ when $(i,j) \\notin \\mathcal{E}$.\n  \\item (P4) $\\mathcal{P}$ has a low-order minimal polynomial.\n\\end{description}\nThe consensus algorithm based on $\\mathcal{P}$ is\n\\begin{align}\nz(k+1)=\\mathcal{P} z(k) \\label{eqn:zkplus1}\n\\end{align}\nfor consensus variable $z \\in \\mathbb{R}^n$. From (P1)-(P2) and assumption (A1), it is easy to show, with $(\\sigma_i,\\xi_i)$ being the $i^{th}$ eigenpair of $\\mathcal{P}$ (with orthonormal eigenvectors $\\xi_i$), that\n\\begin{align}\n\\lim_{k \\rightarrow \\infty} z(k) = \\lim_{k \\rightarrow \\infty} (\\sum_{i=1}^n \\xi_i \\xi_i' \\sigma_i^k)z(0) =\\frac{1}{n} 1_n (1_n' z(0))\n\\end{align}\nand hence $\\lim_{k \\rightarrow \\infty} z(k)$ reaches consensus among all its elements. The above shows that finding a $\\mathcal{P}$ that possesses properties (P1)-(P4) is equivalent to finding an $\\mathcal{L}(W)$ that satisfies properties (L1)-(L4).
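As a quick illustration of this conversion (a minimal sketch in Python, not part of the paper's MATLAB toolchain; the function name and safety factor are our own choices):\n\\begin{verbatim}\n# Build a Perron matrix P = I - eps*L from a weighted Laplacian L,\n# with 0 < eps < 1/max_i d_i so that (P1)-(P4) are inherited from (L1)-(L4).\nimport numpy as np\n\ndef perron_from_laplacian(L, safety=0.9):\n    eps = safety / L.diagonal().max()\n    return np.eye(L.shape[0]) - eps * L\n\\end{verbatim}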
The next subsection reviews properties of the minimal and characteristic polynomials that are available in the literature.\n\n\\subsection{Minimal polynomial and finite-time convergence}\nThis section begins with a review of the minimal polynomial and its properties.\n\\begin{definition}\nThe minimal polynomial $m_M(t)$ of a square matrix $M$ is the monic polynomial of the lowest order such that $m_M(M)=0$.\n\\end{definition}\nSeveral well-known properties of the characteristic and minimal polynomials are now collected in the next lemma.\nTheir proofs can be found in standard textbooks (see for example \\cite{BOO:FIS03}); the properties are stated next for easy reference.\n\\begin{lemma} \\label{lem:poly}\nGiven a square matrix $M$ with minimal polynomial $m_M(t)$ and characteristic polynomial $p_M(t)$. Then (i) $\\lambda$ is an eigenvalue of $M$ if and only if $\\lambda$ is a root of $m_M(t)$. (ii) a root of $m_M(t)$ is a root of $p_M(t)$. (iii) the distinct roots of $m_M(t)$ are equivalent to the distinct roots of $p_M(t)$. Suppose $M$ is symmetric, then (iv) the algebraic multiplicity of every eigenvalue in $M$ equals its geometric multiplicity and (v) each distinct root of $m_M(t)$ has a multiplicity of one.\n\\end{lemma}\n\nSince $G$ is undirected, the desirable property (L4) (or equivalently (P4)) is achieved by having as few distinct roots of $m_\\mathcal{L}(t)$ as possible, as given in properties (iv) and (v) of Lemma \\ref{lem:poly}. The next result shows the connection of the minimal polynomial and the computations needed to obtain the consensus value of (\\ref{eqn:zkplus1}) using only $s_\\mathcal{P}$ states, where $s_\\mathcal{P}$ is the order of the minimal polynomial, $m_\\mathcal{P}(t)$.\n\n\\section{Main approach} \\label{sec:main}\n\\begin{lemma} \\label{eqn:mpoly}\nGiven an $n^{th}$-order symmetric matrix $\\mathcal{P}$ with minimal polynomial $m_{\\mathcal{P}}(t)$ of order $s_\\mathcal{P}$. Any matrix polynomial $h(\\mathcal{P})$ can be expressed as $h(\\mathcal{P})=r(\\mathcal{P})$ where $r(t)$ is a polynomial of order $s_\\mathcal{P} -1$. The coefficients of $r(t)$ can be obtained from the solution of\nthe following equation involving the Vandermonde matrix.\n\\begin{align} \\label{eqn:ht}\n\\left(\n  \\begin{array}{ccccc}\n    1 & \\lambda_1 & \\lambda_1^2 & \\cdots & \\lambda_1^{{s_\\mathcal{P}}-1} \\\\\n    \\vdots & & \\ddots & & \\vdots \\\\\n    1 & \\lambda_{s_\\mathcal{P}} & \\lambda_{s_\\mathcal{P}}^2 & \\cdots & \\lambda_{s_\\mathcal{P}}^{{s_\\mathcal{P}} -1} \\\\\n  \\end{array}\n  \\right) \\left(\n  \\begin{array}{c}\n    \\pi_0 \\\\\n    \\vdots \\\\\n    \\pi_{{s_\\mathcal{P}}-1} \\\\\n  \\end{array}\n  \\right)\n  =\\left( \\begin{array}{c}\n    h(\\lambda_1) \\\\\n    \\vdots \\\\\n    h(\\lambda_{s_\\mathcal{P}}) \\\\\n  \\end{array}\n  \\right)\n\\end{align}\nwhere $\\lambda_i$ are the distinct eigenvalues of $\\mathcal{P}$ and $\\pi_i$ is the coefficient of $t^i$ in $r(t)$.\n\\end{lemma}\n\\textit{Proof :}\nThe polynomial $h(t)$ can be expressed, via long division by $m_{\\mathcal{P}}(t)$, as\n$$h(t)=\\phi(t)m_{\\mathcal{P}}(t)+r(t)$$\nwhere $r(t)$ is the remainder of the division and, hence, is of order $s_\\mathcal{P}-1$ (including the possibility that some coefficients are zero). Since the above holds for all values of $t$, it holds particularly when $t=\\lambda_i$, the $i^{th}$ distinct eigenvalue of $\\mathcal{P}$. This fact, together with $m_{\\mathcal{P}}(t)|_{t=\\lambda_i}=0$ from property (i) of Lemma \\ref{lem:poly}, establishes each row of (\\ref{eqn:ht}).\nFrom property (iii) of Lemma \\ref{lem:poly}, there are $s_\\mathcal{P}$ distinct eigenvalues of $\\mathcal{P}$ and there are $s_\\mathcal{P}$ unknown coefficients in $r(t)$ whose values can be obtained from solving (\\ref{eqn:ht}). To show that (\\ref{eqn:ht}) always admits a solution, note that the determinant of the Vandermonde matrix is $\\Pi_{1 \\le i < j \\le {s_\\mathcal{P}}} (\\lambda_j - \\lambda_i)$ and is non-zero since $\\lambda_i$ are distinct. The above equation also holds for its corresponding matrix polynomial or $h(\\mathcal{P})=\\phi(\\mathcal{P})m_{\\mathcal{P}}(\\mathcal{P})+r(\\mathcal{P})$. 
That $h(\\mathcal{P})=r(\\mathcal{P})$ follows because $m_{\\mathcal{P}}(\\mathcal{P})=0$.\n$\\Box$\n\nConsider the limiting function $h(t)=\\lim_{k \\rightarrow \\infty} t^{k}:=t^{\\infty}$ and suppose that $\\mathcal{P}$ satisfies properties (P1)-(P4).\nLemma \\ref{eqn:mpoly} states that there exists a polynomial $r(t)$ of order $s_\\mathcal{P}-1$ such that\n$t^{\\infty}=\\pi_{s_\\mathcal{P}-1}t^{s_\\mathcal{P}-1}+\\cdots + \\pi_{1}t+\\pi_0$ and the values of $\\{\\pi_{s_\\mathcal{P}-1},\\cdots, \\pi_0\\}$ can be obtained from the $s_\\mathcal{P}$ equations of (\\ref{eqn:ht}). The right hand side of (\\ref{eqn:ht}) has the property that $h(t)|_{t=\\lambda_i}=\\lambda_i^{\\infty}=0$\nfor all but one eigenvalue of $\\mathcal{P}$ since $|\\lambda_i|<1$ (property (P1)). The remaining eigenvalue of $\\mathcal{P}$ is $\\lambda_i=1$ and it yields $h(t)|_{t=\\lambda_i}=\\lambda_i^{\\infty}=1$, following property (P2). Hence,\n\\begin{align} \\label{eqn:W}\n\\mathcal{P}^{\\infty}=\\pi_{s_\\mathcal{P}-1}\\mathcal{P}^{s_\\mathcal{P}-1}+\\cdots + \\pi_{1}\\mathcal{P}+\\pi_0 I_n.\n\\end{align}\nRewriting (\\ref{eqn:zkplus1}) as $z(k)=\\mathcal{P}^{k}z(0)$, one gets\n\\begin{align}\n\\lim_{k \\rightarrow \\infty} z(k) &= \\mathcal{P}^{\\infty}z(0)=( \\sum\\limits_{\\ell=0}^{s_\\mathcal{P}-1} \\pi_{\\ell} \\mathcal{P}^{\\ell})z(0)= \\sum\\limits_{\\ell=0}^{s_\\mathcal{P}-1} \\pi_{\\ell}z(\\ell) \\label{eqn:sumtoT}\n\\end{align}\nThe application of Lemma \\ref{eqn:mpoly} to the distributed consensus algorithm is now obvious. Each agent $i$ stores the parameters $\\{\\pi_{s_\\mathcal{P}-1},\\cdots, \\pi_0\\}$ in its memory as well as $\\{z_i(0), \\cdots, z_i(s_\\mathcal{P}-1)\\}$, obtained from\n\\begin{align}\nz_i(k+1)&=\\mathcal{P}_{ii} z_i(k)+\\sum_{j \\in N_i} \\mathcal{P}_{ij} z_j(k), \\; \\textrm{ for } k=0,\\cdots, s_\\mathcal{P}-2. \\nonumber\n\\end{align}\nAt the end of $k=s_\\mathcal{P} -2$, agent $i$ takes the sum $\\pi_0 z_i(0) + \\cdots + \\pi_{s_{\\mathcal{P}}-1} z_i(s_{\\mathcal{P}}-1)$, which yields $\\lim_{k \\rightarrow \\infty} z_i(k)$, as illustrated by the sketch below.
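To make the mechanics concrete, the following minimal Python sketch (ours, not the paper's implementation; the rounding tolerance used to detect distinct eigenvalues is a hypothetical choice) computes the consensus value from the first $s_\\mathcal{P}$ iterates according to (\\ref{eqn:sumtoT}):\n\\begin{verbatim}\n# Recover the consensus value from s_P iterates of z(k+1) = P z(k).\nimport numpy as np\n\ndef consensus_from_iterates(P, z0):\n    # distinct eigenvalues of the symmetric Perron matrix P\n    lam = np.unique(np.round(np.linalg.eigvalsh(P), 9))\n    s = len(lam)                             # order of minimal polynomial\n    V = np.vander(lam, s, increasing=True)   # Vandermonde matrix\n    h = np.isclose(lam, 1.0).astype(float)   # h(lambda) = lambda^infinity\n    pi = np.linalg.solve(V, h)               # coefficients pi_0..pi_{s-1}\n    zs, z = [z0], z0\n    for _ in range(s - 1):                   # s-1 consensus steps suffice\n        z = P @ z\n        zs.append(z)\n    return sum(c * zk for c, zk in zip(pi, zs))\n\\end{verbatim}\nFor a $\\mathcal{P}$ satisfying (P1)-(P4), the returned vector agrees with $\\frac{1}{n} 1_n (1_n' z(0))$ up to numerical tolerance.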
\\subsection{Key Subalgorithms}\nWith the discussion above, it is clear that $\\mathcal{L}$ (or $\\mathcal{P}$) should be chosen such that $m_{\\mathcal{L}}(t)$ is the lowest-order minimal polynomial consistent with the network. The numerical approach proposed here attempts to find the lowest-order minimal polynomial by minimizing the number of distinct eigenvalues of $\\mathcal{L}(W,G)$ over variable $W$, since the order of the minimal polynomial $m_{\\mathcal{L}}(t)$ equals the number of distinct eigenvalues of $\\mathcal{L}$ from Lemma \\ref{lem:poly}.\n\nThe overall scheme of the proposed algorithm is now described in loose terms for easier appreciation.\nThe numerical algorithm is iterative and at each iteration $k$, two subalgorithms are invoked, producing two possible choices of $W$. The $W$ that results in $\\mathcal{L}(W,G)$ having a lower-order minimal polynomial is then chosen as $\\mathcal{L}(W_k,G)$. The two subalgorithms, OPA and OPB, are very similar in structure but serve different purposes: OPA searches for a new eigenvalue of $\\mathcal{L}(W_k,G)$ with multiplicity of 2 or higher while OPB searches for additional multiplicities of eigenvalues that are already present in $\\mathcal{L}(W_k,G)$. To accomplish this, two sets are needed: $\\mathcal{C}_k$ containing the simple eigenvalue $0$ and distinct eigenvalues with multiplicities of 2 or higher in $\\mathcal{L}(W_k,G)$, and $\\mathcal{M}_k$ containing the multiplicities of the eigenvalues in $\\mathcal{C}_k$. The set $\\mathcal{C}_k$ has the property that $\\lambda \\in \\mathcal{C}_{k}$ implies $ \\lambda \\in \\mathcal{C}_{k+1}$. The remaining `free' eigenvalues in $\\mathcal{L}(W_k,G)$ are then optimized again in the next iteration. An additional index function $\\xi(\\cdot):\\mathbb{Z} \\rightarrow \\mathbb{Z}$ is needed to keep track of the cardinality of $\\mathcal{C}_{k}$ in the sense that $\\xi(k)=|\\mathcal{C}_{k}|$. The overall scheme proceeds with decreasing order of the minimal polynomial and stops when no further repeated eigenvalue can be found. At each iteration, properties (L1) - (L3) and assumption (A1) are preserved from $\\mathcal{L}(W_{k-1},G)$. The key steps at iteration $k$ are now discussed. For notational simplicity, $\\mathcal{L}(W_k,G)$ is denoted as $\\mathcal{L}_k$.\n\nIteration $k$ requires the following data as input from iteration $k-1$: the matrix $\\mathcal{L}_{k-1}$; index function $\\xi(k-1)$; the set $\\mathcal{C}_{k-1}=\\{\\lambda_1, \\lambda_2, ..., \\lambda_{\\xi(k-1)}\\}$ with $\\lambda_1=0$ and $\\lambda_2,\\cdots, \\lambda_{\\xi(k-1)}$ being distinct eigenvalues of $\\mathcal{L}_{k-1}$; and the set of multiplicities $\\mathcal{M}_{k-1}:=\\{m_1, m_2, \\cdots, m_{\\xi(k-1)}\\}$ where $m_1=1$ and $m_i \\ge 2$ is the multiplicity of $\\lambda_i$ in $\\mathcal{C}_{k-1}$. Let\n\\begin{align}\\label{eqn:qkminus1}\nq_{k-1}:=\\sum_{i=1}^{\\xi(k-1)} m_i,\\quad \\bar{q}_{k-1}=n-q_{k-1}\n\\end{align}\ncorresponding to the numbers of fixed and free eigenvalues in $\\mathcal{L}_{k-1}$.\n\nIteration $k$ starts by computing the eigen-decomposition of $\\mathcal{L}_{k-1}$ in the form of $\\mathcal{L}_{k-1}=Q_{k-1} \\Lambda_{k-1} Q_{k-1}'$ where $\\Lambda_{k-1}=\\left(\n             \\begin{array}{cc}\n               D_c & 0 \\\\\n               0 & D_o \\\\\n             \\end{array}\n           \\right)$\nwith $D_c \\in \\mathcal{S}^{q_{k-1}}_{0+}$ being a diagonal matrix with elements, in the same order, being the eigenvalues in $\\mathcal{C}_{k-1}$ (including multiplicities),\nand $D_o \\in \\mathcal{S}^{\\bar{q}_{k-1}}_{+}$ being the diagonal matrix containing the remaining eigenvalues of $\\mathcal{L}_{k-1}$. Correspondingly, $Q_{k-1}$ can be partitioned as $[Q_{k-1}^c \\: Q_{k-1}^o]$ of appropriate dimensions such that\n\\begin{align}\\label{eqn:Lkminus1}\n\\mathcal{L}_{k-1}=\\left(\n    \\begin{array}{cc}\n      Q_{k-1}^c & Q_{k-1}^o \\\\\n    \\end{array}\n  \\right)\\left(\n           \\begin{array}{cc}\n             D_c & 0 \\\\\n             0 & D_o \\\\\n           \\end{array}\n         \\right)\\left(\n                  \\begin{array}{c}\n                    (Q_{k-1}^c)' \\\\\n                    (Q_{k-1}^o)' \\\\\n                  \\end{array}\n                \\right)\n\\end{align}\nConsider the parameterization of $\\mathcal{L}$ by a symmetric matrix $M \\in \\mathcal{S}^{\\bar{q}_{k-1}}_{+}$ in the form of\n\\begin{align} \\label{eqn:HM}\nH(M):= \\left(\n    \\begin{array}{cc}\n      Q_{k-1}^c & Q_{k-1}^o \\\\\n    \\end{array}\n  \\right) \\left(\\begin{array}{cc}\n             D_c & 0 \\\\\n             0 & M \\\\\n           \\end{array}\n         \\right) \\left(\n                  \\begin{array}{c}\n                    (Q_{k-1}^c)' \\\\\n                    (Q_{k-1}^o)' \\\\\n                  \\end{array}\n                \\right)=Q_{k-1}^c D_c (Q_{k-1}^c)' + Q_{k-1}^o M(Q_{k-1}^o)'\n\\end{align}\nThe structural constraints of the graph $G$ are imposed on $M$ via $[H(M)]_{ij}=[Q_{k-1}^c D_c (Q_{k-1}^c)']_{ij}\n+ [Q_{k-1}^o M(Q_{k-1}^o)']_{ij}=0$ for $(i,j) \\notin \\mathcal{E}$. 
The collection of these structural constraints can be stated as $\\Phi_{k-1} vec(M) = b_{k-1}$ where $vec(M)$ is the vectorial representation of $M$ with $\\Phi_{k-1}$ and $b_{k-1}$ being the collection of appropriate terms from $Q_{k-1}^o M(Q_{k-1}^o)'$ and $Q_{k-1}^c D_c (Q_{k-1}^c)'$ respectively. Consider the following optimization problem over variables\n$\\lambda \\in \\mathbb{R}, M \\in \\mathcal{S}^{\\bar{q}_{k-1}}_{+}$:\n\\begin{subequations}\\label{eqn:M}\n\\begin{align}\n(OP) \\quad \\quad \\min \\quad &rank(\\lambda I - M) \\label{eqn:minrank}\\\\\n&M \\succ 0 \\label{eqn:Mpositive}\\\\\n&\\Phi_{k-1} vec(M) = b_{k-1} \\label{eqn:Phi}\n\\end{align}\n\\end{subequations}\nThen, the next lemma summarizes its properties.\n\\begin{lemma} \\label{lem:prop1}\nSuppose $\\mathcal{L}_{k-1}$ satisfies (L1)-(L3) and (A1). Then\n(i) OP has a feasible solution. (ii) $spec(H(M))=spec(M) \\cup spec(D_c)$ for any feasible $M$. (iii) Suppose $(\\lambda^*, M^*)$ is the optimizer of OP. Then $H(M^*)$ satisfies (L1)-(L3) and (A1).\n\n\\end{lemma}\n\\textit{Proof:} (i) Choose $\\hat{M}=D_o$ where $D_o$ is that given in (\\ref{eqn:Lkminus1}) and $\\hat{\\lambda}$ be any diagonal element of $D_o$. Then $(\\hat{\\lambda},\\hat{M})$ is a feasible solution to OP since $\\mathcal{L}_{k-1}$ satisfies (L1)-(L3) and (A1). (ii) The property is obvious from the expression of $H(M)$ of (\\ref{eqn:HM}). (iii) From (ii), the spectrum of $H(M^*)$ consists of the eigenvalues in $\\mathcal{C}_{k-1}$ (with the corresponding multiplicities) and those of $M^*$. Since $M^*\\succ 0$ and all values in $\\mathcal{C}_{k-1}$ are non-negative with $0 \\in \\mathcal{C}_{k-1}$ being simple, (L1) and (L2) hold for $H(M^*)$. The condition of (L3) is ensured by constraint (\\ref{eqn:Phi}). Since $\\mathcal{L}_{k-1}$ satisfies (A1) and $M^* \\succ 0$, the second smallest eigenvalue of $H(M^*)$ is also strictly greater than 0, which implies that $H(M^*)$ satisfies (A1). $\\Box$\n\nClearly, minimizing $rank(\\lambda I -M)$ in (\\ref{eqn:minrank}) is equivalent to maximizing the dimension of the nullspace of $\\lambda I -M$, which in turn leads to $\\lambda$ having the largest multiplicity. However, rank minimization is a well-known difficult numerical problem. The numerical experiments undertaken (see Section \\ref{sec:num}) suggest that the nuclear norm approximation appears to be the most reliable since it results in a convex optimization problem and is known to be the tightest pointwise convex lower bound of the rank function (\\cite{ART:RFP10}). If such a relaxation is taken, the optimization problem over $\\lambda \\in \\mathbb{R}$ and $M \\in \\mathcal{S}^{\\bar{q}_{k-1}}_{+}$ becomes\n\\begin{subequations}\\label{eqn:nuclearnorm2}\n\\begin{align}\nOPA(\\Phi_{k-1}, b_{k-1}): \\; &\\min_{\\lambda, M} \\|\\lambda I - M\\|_* \\label{eqn:minnuclear2}\\\\\n&M \\succ \\epsilon_M I_{\\bar{q}_{k-1}} \\label{eqn:Mepsilon2}\\\\\n&\\Phi_{k-1} vec(M) = b_{k-1} \\label{eqn:Phi2}\n\\end{align}\n\\end{subequations}\nwhere $\\epsilon_M$ is some small positive value to prevent eigenvalues of $M$ being too close to $0$.
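For illustration, OPA admits a direct transcription into CVXPY (a hedged sketch, not the authors' MATLAB implementation; here Phi and b are assumed to encode the constraints of (\\ref{eqn:Phi2}) with respect to the same vectorization convention as cp.vec):\n\\begin{verbatim}\nimport cvxpy as cp\nimport numpy as np\n\ndef solve_OPA(Phi, b, q_bar, eps_M=0.01):\n    M = cp.Variable((q_bar, q_bar), symmetric=True)\n    lam = cp.Variable()\n    objective = cp.Minimize(cp.normNuc(lam * np.eye(q_bar) - M))\n    constraints = [M >> eps_M * np.eye(q_bar),  # M > eps_M * I\n                   Phi @ cp.vec(M) == b]        # sparsity pattern of G\n    cp.Problem(objective, constraints).solve()\n    return lam.value, M.value\n\\end{verbatim}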
Suppose ($\\lambda^* , M^*$) is the optimizer of OPA. There are many cases where the solution of OPA provides a low value of $rank(\\lambda^* I -M^*)$. However, there are also many cases where the solutions of OP and OPA differ. This is not unexpected since the nuclear norm is a relaxation of the rank function. In one of these cases, further progress can be made. This special case is characterized by $M^*$ having several eigenvalues that are relatively close to one another (known hereafter as bunch eigenvalues) but are not close enough for\nthe nullspace of $(\\lambda^* I - M^*)$ to have a dimension greater than one. When this situation is detected, a correction step is invoked. Specifically, suppose $spec(M^*)=\\{\\mu_1,\\cdots, \\mu_{\\bar{q}_{k-1}}\\}$ and there are $\\ell$ bunch eigenvalues with $\\bar{q}_{k-1} \\ge \\ell \\ge 2$ in the sense that\n\\begin{align}\\label{eqn:lambdastar}\n|\\lambda^* - \\mu_i| < \\epsilon_\\mu \\quad \\forall i =1,\\cdots, \\ell\n\\end{align}\nwhere $\\epsilon_\\mu >0$ is some appropriate tolerance. For notational simplicity, the description of the correction step uses $q$ and $\\bar{q}$ for $q_{k-1}$ and $\\bar{q}_{k-1}$ respectively. The inputs to the correction step are $\\lambda^*, M^*,\\ell$ from (\\ref{eqn:nuclearnorm2}) and (\\ref{eqn:lambdastar}).\n\nCorrection step: COS($\\lambda^*, M^*,\\ell$)\n\\begin{description}\n  \\item Step 1: Let $\\eta=0$ and approximate $(\\lambda^* I - M^*)$ by a rank-$(\\bar{q}-\\ell)$ matrix of the form\n  $$\\lambda^* I - M^* \\approx F_\\eta G_\\eta'$$\nvia full column rank matrices $F_\\eta, G_\\eta \\in \\mathbb{R}^{\\bar{q} \\times (\\bar{q}-\\ell)}$.\n  \\item Step 2: Solve the following optimization problem over variables $\\lambda \\in \\mathbb{R}, M \\in \\mathcal{S}^{\\bar{q}}_+, \\Delta_G, \\Delta_F \\in \\mathbb{R}^{\\bar{q} \\times (\\bar{q}-\\ell)}$:\n\\begin{subequations}\\label{eqn:nuclearnorm3}\n\\begin{align}\nOPC(\\Phi_{k-1}, b_{k-1}): &\\min \\|\\lambda I - M - F_\\eta G_\\eta' - F_\\eta \\Delta_G' - \\Delta_F G_\\eta' \\|_F \\label{eqn:minnuclear3}\\\\\n&M \\succ \\epsilon_M I_{\\bar{q}} \\label{eqn:Mepsilon3}\\\\\n&\\|\\Delta_G\\|_F \\le \\epsilon_G, \\quad \\|\\Delta_F\\|_F \\le \\epsilon_F \\label{eqn:DeltaG}\\\\\n&\\Phi_{k-1} vec(M) = b_{k-1} \\label{eqn:Phi3}\n\\end{align}\n\\end{subequations}\n\\indent \\indent and let its optimizers be $\\lambda_\\eta^*, M_\\eta^*, \\Delta_F^*, \\Delta_G^*$.\n  \\item Step 3: If $\\|\\lambda_\\eta^* I - M_\\eta^* - F_\\eta G_\\eta' - F_\\eta (\\Delta_G^*)' - (\\Delta_F^*) G_\\eta'\\|_F < \\epsilon_C \\cdot \\bar{q}$ or $\\eta \\ge \\eta_{max}$, then stop.\n  \\item Step 4: Else, let $F_{\\eta+1}=F_\\eta + \\Delta_F^*$, $G_{\\eta+1}=G_\\eta + \\Delta_G^*$, $\\eta=\\eta+1$ and goto Step 2.\n\\end{description}\nThe motivation of the correction step is clear. When $F_\\eta$ and $G_\\eta$ are of rank ${\\bar{q}-\\ell}$, so is the matrix $(F_\\eta+\\Delta_F)(G_\\eta+\\Delta_G)'$, of which $F_\\eta G_\\eta' + F_\\eta \\Delta_G' + \\Delta_F G_\\eta'$ is the first-order approximation. Hence, when $\\|\\lambda_\\eta^* I - M_\\eta^* - F_\\eta G_\\eta' - F_\\eta (\\Delta_G^*)' - (\\Delta_F^*) G_\\eta'\\|_F $ is sufficiently small, $\\lambda_\\eta^* I - M_\\eta^*$ has (numerically) a nullspace of dimension $\\ell$. The use of full rank matrices $F$ and $G$ to find a low-rank solution has appeared in the literature (\\cite{INP:FHB04}). However, its use as a correction step after nuclear norm optimization is novel, to the best of the authors' knowledge.
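Assuming the truncated-SVD initialization detailed in the remark that follows, Step 1 of COS may be sketched as (a hypothetical helper, written for the general case):\n\\begin{verbatim}\nimport numpy as np\n\ndef cos_init(lam_star, M_star, ell):\n    # rank (q_bar - ell) approximation of (lam* I - M*) via truncated SVD\n    A = lam_star * np.eye(M_star.shape[0]) - M_star\n    U, s, Vt = np.linalg.svd(A)\n    r = A.shape[0] - ell\n    F0 = U[:, :r] * np.sqrt(s[:r])     # F0 = U_r Sigma_r^(1/2)\n    G0 = Vt[:r, :].T * np.sqrt(s[:r])  # so that F0 @ G0.T approximates A\n    return F0, G0\n\\end{verbatim}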
\\begin{remark}\nThe successful termination of COS depends critically on the choices of $F_0$ and $G_0$. While some options exist, the choice adopted is to let $F_0=G_0=U_{\\bar{q}-\\ell} \\Sigma_{\\bar{q}-\\ell}^{0.5}$ where $U_{\\bar{q}-\\ell}$ consists of the first ${\\bar{q}-\\ell}$ columns of $U$, $\\Sigma_{\\bar{q}-\\ell}^{0.5}=diag\\{\\sigma_1^{0.5}, \\cdots, \\sigma_{\\bar{q}-\\ell}^{0.5}\\}$ with\n$\\sigma_1, \\cdots, \\sigma_{\\bar{q}-\\ell}$ being the largest $\\bar{q}-\\ell$ singular values of $(\\lambda^* I -M^*)$ and $U$ being the matrix of corresponding singular vectors.\n\\end{remark}\n\nAs mentioned earlier, the other subalgorithm at iteration $k$ is OPB. OPB is similar to OPA except that $\\lambda$ is not a variable. Instead, $\\lambda$ is a prescribed value taken successively from $\\mathcal{C}_{k-1}\\backslash\\{0\\}$. The need for such a step arises from the nonlinear nature of the rank function. Numerical experiments suggest that the same eigenvalue can be obtained from $M$ even though this eigenvalue has been obtained from OPA or COS in an earlier iteration. Hence, the intention of OPB is to check if additional multiplicities can be added to those eigenvalues in $\\mathcal{C}_{k-1}\\backslash \\{0\\}$. Specifically, the optimization problem is\n\\begin{subequations}\\label{eqn:nuclearnorm4}\n\\begin{align}\nOPB(\\lambda,\\Phi_{k-1},b_{k-1}): \\quad \\quad \\min_{M} \\; &\\|\\lambda I - M\\|_* \\label{eqn:minnuclear4}\\\\\n&M \\succ \\epsilon_M I_{\\bar{q}_{k-1}} \\label{eqn:Mepsilon4}\\\\\n&\\Phi_{k-1} vec(M) = b_{k-1} \\label{eqn:Phi4}\n\\end{align}\n\\end{subequations}\n\n\\begin{corollary} \\label{lem:prop2}\nSuppose $\\mathcal{L}_{k-1}$ satisfies (L1)-(L3) and (A1). Then (i) OPA$(\\Phi_{k-1},b_{k-1})$ and OPB$(\\lambda,\\Phi_{k-1},b_{k-1})$ have a feasible solution, (ii) $H(M^*)$ where $M^*$ is the optimizer of OPA$(\\Phi_{k-1},b_{k-1})$ or OPB$(\\lambda,\\Phi_{k-1},b_{k-1})$ satisfies (L1)-(L3) and (A1). (iii) If OPC$(\\Phi_{k-1},b_{k-1})$ terminates successfully at step 3 of COS with $\\lambda_\\eta^*, M_\\eta^*, \\Delta_F^*, \\Delta_G^*$, then $H(M_\\eta^*)$ satisfies (L1)-(L3) and (A1).\n\\end{corollary}\n\\textit{Proof:} The proof follows the same reasoning as that given in the proof of Lemma \\ref{lem:prop1}. $\\Box$\n\n\\section{The Overall Algorithm} \\label{sec:algo}\nThe main algorithm can now be stated.\n\n\\begin{algorithm}[H]\n\\caption{The Minimal Polynomial Algorithm}\n\\textbf{Input:} $\\mathcal{A}_s(G)$ (the standard adjacency matrix of graph $G$), $\\epsilon_M, \\epsilon_G, \\epsilon_F, \\epsilon_C, \\epsilon_\\mu.$\\\\\n\\textbf{Output:} $\\mathcal{L}(W_k,G)$, $\\mathcal{C}_k=\\{\\lambda_1,\\cdots,\\lambda_{\\xi(k)}\\}$ and $\\mathcal{M}_k=\\{m_1, \\cdots, m_{\\xi(k)}\\}$. \\\\\n\\textbf{Initialization} \\\\\nExtract $\\mathcal{V}(G), \\mathcal{E}(G)$ from $\\mathcal{A}_s(G)$.\nLet $\\mathcal{L}_0=\\mathcal{L}_s$, the standard Laplacian matrix of (\\ref{eqn:Lii}) for graph $G$.\\\\\nSet $\\mathcal{C}_0=\\{0\\}, \\mathcal{M}_0=\\{1\\}$, $\\xi(0)=1$ and $k=1$. \\\\\n\\textbf{Main}\n\\begin{description}\n  \\item[1] Compute the eigen-decomposition of $\\mathcal{L}_{k-1}=Q_{k-1} \\Lambda_{k-1} Q_{k-1}'$ according to (\\ref{eqn:Lkminus1}). Set up $\\Phi_{k-1}, b_{k-1}$ according to (\\ref{eqn:HM}). Compute $q_{k-1}, \\bar{q}_{k-1}$ according to (\\ref{eqn:qkminus1}). Call OPA$(\\Phi_{k-1}, b_{k-1})$ and denote its optimizer as $(\\lambda_A^\\dag,M_A^\\dag)$ with $spec(M_A^\\dag)=\\{\\mu_1,\\cdots, \\mu_{\\bar{q}_{k-1}}\\}$. Let $r_A=\\bar{q}_{k-1}$.\n  \\item[2] If (\\ref{eqn:lambdastar}) is satisfied with $\\ell \\ge 2$, then call COS$(\\lambda_A^\\dag,M_A^\\dag, \\ell)$. 
If COS terminates successfully, let the optimizer be $(\\lambda_A^*,M_A^*)$ and $r_A = \\bar{q}_{k-1}-\\ell+1$.\n  \\item[3] If $\\xi(k-1)=1$, let $r_B=\\bar{q}_{k-1}$ and goto step 5. Else, let $n_\\mathcal{C} = \\xi(k-1)$.\n  \\item[4] For each $i=2, \\cdots, n_\\mathcal{C}$,\n\\begin{description}\n  \\item[(i)] call OPB($\\lambda_i,\\Phi_{k-1}, b_{k-1})$ and denote its optimizer as $M_i^\\dag$ with $spec(M_i^\\dag)=\\{\\mu_1,\\cdots, \\mu_{\\bar{q}_{k-1}}\\}$. Let $r^i_B = \\bar{q}_{k-1}$.\n  \\item[(ii)] If (\\ref{eqn:lambdastar}) is satisfied with $\\ell \\ge 1$, then call COS$(\\lambda_i, M_i^\\dag, \\ell)$. If COS terminates successfully, let its optimizer be $M_i^*$ and $r^i_B = \\bar{q}_{k-1}-\\ell$.\n\\end{description}\nNext $i$\\\\\nLet $i_B^*=\\arg \\min_{i=2,\\cdots, n_\\mathcal{C}}r^i_B$ and $r_B = r_B^{i_B^*}$. If $r_B \\le \\bar{q}_{k-1}-1$, let $(\\lambda_B,M_B)=(\\lambda_{i_B^*},M_{i_B^*}^*)$.\n  \\item[5] If $r_A = \\bar{q}_{k-1}$ and $r_B = \\bar{q}_{k-1}$, then the algorithm terminates.\n  \\item[6] If $r_A < r_B$, then let $(\\lambda^*,M^*)= (\\lambda_A^*,M_A^*)$,\n$\\mathcal{C}_{k}=\\mathcal{C}_{k-1} \\cup \\{\\lambda^*\\}$, $\\xi(k)=\\xi(k-1)+1$, $m_{\\xi(k)}= \\bar{q}_{k-1}-r_A+1$, $\\mathcal{M}_{k}=\\mathcal{M}_{k-1}\\cup \\{m_{\\xi(k)}\\}$. Else, let $(\\lambda^*,M^*)= (\\lambda_B,M_B)$, $\\mathcal{C}_{k}=\\mathcal{C}_{k-1}$, $\\xi(k)=\\xi(k-1)$, $m_{i_B^*}= m_{i_B^*} + (\\bar{q}_{k-1}-r_B)$, $\\mathcal{M}_k=\\mathcal{M}_{k-1}$.\n  \\item[7] Let $\\mathcal{L}_k=H(M^*)$ where $H(\\cdot)$ is that given by (\\ref{eqn:HM}) and $k=k+1$. Go to 1.\n\\end{description}\n\n\\end{algorithm}\n\nIn the above description, quantities $r_A, r_B^i, r_B$ are the estimates of the ranks of $M_A, M_B^i$ and $M_B$ respectively. Note that the ranks of $M_A, M_B^i$ and $M_B$ are never computed exactly. Their values are only guaranteed via the successful termination of the COS routine, as given by steps 2 and 4(ii) respectively.\n\n\n\\section{Numerical Examples}\\label{sec:num}\nThis section begins with examples of graphs with known minimal polynomials. In particular, a complete graph, a star-shaped graph and a special regular graph (\\cite{ART:VH98}) are used. For each, the minimal polynomials based on the standard Laplacian are used as a reference for the output of Algorithm 1. The parameters in Algorithm 1 are: $\\epsilon_M=0.01, \\epsilon_G=0.01, \\epsilon_F=0.01, \\epsilon_C=10^{-7}, \\epsilon_\\mu=0.01.$ Most of the computations in this section can be done in tens of seconds except when $n=50$, where the computational times are in the range of 5-10 minutes on a Windows 7 PC with an Intel Core i5-3570 processor and 8GB memory. The MATLAB implementation of this algorithm is available at \\cite{MISC:WO17}.
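In the comparisons below, the order of the minimal polynomial of a computed Laplacian is read off numerically as its number of distinct eigenvalues (properties (iii)-(v) of Lemma \\ref{lem:poly}); a minimal sketch, with a hypothetical clustering tolerance:\n\\begin{verbatim}\nimport numpy as np\n\ndef min_poly_order(L, tol=1e-6):\n    # number of distinct eigenvalues = order of the minimal polynomial\n    ev = np.linalg.eigvalsh(L)   # ascending eigenvalues of symmetric L\n    return 1 + int(np.sum(np.diff(ev) > tol))\n\\end{verbatim}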
\\begin{figure}[H]\n\\centering\n\\subfigure[The complete graph]{\\label{fig:complete}\\includegraphics[width=1.6in]{complete_graph.eps}}\n\\subfigure[The star]{\\label{fig:star}\\includegraphics[width=1.6in]{star.eps}}\n\\subfigure[The regular graph]{\\label{fig:regular}\\includegraphics[width=1.6in]{regular_graph.eps}}\n\\caption{Special examples}\n\\label{fig:regularstar}\n\\end{figure}\n\nThe spectra of the standard Laplacian matrices of these graphs are: $\\{0,8,8,8,8,8,8,8\\},\\{0,1,1,1,1,1,1,8\\}$, and $\\{0,4,4,4,4,4,4,8\\}$ respectively. The corresponding spectra of $\\mathcal{L}(W,G)$ after Algorithm 1 are: \\\\ $\\{0,0.2610,0.2610,0.2610,0.2610,0.2610,0.2610,0.2610\\}$, $\\{0,7.9982,0.9998,0.9998,0.9998,0.9998,0.9998,0.9998\\}$\\\\\nand $\\{0,2.0006,1.0003,1.0003,1.0003,1.0003,1.0003,1.0003\\}$ respectively. Hence, the algorithm preserves the order of the minimal polynomials for these cases.\n\nThe next example is a 10-agent system with a randomly generated topology as given in Figure \\ref{fig:10agent}. It is used to illustrate the progress of $\\mathcal{L}(W,G)$ for a typical graph as it goes through Algorithm 1, as well as to evaluate the relevance of the subroutines involved. The next table shows such a case. The second column (labeled Defining Step) of Table \\ref{tab:1} refers to the procedure that determines the $M$ matrix of $H(M)$. As the table shows, all routines (OPA, COS-of-OPA, OPB and COS-of-OPB) are needed to achieve the minimal polynomial. It also validates the necessity of OPB and COS-of-OPB in the algorithm.\n\n\\begin{figure}[H]\n  \\centering\n  \\includegraphics[width=0.5\\linewidth]{illustrate_example.eps}\n\\caption{A random 10-agent network}\n\\label{fig:10agent}\n\\end{figure}\n\n\\begin{table}[H]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\n  \\hline\n  \n  k & Defining Step & Spectra of $H(M^*)$ & order of $m_{\\mathcal{L}_k}$ \\\\\n  \\hline\n  0 & Standard Laplacian & \\{0,2.5721,3.7509,4.5858,6.6243,7.1464,7.4142,8,8.8035,9.1028\\} & 10 \\\\\n  \\hline\n  1 & COS of OPA & \\{0,2.7246,1.0923,0.9992,0.9995,2.2654,2.1183,2.0839,2.0839,2.0839\\} & 8 \\\\\n  \\hline\n  2 & COS of OPB & \\{0,2.9853,0.7331,1.3771,1.5191,2.3286,2.0839,2.0839,2.0839,2.0839\\} & 7 \\\\\n  \\hline\n  3 & COS of OPA & \\{0,0.7464,2.9379,2.2874,1.3771,1.3771,2.0839,2.0839,2.0839,2.0839\\} & 6 \\\\\n  \\hline\n  4 & Terminated & \\{0,0.7464,2.9379,2.2874,1.3771,1.3771,2.0839,2.0839,2.0839,2.0839\\} & 6\\\\\n  \\hline\n\\end{tabular}\n\\caption{The Steps in Algorithm 1 for the graph of Figure \\ref{fig:10agent}} \\label{tab:1}\n\\end{center}\n\\end{table}\n\n\n\n\nThe next table shows results from Algorithm 1 on graphs for which the minimal polynomials are unknown. These graphs are generated based on the following procedure. For every pair of nodes in $\\mathcal{V}$, a value is drawn from the uniform distribution on $[0,1]$ and the link connecting the pair exists if and only if this value is above a given threshold. Graphs of various sizes and topologies are generated in this way. For each graph $G$ generated, validity of assumption (A1) is ensured by checking that the second smallest eigenvalue of $\\mathcal{L}_s$ satisfies $\\lambda_2 (\\mathcal{L}_s(G)) >0$. For each choice of threshold and size, $20$ examples are generated randomly, as sketched below.
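A minimal sketch of this generation procedure (ours; the tolerance on $\\lambda_2$ is a hypothetical choice):\n\\begin{verbatim}\nimport numpy as np\n\ndef random_connected_graph(n, threshold, rng=np.random.default_rng()):\n    # keep redrawing until assumption (A1) holds\n    while True:\n        draws = rng.uniform(size=(n, n))\n        A = np.triu(draws > threshold, k=1)   # one draw per node pair\n        A = (A + A.T).astype(float)           # symmetric adjacency matrix\n        L = np.diag(A.sum(axis=1)) - A        # standard Laplacian L_s\n        if np.linalg.eigvalsh(L)[1] > 1e-9:   # lambda_2(L_s) > 0\n            return A\n\\end{verbatim}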
Let $s_{\\mathcal{L}(W)}$ denote the order of $m_{\\mathcal{L}(W)}$ from Algorithm 1 and recall that $s_{\\mathcal{L}(W)}-1$ is the number of steps needed to achieve consensus from (\\ref{eqn:sumtoT}). The means and standard deviations of $s_{\\mathcal{L}(W)}$ over the 20 examples are given in the following table:\n\\begin{table}[H]\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|}\n  \\hline\n  \n  Threshold & $n$ & mean of $s_{\\mathcal{L}(W)}$ & std. dev. of $s_{\\mathcal{L}(W)}$ & mean of $s_{\\mathcal{L}_s}$ & std. dev. of $s_{\\mathcal{L}_s}$ \\\\\n  \\hline\n  0.3 & 10 & 5.45 & 0.865 & 7.55 & 1.43\\\\\n  0.6 & 10 & 8.5 & 0.5 & 9.95 & 0.218\\\\\n  \\hline\n  0.3 & 20 & 7.85 & 1.96 & 19.8 & 0.536\\\\\n  0.6 & 20 & 16.9 & 0.624 & 20 & 0\\\\\n  \\hline\n  0.3 & 50 & 19.3 & 1.58 & 50 & 0\\\\\n  0.6 & 50 & 40.1 & 1.26 & 50 & 0\\\\\n  \\hline\n\\end{tabular}\n\\caption{The mean and standard deviation of the order of the minimal polynomial over 20 random graphs} \\label{tab:2}\n\\end{center}\n\\end{table}\n\n\nTwo trends are clear from Table \\ref{tab:2} besides the obvious one that the order of the minimal polynomial increases for increasingly sparse networks. Let $\\bar{s}_{{\\mathcal{L}(W)}}$ denote the mean of $s_{\\mathcal{L}(W)}$ (third column of Table \\ref{tab:2}). The first is that the relative decrease in $\\frac{\\bar{s}_{\\mathcal{L}(W)}}{n}$ from sparse to dense networks increases for increasing values of $n$: it went from $0.85$ to $0.545$ when $n=10$ and from $0.8$ to $0.386$ when $n=50$. This suggests that the proposed approach is more effective for larger networks. The second trend is that the percentage decrease in the order of the minimal polynomial is more pronounced for better-connected systems - the ratio of $\\frac{\\bar{s}_{\\mathcal{L}(W)}}{n}$ decreases from $0.545$ to about $0.386$ when $n$ increases from 10 to 50. The percentage decrease is less in the case of sparsely-connected networks, where $\\frac{\\bar{s}_{\\mathcal{L}(W)}}{n}$ goes from $0.85$ to $0.8$ for the corresponding increase in $n$. This suggests the difficulty of reducing the order of the minimal polynomial for sparsely-connected networks.\n\n\n\\section{Conclusions}\\label{sec:con}\nThis work presents an approach to speed up finite-time consensus by searching over the weights of a weighted Laplacian matrix.\nThe intention is to find the weights that minimize the number of distinct eigenvalues of the Laplacian matrix. As rank minimization is a difficult problem, this work\nuses an iterative process wherein two optimization problems are solved at each iteration. In each optimization problem, a nuclear norm convex optimization is first solved, followed by a correction step via a low-rank approximation. Numerical experiments suggest that this two-step process is effective in finding the desired low-rank solutions.\n\nThe numerical experiments show that the minimal polynomials obtained from the iterative process are of lower or equal order compared to those obtained from the standard Laplacian. Using the minimal polynomial so obtained, finite-time consensus can be achieved in $(s_\\mathcal{L} -1)$ time steps, where $s_\\mathcal{L}$ is the order of the minimal polynomial.\n\n\n\n\n\n\n\\bibliographystyle{dcu}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\\label{sec1}\nFor $\\alpha,\\beta >-1$, the Jacobi polynomials (cf. e.g. 
\\cite[Chapter 9.8]{koe-les-swa:hypergeometric})\n\\begin{subequations}\n\\begin{equation}\nP_n^{(\\alpha ,\\beta)} (x ):=\\frac{ (\\alpha+1)_n}{n!} \\,\n {}_2F_1\n\\left[ \\begin{array}{c} -n,n+ \\alpha+\\beta +1 \\\\\n\\alpha +1 \\end{array} ; \\frac{1-x}{2} \\right] \n\\end{equation}\nsatisfy the orthogonality relation\n\\begin{align}\n\\int_{-1}^1 &(1-x)^\\alpha (1+x)^\\beta P_n^{(\\alpha ,\\beta)} (x )P_m^{(\\alpha ,\\beta)} (x )\\, \\text{d}x \\\\\n&=\\begin{cases}\n\\frac{2^{\\alpha+\\beta+1}\\Gamma(n+\\alpha +1)\\Gamma (n+\\beta+1) }\n{(2n+\\alpha+\\beta+1) \\Gamma(n+\\alpha+\\beta +1) n!} &\\ \\text{if}\\ m=n, \\\\\n0 &\\ \\text{if}\\ m\\neq n,\n\\end{cases} \\nonumber\n\\end{align}\nfor $n,m=0,1,2,\\ldots$.\n\\end{subequations}\nHere $\\Gamma (\\cdot)$ refers to the gamma function, and we have adopted standard conventions for the Pochhammer symbol\n\\begin{equation*}\n(a)_n := \\prod_{0\\leq k<n} (a+k) .\n\\end{equation*}\n\n\\section{Symmetric Continuous Hahn Polynomials}\\label{sec2}\n\n\\subsection{Preliminaries}\nBelow, $\\text{p}_n(x;a,b)$ denotes the monic continuous Hahn polynomial of degree $n$ in $x$ with parameter values $(a,b,\\overline{a},\\overline{b})$ (cf. \\cite[Chapter 9.4]{koe-les-swa:hypergeometric}). When $\\text{Re}(a), \\text{Re}(b)>0$, these polynomials satisfy the orthogonality relations \\cite[\\text{Chapter 9.4}]{koe-les-swa:hypergeometric}\n\\begin{subequations}\n\\begin{align}\n&\\frac{1}{2\\pi} \\int_{-\\infty}^\\infty \\Delta (x ;a,b) \\text{p}_n(x ;a,b) \\text{p}_m(x ;a,b) \\text{d}x = \\\\\n &\n\\begin{cases}\n\\frac{n! \\Gamma(a+\\overline{a}+b+\\overline{b}+ n-1)\\Gamma ( a+\\overline{a}+n )\\Gamma (a+\\overline{b}+n) \\Gamma (\\overline{a}+b+n) \\Gamma (b+\\overline{b}+n) }\n{\\Gamma(a+\\overline{a}+b+\\overline{b}+2n-1) \\Gamma (a+\\overline{a}+b+\\overline{b} +2n) } &\\ \\text{if}\\ m=n, \\\\\n0 &\\ \\text{if}\\ m\\neq n,\n\\end{cases}\n\\nonumber\n\\end{align}\nwhere\n\\begin{equation}\\label{weight-cH}\n\\Delta (x;a,b)=\\left| \\Gamma (a+ix)\\Gamma (b+ix) \\right|^2.\n\\end{equation}\n\\end{subequations}\nUpon assuming that possible non-real parameters $a$ and $b$ arise as a complex conjugate pair, the weight function $\\Delta (x;a,b)$ is even. The corresponding orthogonal polynomials are referred to as \\emph{symmetric} continuous Hahn polynomials.\n\n\n\\subsection{Morse Function}\\label{sub-morse-cH}\nFor parameter values belonging to the above orthogonality regime with possible non-real parameters $a$ and $b$ arising as a complex conjugate pair, the roots\n\\begin{equation*}\nx_1^{(n)}<x_2^{(n)}<\\cdots <x_n^{(n)}\n\\end{equation*}\nof $\\text{p}_n(x ;a,b)$ are real and simple. Following \\cite[Remarks 5.4, 5.5]{die-ems:solutions}, these roots can be characterized as the unique critical point of the Morse function\n\\begin{equation}\\label{VCH}\nV(x_1,\\ldots ,x_n;a,b)=\\sum_{1\\leq j \\leq n}\\Bigl(\\int_0^{x_j}\\Bigl(\\arctan\\bigl(\\tfrac{y}{a}\\bigr)+\\arctan\\bigl(\\tfrac{y}{b}\\bigr)\\Bigr)\\text{d}y-\\tfrac{\\pi}{2}(2j-n-1)\\, x_j\\Bigr)+\\sum_{1\\leq j<k\\leq n}\\int_0^{x_j-x_k}\\arctan (y)\\, \\text{d}y .\n\\end{equation}\nThe corresponding gradient system is given by\n\\begin{subequations}\n\\begin{equation}\\label{g-flow}\n\\frac{\\text{d} x_j}{\\text{d} t}=-\\partial_{x_j}V(x_1,\\ldots ,x_n;a,b)\\qquad (j=1,\\ldots ,n),\n\\end{equation}\nwhich reads explicitly\n\\begin{align}\\label{g-flow-chahn}\n\\frac{\\text{d} x_j}{\\text{d} t}-\\frac{\\pi}{2}(2j-n-1)+\\arctan\\left(\\frac{x_j}{a}\\right)+\\arctan\\left(\\frac{x_j}{b}\\right)+\\sum_{\\substack{1\\leq k\\leq n \\\\ k\\neq j}}\\arctan (x_j-x_k)=0 .\n\\end{align}\n\\end{subequations}\n\n\\begin{theorem}\\label{cHzeros:thm}\n\\begin{subequations}\nLet $\\text{Re}(a), \\text{Re}(b)>0$ with possible non-real parameters $a$ and $b$ arising as a complex conjugate pair.\n\\begin{itemize}\n\\item[a)] The unique global minimum in $\\mathbb{R}^n$ of the strictly convex, radially unbounded,\nMorse function $V(x_1,\\ldots ,x_n;a,b)$ \\eqref{VCH} is attained at the symmetric continuous Hahn roots $x_j= x_j^{(n)}$ ($j=1,\\ldots ,n$).\n\\item[b)] Let\n$\\bigl(x_1(t),\\ldots ,x_n(t)\\bigr)$, $t\\geq 0$ denote the unique solution of the gradient system \\eqref{g-flow}, \\eqref{g-flow-chahn} determined by a choice for\nthe initial condition \n\\begin{equation*}\n(x_1(0),\\ldots ,x_n(0))\\in\\mathbb{R}^n,\n\\end{equation*}\n and let us fix any $\\kappa$ in the interval\n\\begin{equation}\n{ 0< \\kappa < \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(R_n+|\\text{Im}(a)|)^2}+ \\frac{\\text{Re}(b)}{\\text{Re}^2(b)+(R_n + |\\text{Im}(b)| )^2} },\n\\end{equation}\nwhere $R_n:=x_n^{(n)}$. 
Then there exists a constant $K>0$ (depending on the parameter values, on the initial condition, and on $\\kappa$) such that \n\\begin{equation}\n|x_j(t)-x_j^{(n)}|\\leq K e^{-\\kappa t} \\qquad (j=1,\\ldots,n)\n\\end{equation}\nwhenever $t\\geq 0$.\n\\end{itemize}\n\\end{subequations}\n\\end{theorem}\n\n\n\\subsection{Proof of Theorem \\ref{cHzeros:thm}}\nBecause $V(x_1,\\ldots ,x_n;a,b)$ \\eqref{VCH} is manifestly smooth on $\\mathbb{R}^n$, the existence of a global minimum is immediate from the observation that the function under consideration is radially unbounded:\n $V(x_1,\\ldots ,x_n;a,b)\\to +\\infty$ when $x_1^2+\\cdots +x_n^2\\to + \\infty$ (cf. \\cite[Section 5.1]{die-ems:solutions} for a detailed verification of this crucial property). The strict convexity confirms the uniqueness of the global minimum. Indeed,\n the relevant Hessian is of the form\n\\begin{align*}\n&H_{j,k}(x_1,\\ldots ,x_n;a,b) =\\partial_{x_j}\\partial_{x_{k}}V(x_1,\\ldots ,x_n;a,b) \\\\\n&=\n\\begin{cases}\n \\frac{a}{a^2+x_j^2}+ \\frac{b}{b^2+x_j^2}+\n \\sum_{l \\neq j} \\frac{1}{1+(x_j-x_l)^2} & \\text{if $j=k$,}\\\\\n -\\frac{1}{1+(x_j-x_k)^2} & \\text{if $j\\neq k$,}\\\\\n\\end{cases}\n\\nonumber\n\\end{align*}\nso\n\\begin{align*}\n\\sum_{1\\leq j,k \\leq n} y_j y_{k}H_{j,k} (x_1,\\ldots,x_n;a,b) \n=& \\sum_{1\\leq j\\leq n} \\left(\\frac{a}{a^2+x_j^2}+ \\frac{b}{b^2+x_j^2}\n \\right) y_j^2 \\nonumber \\\\\n&+ \\sum_{1\\leq j< k \\leq n } \n\\frac{(y_j-y_k)^2 }{1+ (x_j-x_k)^2} >0 \n\\end{align*}\nwhen $y_1^2+\\cdots +y_n^2\\neq 0$. The fact that this unique global minimum is attained precisely at the roots of the symmetric continuous Hahn polynomial $\\text{p}_n(x;a,b)$ follows via\n\\cite[Remarks 5.4, 5.5]{die-ems:solutions} as outlined above in Subsection \\ref{sub-morse-cH}, which completes the proof of part a) of the theorem.\n\nWe conclude from part a) that $(x_1^{(n)},\\ldots,x_n^{(n)})$ constitutes the unique equilibrium of our gradient system \\eqref{g-flow}, \\eqref{g-flow-chahn}. \nGiven any initial condition\nin $\\mathbb{R}^n$, the smoothness of the Morse function guarantees \n the existence and uniqueness of the solution $(x_1(t),\\ldots,x_n(t))$ locally for small $t\\geq 0$.\nMoreover, since $V(x_1,\\ldots, x_n;a,b)$ is radially unbounded and strictly decreasing along the gradient flow:\n\\begin{equation*}\n\\frac{\\text{d}}{\\text{d}t} V \\bigl (x_1(t), \\ldots ,x_n(t);a,b\\bigr)= -\\sum_{j=1}^n \\left| (\\partial_{x_j} V) \\bigl (x_1(t), \\ldots ,x_n(t);a,b\\bigr) \\right|^2 < 0 \n\\end{equation*}\n(outside the equilibrium), the solution cannot escape to infinity in finite time and thus extends (uniquely) for all $t\\geq 0$. In fact, the radial unboundedness of the\npotential $V(x_1,\\ldots, x_n;a,b)$ \\eqref{VCH} guarantees that the equilibrium $(x_1^{(n)},\\ldots,x_n^{(n)})$ is globally asymptotically stable (cf. e.g. \\cite[Theorem 4.2]{kha:nonlinear}):\n\\begin{equation}\\label{asymptotic}\n\\lim_{t\\to +\\infty} x_j(t) = x_j^{(n)}\\qquad (j=1,\\ldots ,n) .\n\\end{equation}\nThe rate of convergence to the equilibrium is determined by the lowest eigenvalue $\\lambda$ of the Jacobian of the linearized system at the equilibrium point. This Jacobian is given by\nthe above Hessian. Since the weight function $\\Delta (x;a,b)$ \\eqref{weight-cH} is even in $x$, the roots of $\\text{p}_n(x;a,b)$ are symmetrically distributed around the origin. 
Hence\n $-R_n\\leq x_j\\leq R_n$ ($j=1,\\ldots ,n$) at the equilibrium, and thus\n \\begin{align*}\n \\text{Re} \\Bigl( \\frac{a}{a^2+x_j^2} \\Bigr) =&\n\\frac{1}{2}\\left( \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(x_j+\\text{Im}(a))^2}+ \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(x_j-\\text{Im}(a))^2}\\right) \\\\\n\\geq & \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(R_n+|\\text{Im}(a)|)^2}\n \\end{align*}\nin this situation. The upshot is that\n\\begin{align*}\n&\\sum_{1\\leq j,k \\leq n} y_j y_{k}H_{j,k} (x_1^{(n)},\\ldots,x_n^{(n)};a,b) \\\\ \n& \\geq {\\textstyle \\left( \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(R_n+|\\text{Im}(a)|)^2}+ \\frac{\\text{Re}(b)}{\\text{Re}^2(b)+(R_n+|\\text{Im}(b)|)^2}\\right) ( y_1^2+\\cdots +y_n^2) }+ \\sum_{1\\leq j< k \\leq n } \n\\frac{(y_j-y_k)^2 }{1+ 4R_n^2 } \\\\\n&\\geq {\\textstyle \\left( \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(R_n+|\\text{Im}(a)|)^2}+ \\frac{\\text{Re}(b)}{\\text{Re}^2(b)+(R_n+|\\text{Im}(b)|)^2}\\right) ( y_1^2+\\cdots +y_n^2) },\n\\end{align*}\nwhich entails that\n\\begin{align*}\n\\lambda &= \\min_{\\substack{(y_1,\\ldots,y_n)\\in\\mathbb{R}^n\\\\ y_1^2+\\cdots +y_n^2\\neq 0}} \\frac{\\sum_{1\\leq j,k\\leq n} y_jy_k H_{j,k}(x_1^{(n)},\\ldots ,x_n^{(n)};a,b)}{y_1^2+\\cdots +y_n^2}\\\\\n&\\geq \n \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(R_n+|\\text{Im}(a)|)^2}+ \\frac{\\text{Re}(b)}{\\text{Re}^2(b)+(R_n + |\\text{Im}(b)| )^2} > \\kappa .\n\\end{align*}\nThis lower bound on the eigenvalue guarantees (cf. e.g. \\cite[Corollary 2.78]{chi:ordinary}) that there exists a neighborhood $U\\subset\\mathbb{R}^n$ around the equilibrium point $\\bigl( x_1^{(n)}, \\ldots , x_n^{(n)}\\bigr)$ and a constant $C>0$ such that\n\\begin{equation*}\n|x_j(t)-x_j^{(n)}|\\leq C |x_j(t_0)-x_j^{(n)}| e^{-\\kappa (t-t_0)} \\qquad (j=1,\\ldots,n),\n\\end{equation*}\nwhenever\n$(x_1(t_0),\\ldots ,x_n(t_0))\\in U$ and\n $t\\geq t_0$. To complete the proof of part b), it now suffices to recall that the gradient flow pulls us from any initial condition $(x_1(0),\\ldots ,x_n(0))\\in \\mathbb{R}^n$ into $U$ within finite time (in view of Eq. \\eqref{asymptotic}).\n\n\n\n\\subsection{Numerical Samples}\\label{cH-numerics}\nFor $n=30$ with\n$a=10$ and $b=\\frac{3}{10}$, the continuous Hahn roots are, to a precision of 4 decimals:\n\\begin{equation*}\n\\begin{matrix}\nx_1^{(30)}=-15.6230 ,& & x_6^{(30)}=-7.5285 , && x_{11}^{(30)}=-2.7503 , \\\\\n x_2^{(30)}=-13.3738 ,& & x_7^{(30)}=-6.4188 , && x_{12}^{(30)}=-1.9957 , \\\\\n x_3^{(30)}=-11.6001, & & x_8^{(30)}=-5.3956 , && x_{13}^{(30)}=-1.3059 , \\\\\n x_4^{(30)}=-10.0841 ,& & x_9^{(30)}=-4.4474 ,&& x_{14}^{(30)}=-0.6907 , \\\\\n x_5^{(30)}=-8.7415 ,& & x_{10}^{(30)}=-3.5671 , && x_{15}^{(30)}=-0.1919 \n \\end{matrix}\n\\end{equation*}\n(and $x^{(30)}_{j}=-x^{(30)}_{31-j}$ for $j=16,\\ldots ,30$).\nThe corresponding trajectories of $x_j^{(n)}(t)$ for $0\\leq t\\leq 30$ starting from the origin, i.e. 
with $x_j^{(n)}(0)=0$ ($j=1,\\ldots ,n$), are exhibited in Figure \\ref{cH-fig1}.\nThe slope of the evolution of the logarithmic error $\\log \\bigl| x_j(t)-x_j^{(n)}\\bigr|$ in Figure \\ref{cH-fig2} confirms that the convergence is exponential with a decay rate that\nexceeds the (not very sharp) estimate of $\\frac{a}{a^2+R_n^2}+\\frac{b}{b^2+R_n^2}\\approx 0.030$ \nguaranteed by Theorem \\ref{cHzeros:thm}.\n\n\n\n\\begin{figure}[h]\n \\centering\n \\caption{Continuous Hahn trajectories $x^{(30)}_1(t),\\ldots, x_{30}^{(30)}(t)$ corresponding to the initial condition $x_j^{(30)}(0)=0$ ($j=1,\\ldots ,30$) and the parameter values $a=10$ and $b=\\frac{3}{10}$.}\n\\includegraphics[scale=0.7]{cH-fig1} \\label{cH-fig1}\n\\end{figure}\n\n \\begin{figure}[h]\n \\centering\n \\caption{Evolution of the continuous Hahn logarithmic error $\\log\\bigl |x_j(t)-x_j^{(30)}\\bigr| $ for $j=1$ (top), $j=8$ (middle) and $j=15$ (bottom),\n corresponding to the initial condition $x_j^{(30)}(0)=0$ ($j=1,\\ldots ,30$) and the parameter values $a=10$ and $b=\\frac{3}{10}$.}\n\\includegraphics[scale=0.7]{cH-fig2} \\label{cH-fig2}\n\\end{figure}
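For reproducibility, the trajectories above can be mimicked by integrating the explicit flow \\eqref{g-flow-chahn} with a simple forward Euler scheme (a sketch for real parameter values; the step size and horizon are hypothetical choices, and any standard ODE solver would do equally well):\n\\begin{verbatim}\nimport numpy as np\n\ndef grad_V(x, a, b):\n    # gradient of the Morse function V(x; a, b) for real a, b > 0\n    n = len(x)\n    j = np.arange(1, n + 1)\n    pair = np.arctan(x[:, None] - x[None, :])   # arctan(x_j - x_k)\n    return (np.arctan(x / a) + np.arctan(x / b)\n            + pair.sum(axis=1) - 0.5 * np.pi * (2 * j - n - 1))\n\ndef flow_to_roots(n, a, b, dt=0.01, T=60.0):\n    x = np.zeros(n)                 # initial condition at the origin\n    for _ in range(int(T / dt)):\n        x = x - dt * grad_V(x, a, b)\n    return x                        # approximates the ordered roots\n\\end{verbatim}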
\\section{Wilson Polynomials}\\label{sec3}\n\n\n\\subsection{Preliminaries}\nThe monic Wilson polynomial $\\text{p}_n(x^2 ;a,b,c,d)$ \\cite{wil:some} is an orthogonal polynomial of degree $n$ in $x^2$ that sits at the top of Askey's hypergeometric scheme \\cite{koe-les-swa:hypergeometric}.\nFor generic parameter values it is given by the terminating hypergeometric series\n\\cite[\\text{Chapter 9.1}]{koe-les-swa:hypergeometric}\n\\begin{align}\\label{Wp}\n\\text{p}_n(x^2 ;a,b,c,d)=&\\frac{(-1)^n (a+b,a+c,a+d)_n}{(n+a+b+c+d-1)_n} \\\\\n &\\times {}_4F_3\n\\left[ \\begin{array}{c} -n,n+a+b+c+d-1,a+ix ,a-ix \\\\\na+b,a+c,a+d \\end{array} ; 1 \\right] .\\nonumber\n\\end{align}\nWhen $\\text{Re}(a), \\text{Re}(b),\\text{Re}(c),\\text{Re}(d)>0$ with possible non-real parameters arising in complex conjugate pairs, the polynomials in question satisfy the following orthogonality relations\n\\cite[\\text{Chapter 9.1}]{koe-les-swa:hypergeometric}\n\\begin{subequations}\n\\begin{align}\n&\\frac{1}{2\\pi} \\int_0^\\infty \\Delta (x ;a,b,c,d) \\text{p}_n(x^2 ;a,b,c,d) \\text{p}_m(x^2;a,b,c,d) \\text{d}x = \\\\\n &\n\\begin{cases}\n\\frac{n! \\Gamma(a+b+c+d+ n-1)\\Gamma ( a+b+n )\\Gamma (a+c+n) \\Gamma (a+d+n) \\Gamma (b+c+n) \\Gamma (b+d+n) \\Gamma (c+d+n) }\n{\\Gamma(a+b+c+d+2n-1) \\Gamma (a+b+c+d +2n) } &\\ \\text{if}\\ m=n, \\\\\n0 &\\ \\text{if}\\ m\\neq n,\n\\end{cases}\n\\nonumber\n\\end{align}\nwhere\n\\begin{equation}\\label{weight-W}\n\\Delta (x ;a,b,c,d)=\\left| \\frac{\\Gamma (a+ix)\\Gamma (b+ix)\\Gamma (c+ix)\\Gamma (d+ix)} {\\Gamma (2ix )}\\right|^2 .\n\\end{equation}\n\\end{subequations}\n\n\\subsection{Morse Function}\\label{sub-morse-W}\nFor parameter values within the above orthogonality regime, the roots\n\\begin{equation*}\n0<x_1^{(n)}<x_2^{(n)}<\\cdots <x_n^{(n)}\n\\end{equation*}\nof $\\text{p}_n(x^2 ;a,b,c,d)$ are real and simple. Following \\cite[Remarks 5.4, 5.5]{die-ems:solutions}, these roots can be characterized as the unique critical point of the Morse function\n\\begin{equation}\\label{VW}\nV(x_1,\\ldots ,x_n;a,b,c,d)=\\sum_{1\\leq j \\leq n}\\Bigl(\\sum_{\\epsilon\\in \\{a,b,c,d\\}}\\int_0^{x_j}\\arctan\\bigl(\\tfrac{y}{\\epsilon}\\bigr)\\text{d}y-\\pi j\\, x_j\\Bigr)+\\sum_{1\\leq j<k\\leq n}\\Bigl(\\int_0^{x_j+x_k}+\\int_0^{x_j-x_k}\\Bigr)\\arctan (y)\\, \\text{d}y ,\n\\end{equation}\nwith the corresponding gradient system $\\frac{\\text{d} x_j}{\\text{d} t}=-\\partial_{x_j}V(x_1,\\ldots ,x_n;a,b,c,d)$ reading explicitly\n\\begin{align}\\label{g-flow-wilson}\n\\frac{\\text{d} x_j}{\\text{d} t}-\\pi j+\\sum_{\\epsilon\\in \\{a,b,c,d\\}}\\arctan\\left(\\frac{x_j}{\\epsilon}\\right)+\\sum_{\\substack{1\\leq k\\leq n \\\\ k\\neq j}}\\bigl(\\arctan (x_j+x_k)+\\arctan (x_j-x_k)\\bigr)=0\n\\end{align}\n($j=1,\\ldots ,n$).\n\n\\begin{theorem} \\label{Wzeros:thm}\n\\begin{subequations}\nLet $\\text{Re}(a), \\text{Re}(b), \\text{Re}(c), \\text{Re}(d)>0$ with possible non-real parameters arising in complex conjugate pairs.\n\\begin{itemize}\n\\item[a)] The unique global minimum of the strictly convex, radially unbounded,\nMorse function $V(x_1,\\ldots ,x_n;a,b,c,d)$ \\eqref{VW} is attained at the Wilson roots $x_j= x_j^{(n)}$ ($j=1,\\ldots ,n$).\n\\item[b)] Let\n$\\bigl(x_1(t),\\ldots ,x_n(t)\\bigr)$, $t\\geq 0$ denote the unique solution of the gradient system \\eqref{g-flow-wilson} determined by a choice for\nthe initial condition\n\\begin{equation*}\n(x_1(0),\\ldots ,x_n(0))\\in\\mathbb{R}^n, \n\\end{equation*}\nand let us fix any $\\kappa$ in the interval\n\\begin{align}\\label{W-rate}\n{\\textstyle 0< \\kappa < \\frac{2(n-1)}{1+4R_n^2}+ \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(R_n+|\\text{Im}(a)|)^2}+ \\frac{\\text{Re}(b)}{\\text{Re}^2(b)+(R_n+|\\text{Im}(b)|)^2} } & \\\\\n{\\textstyle + \\frac{\\text{Re}(c)}{\\text{Re}^2(c)+(R_n+|\\text{Im}(c)|)^2}+ \\frac{\\text{Re}(d)}{\\text{Re}^2(d)+(R_n+|\\text{Im}(d)|)^2} }& \\nonumber\n\\end{align}\nwhere $R_n:=x_n^{(n)}$. Then there exists a constant $K>0$ (depending on the parameter values, on the initial condition, and on $\\kappa$) such that \n\\begin{equation}\n|x_j(t)-x_j^{(n)}|\\leq K e^{-\\kappa t} \\qquad (j=1,\\ldots,n)\n\\end{equation}\nwhenever $t\\geq 0$.\n\\end{itemize}\n\\end{subequations}\n\\end{theorem}\n\n\n\\subsection{Proof of Theorem \\ref{Wzeros:thm}}\nThe proof runs along the same lines as that of Theorem \\ref{cHzeros:thm}, so we only highlight some of the principal modifications in the corresponding computations.\n\nAs before, the first part of the theorem hinges on the considerations in\n\\cite[Remarks 5.4, 5.5]{die-ems:solutions} with the main points summarized above in Subsection \\ref{sub-morse-W}\n(cf. also the proof of \\cite[Proposition 4.1]{die-ems:solutions} for a detailed check that the present Morse function is indeed radially unbounded).\n\nTo infer the asserted estimate for the convergence rate, we must again provide a lower bound for the eigenvalues of the pertinent\nJacobian evaluated at the equilibrium point. 
This Jacobian is given by the Hessian\n\\begin{align}\\label{Hesse-AW}\n&H_{j,k}(x_1,\\ldots ,x_n;a,b,c,d) =\\partial_{x_j}\\partial_{x_{k}}V(x_1,\\ldots ,x_n;a,b,c,d) \\\\\n&=\n\\begin{cases}\n\\frac{a}{a^2+x_j^2} +\\frac{b}{b^2+x_j^2} +\\frac{c}{c^2+x_j^2} +\\frac{d}{d^2+x_j^2} +\n \\sum_{l \\neq j} \\bigl( \\frac{1}{1+(x_j+x_l)^2}+ \\frac{1}{1+(x_j-x_l)^2} \\bigr) & \\text{if $j=k$,}\\\\\n \\frac{1}{1+(x_j+x_k)^2} - \\frac{1}{1+(x_j-x_k)^2} & \\text{if $j\\neq k$,}\\\\\n\\end{cases}\n\\nonumber\n\\end{align}\nwhich confirms the convexity:\n\\begin{align*}\n\\sum_{1\\leq j,k \\leq n} y_j y_{k} H_{j,k} (x_1,\\ldots,x_n;a,b,c,d) \n= \\sum_{1\\leq j\\leq n} \\Bigl( \\sum_{\\epsilon\\in \\{a,b,c,d\\}} \\frac{\\epsilon}{\\epsilon^2+x_j^2}\n \\Bigr) y_j^2 & \\nonumber \\\\\n+ \\sum_{1\\leq j< k \\leq n } \\Bigl(\n\\frac{(y_j+y_k)^2 }{1+ (x_j+x_k)^2} +\\frac{(y_j-y_k)^2 }{1+ (x_j-x_k)^2} \\Bigr) . &\n\\end{align*}\nSince $0<x_j^{(n)}\\leq R_n$ and $|x_j^{(n)}\\pm x_k^{(n)}|\\leq 2R_n$ at the equilibrium, each term can be estimated from below, so that for any $\\kappa$ in the interval \\eqref{W-rate}\n\\begin{align*}\n\\sum_{1\\leq j,k \\leq n} y_j y_{k} H_{j,k} (x_1^{(n)},\\ldots,x_n^{(n)};a,b,c,d) & \\nonumber \\\\\n\\geq \\Bigl( \\sum_{\\epsilon\\in \\{a,b,c,d\\}} \\frac{\\text{Re}(\\epsilon)}{\\text{Re}^2(\\epsilon)+(R_n+|\\text{Im}(\\epsilon)|)^2} +\\frac{2(n-1)}{1+4R_n^2} \\Bigr) \\sum_{1\\leq j\\leq n} y_j^2 > \\kappa \\sum_{1\\leq j\\leq n} y_j^2 .\n\\end{align*}\n\n\n\n\n\n\\subsection{Numerical Samples}\nFor $n=15$ with\n$a=\\frac{17}{3}$, $b=\\frac{1}{5}$, $c=1+i$, and $d=1-i$, the Wilson roots are, with a precision of 4 decimals:\n\\begin{equation*}\n\\begin{matrix}\nx_1^{(15)}=0.5274 ,& & x_6^{(15)}=3.7728 , && x_{11}^{(15)}=8.5546 , \\\\\n x_2^{(15)}=1.1194 ,& & x_7^{(15)}=4.5787 , && x_{12}^{(15)}=9.8143 , \\\\\n x_3^{(15)}=1.7050, & & x_8^{(15)}=5.4496 , && x_{13}^{(15)}=11.2449 , \\\\\n x_4^{(15)}=2.3375 ,& & x_9^{(15)}=6.3938 ,&& x_{14}^{(15)}=12.9284 , \\\\\n x_5^{(15)}=3.0266 ,& & x_{10}^{(15)}=7.4231 , && x_{15}^{(15)}=15.0759 . \n \\end{matrix}\n\\end{equation*}\nThe corresponding trajectories of $x_j^{(n)}(t)$ for $0\\leq t\\leq 30$, with an initial condition of the form $x_j^{(n)}(0)=0$ ($j=1,\\ldots ,n$), are exhibited in Figure \\ref{W-fig1}.\nThe slope of the evolution of the logarithmic error $\\log \\bigl| x_j(t)-x_j^{(n)}\\bigr|$ in Figure \\ref{W-fig2} confirms that the convergence is exponential with a decay rate that\nexceeds the not very sharp estimate of\n\\begin{equation*}\n \\sum_{\\epsilon\\in \\{a,b,c,d\\}} \\frac{\\text{Re}(\\epsilon)}{\\text{Re}^2(\\epsilon)+(R_n+|\\text{Im}(\\epsilon)|)^2} +\\frac{2(n-1)}{1+4R_n^2} \\approx 0.061\n\\end{equation*}\nguaranteed by Theorem \\ref{Wzeros:thm}.\n\n \\begin{figure}[h]\n \\centering\n \\caption{Wilson trajectories $x^{(15)}_1(t),\\ldots, x_{15}^{(15)}(t)$ corresponding to the initial condition $x_j^{(15)}(0)=0$ ($j=1,\\ldots ,15$) and the parameter values\n $a=\\frac{17}{3}$, $b=\\frac{1}{5}$, $c=1+i$, and $d=1-i$.}\n\\includegraphics[scale=0.7]{W-fig1} \\label{W-fig1}\n\\end{figure}\n\n \\begin{figure}[h]\n \\centering\n \\caption{Evolution of the Wilson logarithmic error $\\log\\bigl |x_j(t)-x_j^{(15)}\\bigr| $ for $j=1$ (bottom), $j=8$ (middle) and $j=15$ (top), corresponding to the initial condition $x_j^{(15)}(0)=0$ ($j=1,\\ldots ,15$) and the parameter values\n $a=\\frac{17}{3}$, $b=\\frac{1}{5}$, $c=1+i$, and $d=1-i$.}\n\\includegraphics[scale=0.7]{W-fig2} \\label{W-fig2}\n\\end{figure}\n \n\n\n\n\\section{Symmetry Reduction}\\label{sec4}\nAs noticed above, the weight function $\\Delta (x;a,b)$ \\eqref{weight-cH} is even in $x$ for our parameter regime, so the roots of the corresponding symmetric continuous Hahn polynomial $\\text{p}_n(x;a,b)$ are symmetrically distributed around the origin:\n\\begin{equation}\nx_j^{(n)}+x_{n+1-j}^{(n)}=0\\quad\\text{for}\\ j=1,\\ldots ,n.\n\\end{equation}\nIt is manifest from the explicit differential equations that the gradient system in Eqs.
\\eqref{g-flow}, \\eqref{g-flow-chahn} preserves this parity symmetry. More specifically, if the initial condition satisfies\n\\begin{subequations}\n\\begin{equation}\nx_j(0)+x_{n+1-j}(0)=0\\quad\\text{for}\\ j=1,\\ldots ,n,\n\\end{equation}\nthen so does the corresponding gradient flow $\\bigl(x_1(t),\\ldots ,x_n(t)\\bigr)$, $t\\geq 0$:\n\\begin{equation}\nx_j(t)+x_{n+1-j}(t)=0\\quad\\text{for}\\ j=1,\\ldots ,n.\n\\end{equation}\n\\end{subequations}\nIf we now perform the reduction of our gradient system to the pertinent $m:=\\lfloor \\frac{n}{2}\\rfloor$-dimensional invariant manifold through the substitution\n\\begin{equation}\n(x_1,\\ldots ,x_n)=\\begin{cases}\n(-y_m,\\ldots ,-y_1,y_1,\\ldots ,y_m)&\\text{if}\\ n=2m,\\\\\n(-y_m,\\ldots ,-y_1,0,y_1,\\ldots ,y_m)&\\text{if}\\ n=2m+1,\n\\end{cases}\n\\end{equation}\nthen we arrive at the differential equations\n\\begin{subequations}\n\\begin{align}\\label{gflow-symmetric-even}\n{ \\frac{\\text{d} y_j}{\\text{d} t}-\\pi ( j -{\\textstyle \\frac{1}{2}})} &+\n{ \\arctan\\left(\\frac{y_j}{a}\\right) +\\arctan\\left(\\frac{y_j}{b}\\right) +\\arctan\\left( 2y_j \\right) } \\\\\n&+\n \\sum_{\\substack{1\\leq k\\leq m \\\\ k \\neq j}} \\bigl( \\arctan (y_j+y_k)+\\arctan (y_j-y_k) \\bigr) =0 \n \\nonumber\n\\end{align}\n($j=1,\\ldots ,m$) when $n=2m$, and\n\\begin{align}\\label{gflow-symmetric-odd}\n{ \\frac{\\text{d} y_j}{\\text{d} t}-\\pi j } &+\n{ \\arctan\\left(\\frac{y_j}{a}\\right) +\\arctan\\left(\\frac{y_j}{b}\\right) +\\arctan\\left( 2y_j \\right)+\\arctan\\left( y_j\\right) } \\\\\n&+\n \\sum_{\\substack{1\\leq k\\leq m \\\\ k \\neq j}} \\bigl( \\arctan (y_j+y_k)+\\arctan (y_j-y_k) \\bigr) =0 \n \\nonumber\n\\end{align}\n\\end{subequations}\n($j=1,\\ldots ,m$) when $n=2m+1$, respectively. By comparing the latter differential equations with the gradient system for the Wilson polynomials in Eq. \\eqref{g-flow-wilson}, it is seen that\nfor $n=2m$ we recover the case $(a,b,c,d)\\to (a,b,\\frac{1}{2},0)$ and for $n=2m+1$ we recover the case $(a,b,c,d)\\to (a,b,\\frac{1}{2},1)$. Indeed, the symmetric continuous Hahn polynomials and the Wilson polynomials are known to be related by the following quadratic relations (cf. e.g. 
\\cite[Section 2.4]{koo:quadratic}):\n\\begin{equation}\n\\text{p}_{n}(x ;a,b)= \\begin{cases} \\text{p}_m(x^2 ;a,b,\\frac{1}{2},0)&\\text{if}\\ n=2m,\\\\\nx \\text{p}_m(x^2 ;a,b,\\frac{1}{2},1)&\\text{if}\\ n=2m+1.\n\\end{cases}\n\\end{equation}\n\nThe upshot is that for initial conditions respecting the parity invariance, the estimate for the rate of the exponential convergence in the second part of Theorem \\ref{cHzeros:thm} can be improved as follows.\n\n\n\\begin{theorem}\\label{cHzeros-symmetric:thm}\n\\begin{subequations}\nLet $\\text{Re}(a),\\text{Re}(b)>0$ with possible non-real parameters $a$ and $b$ arising as a complex conjugate pair, and\n let\n$\\bigl(x_1(t),\\ldots ,x_n(t)\\bigr)$, $t\\geq 0$ denote the unique solution of the gradient system \\eqref{g-flow}, \\eqref{g-flow-chahn} determined by a choice for\nthe initial condition $(x_1(0),\\ldots ,x_n(0))\\in\\mathbb{R}^n$ such that\n\\begin{equation*}\nx_j(0)+x_{n+1-j}(0)=0\\quad\\text{for}\\ j=1,\\ldots ,n.\n\\end{equation*}\nThen for any $\\kappa$ in the interval\n\\begin{align}\\label{cH-rate-symmetric}\n{\\textstyle 0< \\kappa < \\frac{2 \\lfloor \\frac{n}{2} \\rfloor }{1+4R_n^2}+ \\frac{\\text{Re}(a)}{\\text{Re}^2(a)+(R_n+|\\text{Im}(a)|)^2}+ \\frac{\\text{Re}(b)}{\\text{Re}^2(b)+(R_n+|\\text{Im}(b)|)^2} } \n{\\textstyle + \\frac{1-(-1)^{n}}{2+2R_n^2}} ,\n\\end{align}\nwhere $R_n:=x_n^{(n)}$, there exists a constant $K>0$ (depending on the parameter values, on the initial condition, and on $\\kappa$) such that \n\\begin{equation}\n|x_j(t)-x_j^{(n)}|\\leq K e^{-\\kappa t} \\qquad (j=1,\\ldots,n)\n\\end{equation}\nwhenever $t\\geq 0$.\n\\end{subequations}\n\\end{theorem}\n\nFor $n=2m+1$, Theorem \\ref{cHzeros-symmetric:thm} is immediate from the reduced differential equation \\eqref{gflow-symmetric-odd} and\n the second part of Theorem \\ref{Wzeros:thm}, upon performing the parameter specialization $(a,b,c,d)\\to (a,b,\\frac{1}{2},1)$. The case $n=2m$ would follow similarly via\n the reduced differential equation \\eqref{gflow-symmetric-even} \n upon performing the parameter specialization $(a,b,c,d)\\to (a,b,\\frac{1}{2},0)$ in Theorem \\ref{Wzeros:thm}. However, since the parameter specialization $d\\to 0$ takes us outside the Wilson parameter domain\n considered here, for the argument to stick rigorously one formally has to repeat the proof of Theorem \\ref{Wzeros:thm} for the case of the gradient flow in Eq. \\eqref{gflow-symmetric-even}. \n \nSince our numerical example in Section \\ref{cH-numerics} corresponded to an initial condition of the form $x_j(0)=0$ ($j=1,\\ldots ,n$), the improved exponential convergence of Theorem \\ref{cHzeros-symmetric:thm} actually applies in this situation. However, if we break the parity symmetry by moving the initial condition up to $x_j(0)=3$ ($j=1,\\ldots ,n$)---while maintaining the parameter values $a=10$ and $b=\\frac{3}{10}$\n(cf. Figure \\ref{cH-symmetric-fig1})---then we see from the slopes of the logarithmic error (cf. Figure \\ref{cH-symmetric-fig2}) that\nthe rate of the exponential convergence indeed slows down considerably. 
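\nThe reduced flow also provides a convenient entry point for reproducing the parity-symmetric numerics (initial condition $x_j(0)=0$) on a computer. The following minimal Python sketch is our illustration only: it assumes NumPy\/SciPy, and the integration horizon and tolerances are arbitrary choices. It integrates the reduced gradient system \\eqref{gflow-symmetric-even} for $n=2m=30$ with the parameter values $a=10$ and $b=\\frac{3}{10}$, and prints samples of the logarithmic error used in the figures.\n\\begin{verbatim}\n# Sketch: integrate the reduced gradient flow (even case n = 2m) and\n# monitor the logarithmic error log|y_j(t) - y_j(infinity)|.\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\na, b, m = 10.0, 0.3, 15          # a = 10, b = 3\/10, n = 2m = 30\n\ndef rhs(t, y):\n    # dy_j\/dt = pi*(j - 1\/2) - arctan(y_j\/a) - arctan(y_j\/b)\n    #           - arctan(2*y_j) - sum_{k != j} [ arctan(y_j + y_k)\n    #                                            + arctan(y_j - y_k) ]\n    j = np.arange(1, m + 1)\n    pair = np.arctan(np.add.outer(y, y)) + np.arctan(np.subtract.outer(y, y))\n    np.fill_diagonal(pair, 0.0)  # excludes the k = j term from the sum\n    return (np.pi * (j - 0.5) - np.arctan(y \/ a) - np.arctan(y \/ b)\n            - np.arctan(2.0 * y) - pair.sum(axis=1))\n\nsol = solve_ivp(rhs, (0.0, 60.0), np.zeros(m), dense_output=True,\n                rtol=1e-12, atol=1e-12)\ny_lim = sol.y[:, -1]             # numerical proxy for the positive roots\nfor t in (10.0, 20.0, 30.0):     # slope of these values ~ minus decay rate\n    print(t, np.log(np.abs(sol.sol(t) - y_lim)).max())\n\\end{verbatim}\nThe slopes extracted in this way can be compared directly with the decay rates guaranteed by Theorem \\ref{cHzeros-symmetric:thm}.\n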
Notice at this point that the downward peaks in the evolution of the logarithmic error reveal that the corresponding limiting values are no longer approached monotonically: a peak occurs whenever the trajectory of $x_j(t)$ overshoots its limiting value $x_j^{(n)}$.\n\n\n\\begin{figure}[h]\n \\centering\n \\caption{Continuous Hahn trajectories $x^{(30)}_1(t),\\ldots, x_{30}^{(30)}(t)$ corresponding to the initial condition $x_j^{(30)}(0)=3$ ($j=1,\\ldots ,30$) and the parameter values $a=10$ and $b=\\frac{3}{10}$.}\n\\includegraphics[scale=0.7]{cH-symmetric-fig1} \\label{cH-symmetric-fig1}\n\\end{figure}\n\n \\begin{figure}[h]\n \\centering\n \\caption{Evolution of the continuous Hahn logarithmic error $\\log\\bigl |x_j(t)-x_j^{(30)}\\bigr| $ for $j=1$ (top), $j=8$ (second from below), $j=23$ (bottom), and $j=30$ (second from above), where the ordering refers to that of the asymptotic tails, with the initial condition $x_j^{(30)}(0)=3$ ($j=1,\\ldots ,30$) and the parameter values $a=10$ and $b=\\frac{3}{10}$.}\n\\includegraphics[scale=0.7]{cH-symmetric-fig2} \\label{cH-symmetric-fig2}\n\\end{figure}\n\n\n\n\n\\section*{Acknowledgments}\nIt is a pleasure to thank Alexei Zhedanov for emphasizing that the Morse functions from \\cite{die-ems:solutions}, which minimize at the roots of the continuous Hahn, Wilson and Askey-Wilson polynomials, \nshould be viewed as natural analogs of Stieltjes' electrostatic potentials for the roots of the classical orthogonal polynomials. \nThanks are also due to an anonymous referee for suggesting some important improvements in the presentation.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nNanofabrication techniques developed over the last \ndecade~\\cite{Goldhaber,scienceartmol,blick2dot,ludwig3dot,vidan3dot,gaudreau3dot,rogge3dot,Fujisawananotube}, together with atomic-scale manipulation using scanning tunneling microscopy~\\cite{Crtrimercrommie,Cotrimeruchihashi}, have sparked intense interest in novel mesoscopic devices where strong electron correlation and many-body effects play a central role~\\cite{glazman}. The classic spin-$\\tfrac{1}{2}$ Kondo effect~\\cite{hewson} -- in which a single spin is screened by antiferromagnetic coupling to conduction electrons in an attached metallic lead -- has been observed in odd-electron semiconductor quantum dots, single molecule dots, and adatoms on metallic surfaces~\\cite{glazman}; although its ferromagnetic counterpart, where the spin remains asymptotically free, has yet to be reported experimentally.\nIn \\emph{coupled}\ndot devices -- the `molecular' analogue of single quantum dots viewed as artificial atoms~\\cite{scienceartmol} -- exquisite experimental control is now \navailable over geometry,\ncapacitance, and tunnel-couplings of the dots~\\cite{vidan3dot,gaudreau3dot}. Both spin and internal, orbital degrees of freedom -- and hence the interplay between the two -- are important in such systems.\nThis leads to greater diversity in potentially observable correlated electron behaviour on coupling to metallic leads, as evident from a wide range of theoretical studies of double \n(e.g. Refs.~\\onlinecite{kikoinDQD,vojta2spin,bordaDQD,galpinccdqd,AKMccDQD,andersgalpinnano,lopez,zarandDQD,ingersentDQD}) \nand triple \n(e.g.
Refs.~\\onlinecite{lazarovitsadatom,hewsonTQD,zitkoTQD,ferrodots,lobosTQD,wangtqd,nutshell,delgadoTQD}) \nquantum dot systems.\n\n\n\nMotivated in part by recent experiments involving triple dot devices~\\cite{ludwig3dot,vidan3dot,gaudreau3dot,rogge3dot},\nwe consider here a system of three, mutually tunnel-coupled single-level quantum dots, one of which is \nconnected to a metallic lead: a triple quantum dot (TQD) ring structure, the simplest to exhibit frustration.\nWe focus on the TQD in the 3-electron Coulomb blockade valley, and study its evolution as a function of the \ninterdot tunnel couplings, using both perturbative arguments and the full density matrix~\\cite{asbasis,fdmnrg} formulation of Wilson's numerical renormalization group (NRG) technique~\\cite{nrgreview,KWW} (for a recent review,\nsee Ref.~\\onlinecite{nrgrev}). A rich range of behaviour is found to occur. Both antiferromagnetic and ferromagnetic \nKondo physics are shown to arise in the system -- with the two distinct ground states separated by a quantum phase transition -- and should be experimentally accessible via side-gate control of the tunnel couplings.\nThe zero-bias differential conductance ($G$) through the dots is shown to drop discontinuously across the transition at zero temperature, from the unitarity limit of $G\/G_0=1$ in the strong coupling \nantiferromagnetic phase (with $G_0=2e^2\/h$ the conductance quantum) to $G\/G_0\\simeq 0$ in the weak coupling, local \nmoment phase. However in a certain temperature window in the vicinity of the transition, the conductance is found to be controlled by the transition fixed point separating the two ground state phases, comprising both a Kondo singlet state and a residual local moment; in particular such that \\emph{increasing} temperature in the local moment\nphase actually revives the antiferromagnetic Kondo effect.\n\n\n\\begin{figure}[b]\n\\includegraphics[height=2cm]{fig1.eps}\n\\caption{\\label{dots}Schematic illustration of the quantum dot trimer.}\n\\end{figure}\n\n\\section{Models, bare and effective}\nWe consider three semiconducting (single-level) quantum dots, arranged in a triangular geometry as illustrated in Fig~\\ref{dots}. Each dot is tunnel-coupled to the others, and one of them (dot `2') is also coupled to a metallic lead. We focus explicitly on a system tuned to mirror symmetry (see Fig.~\\ref{dots}), and study the Anderson-type model $H=H_0+H_{tri}+H_{hyb}$. Here\n$H_0=\\sum_{\\text{k},\\sigma}\\epsilon^{\\phantom{\\dagger}}_{\\text{k}} a^{\\dagger}_{\\text{k} \\sigma}a^{\\phantom{\\dagger}}_{\\text{k} \\sigma}$ refers to the non-interacting lead, \nwhich is coupled to dot `2' via\n$H_{hyb}=\\sum_{\\text{k},\\sigma} V(a^{\\dagger}_{\\text{k} \\sigma}c^{\\phantom{\\dagger}}_{2 \\sigma}+\\text{H.c.})$,\nwhile $H_{tri}$ describes the isolated TQD with tunnel couplings $t,t'$,\n\\begin{equation}\n\\label{Htun}\n\\begin{split}\nH_{tri}&=\\sum_i(\\epsilon \\hat{n}_i+U\\hat{n}_{i\\uparrow}\\hat{n}_{i\\downarrow}) \\\\\n&~~+\\sum_{\\sigma} \\left[ t~c_{2\\sigma}^{\\dagger}(c^{\\phantom\\dagger}_{1\\sigma}+\nc^{\\phantom\\dagger}_{3\\sigma})\n+t'~c_{1\\sigma}^{\\dagger}c^{\\phantom\\dagger}_{3\\sigma} +\\text{H.c.}\\right] \n\\end{split}\n\\end{equation}\nwhere $\\hat{n}_i=\\sum_{\\sigma}\\hat{n}^{\\phantom{\\dagger}}_{i\\sigma}=\\sum_{\\sigma}c^{\\dagger}_{i \\sigma}c^{\\phantom{\\dagger}}_{i \\sigma}$ is the number operator for dot $i$. 
\n$U$ is the intradot Coulomb repulsion and $\\epsilon_i$ the level energy of\ndot $i$, such that $\\epsilon_{1}=\\epsilon_{3} \\equiv \\epsilon$ for a mirror symmetric system \n(in which $H$ is invariant under a $1\\leftrightarrow 3$ permutation). For convenience we take $\\epsilon_{i}=\\epsilon$ for all dots (although this is not required, as mentioned further below).\n\n\nWe are interested in the TQD deep in the ${\\cal{N}}=3$ electron Coulomb blockade valley. To this end, noting that coupled quantum dot experiments typically correspond to $t\/U\\sim 10^{-2}$~~\\cite{gaudreau3dot}, we consider the representative case $\\epsilon = -U\/2$ with $U\\gg (t,t')$. Each dot is then in essence singly occupied, with ${\\cal{N}}=2$ or $4$ states much higher ($\\sim$$\\tfrac{1}{2}U$) in energy. The ${\\cal{N}}=3$ \nstates of the isolated TQD comprise two lowest doublets, and a spin quartet (which always lies highest in energy).\nAs the tunnel coupling $t'$ is increased there is a level crossing of the doublets, which are \ndegenerate by symmetry at the point $t'=t$. Projected into the singly-occupied manifold the doublet states are\n\\begin{equation}\n\\label{eq:states}\n\\begin{split}\n|+; S^{z}\\rangle&=c_{2\\sigma}^{\\dagger}\\tfrac{1}{\\sqrt{2}}\\left(c_{1\\uparrow}^{\\dagger}\nc_{3\\downarrow}^{\\dagger}+c_{3\\uparrow}^{\\dagger}c_{1\\downarrow}^{\\dagger}\\right)|\\mathrm{vac}\\rangle \\\\\n|-;S^{z}\\rangle &= \\tfrac{\\sigma}{\\sqrt{6}}\\left[ c_{2\\sigma}^{\\dagger}(c_{1\\uparrow}^{\\dagger}c_{3\\downarrow}^{\\dagger}-c_{3\\uparrow}^{\\dagger}c_{1\\downarrow}^{\\dagger})-2c_{2-\\sigma}^{\\dagger}c_{1\\sigma}^{\\dagger}c_{3\\sigma}^{\\dagger}\\right]\n|\\mathrm{\\mathrm{vac}}\\rangle\n\\end{split}\n\\end{equation}\nwith $S^{z}=\\tfrac{\\sigma}{2}$ and $\\sigma =\\pm$ for spins $\\uparrow\/\\downarrow$.\nTheir energy separation is $E_{\\Delta}=E_{+}-E_{-}=J-J'$ with antiferromagnetic exchange couplings\n$J=4t^2\/U$ and $J'=4t'^2\/U$, such that the levels cross at $t'=t$ (reflecting the magnetic frustration\ninherent at this point). The $|-;S^{z}\\rangle$ doublet, containing triplet configurations of spins `1' and `3',\nhas odd parity ($-$) under a $1\\leftrightarrow 3$ interchange; while $|+;S^{z}\\rangle$, which has singlet-locked spins `1' and `3' and behaves in~effect as a spin-$\\tfrac{1}{2}$ carried by dot `2' alone, has even parity ($+$).\n\n\nOn coupling to the lead the effective model describing the system on low energy\/temperature scales \nis obtained by a standard Schrieffer-Wolff transformation~\\cite{SW,hewson}. Provided \nthe doublets are not close to degeneracy, only the lower such state need be retained in the ground state manifold:\n$|- ;S^{z}\\rangle$ for $J'\\ll J$ and $|+ ;S^{z}\\rangle$ for $J'\\gg J$. In either case a low-energy model\nof Kondo form arises\n\\begin{equation}\n\\label{eq:Jeff}\nH_{\\text{\\emph{eff}}}=J_{K\\gamma}~\\hat{\\textbf{S}}\\cdot \\hat{\\textbf{S}}(0),\n\\end{equation}\n(potential scattering is omitted for clarity),\nwith $\\hat{\\textbf{S}}(0)$ the conduction band spin density at dot `2', and $\\hat{\\textbf{S}}$ a spin-$\\tfrac{1}{2}$ operator \nrepresenting the appropriate doublet ($\\gamma = +$ or $-$). The effective Kondo\ncoupling is $J_{K\\gamma}=2\\langle\\gamma;+\\tfrac{1}{2}|\\hat{s}^{z}_{2}|\\gamma;+\\tfrac{1}{2}\\rangle J_{K}$ with \n$\\hat{\\textbf{s}}_{2}$ the spin of dot `2'; where $J_K= 8\\Gamma\/(\\pi\\rho U)$ with hybridization\n$\\Gamma =\\pi V^{2}\\rho$ and $\\rho$ the lead density of states. 
\n\\Eref{eq:states} thus gives $J_{K-}=-\\tfrac{1}{3}J_K$, and $J_{K+}=+J_K$.\nHence, for tunnel coupling $t'\\ll t$ ($J'\\ll J$), a \\emph{ferro}magnetic spin-$\\tfrac{1}{2}$ Kondo\neffect arises ($J_{K-}<0$)~\\cite{ferrodots}. Kondo quenching of the lowest doublet is in consequence\nineffective, and as temperature $T\\rightarrow 0$ the spin becomes asymptotically free -- the stable fixed point (FP) is the local moment (LM) FP with a residual entropy $S_{\\mathrm{imp}}(T=0) =\\ln 2$ ($k_{B} =1$)~\\cite{KWW}. The system here is the simplest example of a `singular Fermi liquid'~\\cite{mehta}, reflected in the non-analyticity of leading irrelevant corrections to the fixed point~\\cite{mehta,singularfm}. For $t'\\gg t$ by contrast, the Kondo coupling is antiferromagnetic ($J_{K+}=+J_K>0$), destabilizing the LM fixed point. The strong coupling (SC) FP then controls the $T\\rightarrow 0$ behaviour, describing the familiar Fermi liquid Kondo singlet ground state in which the spin is screened by the lead\/conduction electrons below the characteristic Kondo scale $T_{K}$, with $T_{K}\/\\sqrt{U\\Gamma} \\sim \\exp(-1\/\\rho J_{K+})=\\exp(-\\pi U\/8\\Gamma)$~\\cite{note2}.\n\n\n Since the fixed points for the two stable phases are distinct, a quantum phase transition must thus occur \non tuning the tunnel coupling $t'$ through a critical value $t'_{c}\\simeq t$. We study it below, but first \noutline the effective low-energy model in the vicinity of the transition. Here, as the $|\\pm;S^{z}\\rangle$ states\nare of course near degenerate, both doublets must thus be retained in \nthe low-energy trimer manifold, and the unity operator for the local (dot) Hilbert space is hence:\n\\begin{equation}\n\\label{eq:unity}\n\\hat{1} = \\sum_{S^{z}} (|+;S^{z}\\rangle\\langle +;S^{z}| +|-;S^{z}\\rangle\\langle -;S^{z}|) \n~\\equiv ~ \\hat{1}_{+} +\\hat{1}_{-} \n\\end{equation}\nThe effective low-energy model then obtained by Schrieffer-Wolff is readily shown to be\n\\begin{equation}\n\\label{eq:fpa}\nH_{\\text{\\emph{eff}}}^{\\text{trans}}=J_K ~\\hat{1} \\hat{\\textbf{s}}_{2}^{\\phantom\\dagger}\\hat{1} \\cdot \\hat{\\textbf{S}}(0)+\\tfrac{1}{2}E_{\\Delta}(\\hat{1}_{+} - \\hat{1}_{-} )\n\\end{equation}\nwith $J_{K}$ as above. The final term here refers simply to the energy difference between the two doublets.\nIt may be written equivalently as $E_{\\Delta}\\hat{\\mathcal{T}}_z$ with a pseudospin operator \n\\begin{equation}\n\\label{eq:pseud}\n\\hat{\\mathcal{T}}_z =\\tfrac{1}{2}(\\hat{1}_{+} - \\hat{1}_{-} ) \n\\end{equation}\nthus defined, such that the doublets are each\neigenstates of it, $\\hat{\\mathcal{T}}_z |\\pm\\ ;S^{z} \\rangle=\\pm\\tfrac{1}{2} |\\pm ;S^z\\rangle$.\nConsidering now the first term in \\eref{eq:fpa},\n$\\hat{1} \\hat{\\textbf{s}}_{2}^{\\phantom\\dagger}\\hat{1} \\equiv \\hat{1}_{+} \\hat{\\textbf{s}}_{2}^{\\phantom\\dagger}\\hat{1}_{+}\n+\\hat{1}_{-} \\hat{\\textbf{s}}_{2}^{\\phantom\\dagger}\\hat{1}_{-}$ for the mirror symmetric case considered (cross terms vanish by symmetry). 
Direct evaluation of $\\hat{1}_{\\pm} \\hat{\\textbf{s}}_{2}^{\\phantom\\dagger}\\hat{1}_{\\pm}$ gives\n$\\hat{1}_{+} \\hat{\\textbf{s}}_{2}^{\\phantom\\dagger}\\hat{1}_{+} = \\hat{\\textbf{S}}\\hat{1}_{+}$ ($=\\hat{1}_{+}\\hat{\\textbf{S}}$)\nand $\\hat{1}_{-} \\hat{\\textbf{s}}_{2}^{\\phantom\\dagger}\\hat{1}_{-} = -\\tfrac{1}{3}\\hat{1}_{-}\\hat{\\textbf{S}}$,\nwhere $\\hat{\\textbf{S}}$ is a spin-$\\tfrac{1}{2}$ operator for the dot Hilbert space (specifically\n$\\hat{S}^{z}= \\sum_{\\gamma =\\pm,~S^{z}}|\\gamma;S^{z}\\rangle S^{z}\\langle\\gamma;S^{z}|$ and\n$\\hat{S}^{\\pm}= \\sum_{\\gamma}|\\gamma;\\pm\\tfrac{1}{2}\\rangle\\langle\\gamma;\\mp\\tfrac{1}{2}|$).\n\nHence, using \\erefs{eq:unity}{eq:pseud} to express \n$\\hat{1}_{\\pm}~=~\\tfrac{1}{2}(\\hat{1} \\pm 2 \\hat{\\mathcal{T}}_z)$ in terms of the pseudospin,\nthe effective low-energy model is given from \\eref{eq:fpa} by\n\\begin{equation}\n\\label{eq:fp}\nH_{\\text{\\emph{eff}}}^{\\text{trans}}=\\tfrac{1}{3} J_K(1+4\\hat{\\mathcal{T}}_z)\\hat{\\textbf{S}}\\cdot \\hat{\\textbf{S}}(0)+E_{\\Delta}\\hat{\\mathcal{T}}_z,\n\\end{equation}\nexpressed as desired in terms of the spin $\\hat{\\textbf{S}}$ and pseudospin $\\hat{\\mathcal{T}}_z$.\nThe term $E_{\\Delta}\\hat{\\mathcal{T}}_z$ is equivalent to a magnetic field acting on the pseudospin, favoring the $|-;S^{z}\\rangle$ doublet for $E_{\\Delta}>0$ and $|+;S^{z}\\rangle$ for $E_{\\Delta}<0$; such that\n\\eref{eq:fp} reduces, as it should, to one or other of \\eref{eq:Jeff} in the limit where the separation \n$|E_{\\Delta}|$ is sufficiently large that only one of the doublets need be retained in the low-energy\nTQD manifold. Finally, note that the absence of pseudospin raising\/lowering terms $\\hat{\\mathcal{T}}^{\\pm}$ in\n$H_{\\text{\\emph{eff}}}^{\\text{trans}}$ reflects the strict $1\\leftrightarrow 3$ parity in the mirror symmetric setup (which cannot be broken by virtual hopping processes between dot `2' and the lead); and means that the Hilbert space of \\eref{eq:fp} separates exactly into spin and pseudospin sectors, such that only the \\emph{sign} of the effective Kondo coupling is correlated to the pseudospin.\n\n\n\n\\section{Results}\nThe physical picture is thus clear, and indicates the presence of a quantum phase transition as a \nfunction of $t'$. We now present NRG results for the TQD Anderson model, using a symmetric, constant \nlead density of states $\\rho = 1\/(2D)$. The full density matrix extension~\\cite{asbasis,fdmnrg} of the NRG is employed~\\cite{nrgrev}, together with direct calculation of the electron self-energy~\\cite{UFG}. Calculations are typically performed for an NRG discretization parameter $\\Lambda =3$, retaining the lowest $N_{s}= 3000$ states per iteration.\n\nAs above we choose $\\epsilon=-\\tfrac{1}{2}U$ and $t\/U=10^{-2}$, a realistic choice~\\cite{gaudreau3dot} which corresponds to single occupancy of the dots (all calculations give $\\langle \\hat{n}_i \\rangle=1$ for each dot).
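\nThis picture can be corroborated directly for the isolated trimer. The following minimal Python sketch is an illustrative cross-check only, independent of the NRG calculations; the Jordan--Wigner encoding and the mode ordering are our own arbitrary choices. It builds $H_{tri}$ of \\eref{Htun}, diagonalizes it in the ${\\cal{N}}=3$ sector, and verifies the doublet splitting $E_{\\Delta}=J-J'$ together with the level crossing at $t'=t$.\n\\begin{verbatim}\n# Sketch: exact diagonalization of the isolated trimer H_tri.\n# Modes ordered (1up,1dn,2up,2dn,3up,3dn); Jordan-Wigner fermions.\nimport numpy as np\n\nI, Z = np.eye(2), np.diag([1.0, -1.0])\nA = np.array([[0.0, 1.0], [0.0, 0.0]])    # single-mode annihilator\n\ndef c_op(p, n=6):                          # annihilator for mode p\n    out = np.array([[1.0]])\n    for q in range(n):\n        out = np.kron(out, A if q == p else (Z if q < p else I))\n    return out\n\nU, t, eps = 1.0, 0.01, -0.5                # eps = -U\/2, t\/U = 10^-2\nc = [c_op(p) for p in range(6)]\nnum = [ci.T @ ci for ci in c]              # number operators (diagonal)\n\ndef H_tri(tp):\n    H = eps * sum(num) + U * sum(num[2*d] @ num[2*d+1] for d in range(3))\n    for s in (0, 1):                       # bonds 2-1, 2-3 (t) and 1-3 (t')\n        for i, j, amp in ((2, 0, t), (2, 4, t), (0, 4, tp)):\n            hop = amp * c[i+s].T @ c[j+s]\n            H += hop + hop.T\n    return H\n\nsel = np.isclose(np.diag(sum(num)), 3.0)   # N = 3 sector\nfor tp in (0.5*t, t, 1.5*t):\n    E = np.linalg.eigvalsh(H_tri(tp)[np.ix_(sel, sel)])\n    print(tp\/t, E[2] - E[0], abs(4*(t**2 - tp**2)\/U))\n\\end{verbatim}\nThe printed splitting of the two lowest doublets indeed follows $|J-J'|=4|t^2-t'^2|\/U$ up to small corrections, and vanishes at the frustration point $t'=t$.\n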
The low temperature behaviour is determined by three fixed points: those for the two stable phases at $T=0$ (SC or LM), and a `transition fixed point' precisely at the transition, which at \\emph{finite}-$T$ strongly affects the behaviour of the system close to\nthe transition.\n\n\\begin{figure}[t]\n \\includegraphics[height=5.4cm]{fig2.eps}\n \\caption{\\label{entgraph}\nEntropy $S_{\\mathrm{imp}}(T)$ \\emph{vs} $T\/\\Gamma$ for fixed $\\tilde{U}=10$\nand $\\tilde{t}=0.1$, as a function of $\\tilde{t}'$ as the transition ($\\tilde{t}'_{c} \\simeq 0.097$)\nis approached from either side: for $t'=t'_{c}\\pm \\lambda T_{K}$ with\n$\\lambda = 10^4,10,1,10^{-1},10^{-4}$ and $\\simeq 0$ (curves (a)-(f) respectively). \nSolid lines: antiferromagnetically coupled SC phase ($\\lambda >0$); dashed lines: \nferromagnetically coupled LM phase ($\\lambda <0$).\nInset: close to the transition, the scale $T_{\\Delta}$ vanishes linearly in $(\\tilde{t'}^2-\\tilde{t'_c}^2)$.\n}\n \\end{figure}\n\n\nThe $T$-dependence of the entropy $S_{\\mathrm{imp}}(T)$~\\cite{KWW} provides a clear picture\nof the relevant fixed points. We show it in Fig.~\\ref{entgraph}, for $\\tilde{U}=U\/\\pi\\Gamma=10$\nand $\\tilde{t}=t\/\\pi\\Gamma=0.1$ (with $\\Gamma\/D = 10^{-2}$), for variable\n$\\tilde{t'}=t'\/\\pi\\Gamma$ approaching the transition from either side:\n$t'=t'_{c}\\pm \\lambda T_{K}$, varying $\\lambda$. Here $\\tilde{t}_{c}'=0.09715\\ldots$ ($\\simeq \\tilde{t}$ as expected), and the antiferromagnetic Kondo scale $T_K\/\\Gamma\\simeq 7 \\times 10^{-6}$~\\cite{note2}.\nSolid lines refer to systems in the SC phase ($t'>t'_{c}$), dashed lines to the LM phase ($t'<t'_{c}$). For $t'>t'_c$ the antiferromagnetic\nKondo effect drives the system to the SC fixed point below $T\\sim T_K$.\nLines (b)-(e) in Fig.~\\ref{entgraph} are for systems progressively approaching the transition.\nHere, when $T$ exceeds the energy gap between the doublets (denoted $|\\tilde{E}_{\\Delta}|$ and naturally renormalized slightly from the isolated TQD limit of $|E_{\\Delta}|$), the pair of doublets are effectively degenerate~\\cite{note3} and an $S_{\\text{imp}} = \\ln (4)$ plateau is thus reached. \nThe fixed point Hamiltonian here is then simply a free conduction band, plus two free spins (\\eref{eq:fp} with $J_K$ and $E_{\\Delta}$ set to zero).\n\n\n\\begin{figure}[t]\n \\includegraphics[height=5.4cm]{fig3.eps}\n \\caption{\\label{chigraph} Spin susceptibility $T\\chi_{\\mathrm{imp}}(T)$\n\\emph{vs} $T\/\\Gamma$ for the same parameters as Fig.~\\ref{entgraph}.\nSolid lines: antiferromagnetically coupled SC phase; dashed lines: \nferromagnetically coupled LM phase. Close to the transition, for $|\\tilde{E}_{\\Delta}| \\lesssim T$, the behaviour is controlled by the transition fixed point for both $t'<t'_{c}$ and $t'>t'_{c}$.}\n \\end{figure}\n\nClose to the transition however, for $|\\tilde{E}_{\\Delta}| \\lesssim T$ both doublets remain in the low-energy manifold, and the physics is again controlled by the transition fixed point, as is also evident in the spin susceptibility $T\\chi_{\\mathrm{imp}}(T)$ shown in Fig.~\\ref{chigraph}; the associated crossover scale $T_{\\Delta}\\propto|\\tilde{E}_{\\Delta}|$, below which the doublet splitting is resolved, vanishes as the transition is approached (inset to Fig.~\\ref{entgraph}). The experimental signature of the transition is the zero-bias conductance $G(T)$, obtained from the dot `2' spectrum $D_{2}(\\omega;T)$ via Eq.~\\ref{zbc}; it is shown in Fig.~\\ref{tempdep}, with solid lines for the SC phase ($t'>t'_c$), and dashed lines for\nthe LM phase ($t'<t'_{c}$). For $t'>t'_{c}$, $D_{2}(\\omega;0) \\equiv D_{2}^{SC}(\\omega)$ since now only excitations from the Kondo singlet arise.
At finite-$T$ however, with $T_{\\Delta}\\ll T \\ll T_K$ such that the lowest manifold of states comprises both the LM and the Kondo singlet, it is easy to show from the Lehmann representation of the spectrum that $D_{2}(\\omega;T) = \\tfrac{1}{3}[D_{2}^{SC}(\\omega)+2D_{2}^{LM}(\\omega)]$ (such that $\\pi\\Gamma D_{2}(\\omega;T) = 1\/3$ for $|\\omega|\\lesssim T_{K}$); and hence from Eq.~\\ref{zbc} that \n$G(T)\/G_{0} = 1\/3$ -- which persists down to $T=0$ precisely at the transition, where $T_{\\Delta}=0$ (Fig.~\\ref{tempdep}, case (f)).\n\n\n\n\\section{Conclusion}\nThe TQD ring system in the 3-electron Coulomb blockade valley exhibits a rich range of physical behaviour. Both antiferromagnetic and ferromagnetic spin-$\\tfrac{1}{2}$ Kondo physics is accessible\non tuning the tunnel coupling $t'$; the two phases being separated by a level crossing quantum phase transition, reflected in a transition fixed point which controls in particular the conductance in the vicinity of the transition. We also add that while explicit results have been given for a mirror symmetric TQD with $\\epsilon_{i}=\\epsilon$ ($=-U\/2$) for all dots, our conclusions are robust provided the dots remain in essence singly occupied. Varying $\\epsilon$ (or $U$) on dot `2' for example, making it inequivalent to dots `1' and `3', does not break mirror symmetry and leaves unaltered the behaviour uncovered above. Indeed even breaking mirror symmetry, via e.g.\\ distinct tunnel couplings between all dots, still results in both ferromagnetically-coupled and antiferromagnetically-coupled (Kondo quenched) ground states separated by a quantum phase transition~\\cite{note4}. The robustness of the essential physics suggests that both phases should be experimentally accessible in a TQD device; as too should the transition between them, provided the tunnel couplings can be sufficiently finely\ntuned.\n\n\n\\acknowledgements\nWe are grateful to M. Galpin, C. Wright and T. Kuzmenko for stimulating discussions. This research was supported in part by EPSRC Grant EP\/D050952\/1.\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAccording to Hawking and Ellis \\cite[Prop. 9.3.6]{HE}, under\nappropriate conditions, which include analyticity of all the objects\nunder consideration, the event horizon of a stationary, say\nelectro--vacuum, black hole space--time $(M,g)$ is necessarily a\nKilling horizon. More precisely, the isometry group of $(M,g)$ should\ncontain an ${\\Bbb R}$ subgroup, the orbits of which are tangent to the black\nhole horizon. In order to substantiate their claim the authors of\n\\cite{HE} first argue that for each $t$ the map defined as the\ntranslation by $t$ along the appropriately parameterized generators of\nthe event horizon extends to an isometry $ \\phi_t$ in a neighborhood\nof the event horizon. Next they assert that for all $t$ one can\nanalytically continue $ \\phi_t$ to the whole space--time, to obtain a\nglobally defined one parameter group of isometries. This last\nclaim is wrong, which\\footnote{The construction that follows is a\n straightforward adaptation to the problem at hand of a\n construction in \\cite[Section 5]{ChRendall}.} can be seen as\nfollows: Let $( M, g_{ab})$ be the extension of the exterior region of\nthe Kerr space--time consisting of ``two type I regions and two type\nII regions'', as described in Section 5.6 of \\cite{HE} (thus $( M,\ng_{ab})$ consists of the four uppermost blocks of Figure 28, p. 165 in\n\\cite{HE}).
Let $\\phi_t$ denote those isometries of $( M, g_{ab})$\nwhich are time--translations in an asymptotic region $M_{\\mathrm ext}$, and let\n$\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ denote the domain of outer communication of $( M, g_{ab})$ as\ndetermined by $M_{\\mathrm ext}$ ({\\em cf.\\\/} eq. \\eq{mext} below; $M_{ext}$\ncorresponds to one of the blocks ``I'' of Figure 28 of \\cite{HE}).\nLet $\\Si$ be any asymptotically flat Cauchy surface of $( M, g_{ab})$\n(thus $\\Si$ has two asymptotic regions), and let ${\\cal E}$ be any\nembedded two-sided three-dimensional sub-manifold of\n$\\mbox{int}\\cD^+(\\Si;M)\\setminus\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$, invariant under $\\phi_t$. We\nshall moreover suppose that $M\\setminus\\overline{{\\cal E}}$ is\nconnected, and that $\\cal E$ is {\\em not} invariant under the $U(1)$\nfactor of the isometry group of $( M, g_{ab})$. Let $(M_a,g_a)$,\n$a=1,2$, be two copies of $M\\setminus\\overline{ {\\cal E}}$ with the\nmetric induced from $g$. As ${\\cal E}$ is two-sided, there exists an\nopen neighborhood ${\\cal O}$ of ${\\cal E}$ such that ${\\cal E}$\nseparates ${\\cal O}$ into two disjoint open sets ${\\cal O}_a$,\n$a=1,2$, with $\\overline{{\\cal O}_1}\\cap \\overline{{\\cal\n O}_2}=\\overline{ {\\cal E}}$, ${\\cal O}_1\\cap {\\cal\n O}_2=\\emptyset$. Let $\\psi _a$ denote the natural embedding of\n${\\cal O}_a$ into $M_a$. Let $M_3$ be the disjoint union of $M_1,M_2$\nand ${\\cal O}$, with the following identifications: a point $p\\in{\\cal\n O}_a\\subset{\\cal O}$ is identified with $\\psi _a(p)\\in M_a$. It is\neasily seen that $M_3$ so defined is a Hausdorff topological space.\n \nWe can equip $M_3$ with the obvious real analytic manifold structure\nand an obvious metric $g_3$ coming from $(M_1,g_1)$, $(M_2,g_2)$ and\n$({\\cal O},g|_{\\cal O})$. Note that \n$g_3$ is real analytic with respect to this structure. Let finally\n$(M_4,g_4)$ be any \nmaximal\\,\\footnote{\\label{fnmaximal} {\\em cf., e.g.,}\\ \n \\cite[Appendix C]{SCC} for a proof of existence of space--times\n maximal with respect to some property. It should be pointed out\nthat there is an error in that proof, as the relation $\\prec$ defined\nthere is not a partial order. This is however easily corrected by\nadding the requirement that the isometry $\\Phi$ considered there\nrestricted to some fixed three--dimensional hypersurface be the\nidentity.} vacuum real analytic extension of $(M_3,g_3)$. \nThen $(M_4,g_4)$ is a maximal vacuum real analytic extension of \n$\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ which clearly is not isometric to $(M,g)$. \n\nThe space--time $(M_4,g_4)$ satisfies all the hypotheses\nof \\cite{HE}. The connected component of the identity of the group of\nisometries drops down from ${{\\Bbb R}}\\times U(1)$ (in the case of $( M,\ng_{ab})$) to ${\\Bbb R}$ (in the case of $(M_4,g_4)$), as\nall the orbits of the rotation group acting on $( M, g_{ab})$ meeting\n${\\cal E}$ are incomplete in $(M_4,g_4)$. \n\nTopological games put aside, the method of proof suggested in\n\\cite{HE} of analytically extending $ \\phi_t$ faces the problem that $\n\\phi_t$ might potentially be analytically extendible to a proper\nsubset\\footnote{ In the physics literature there seem to be\n misconceptions about existence and uniqueness of analytic extensions\n of various objects. 
As a useful example the reader might wish to\n consider the (both real and complex) analytic function $f$ from,\n say, the open disc $D(1,1\/2)$ of radius $1\/2$ centered at $1$ into\n ${\\Bbb C}$, defined as the restriction of the principal branch of $\\log\n z$. Then: 1) There exists no analytic extension of $f$ from\n $D(1,1\/2)$ to ${\\Bbb C}$. 2) There exists no unique maximal subset of\n ${\\Bbb C}$ on which an analytic extension of $f$ is defined.} of the\nspace--time only. One can nevertheless hope that the analyticity of\nthe domain of outer communication and some further conditions, as {\\em\n e.g.\\\/} global hyperbolicity thereof, allow one to extend the\nlocally defined isometries at least to the whole domain of outer\ncommunications. The aim of this paper is to show that this is indeed\nthe case. More precisely, we wish to show the following:\n\n\\begin{Theorem}\\label{T1}\nConsider an analytic \nspace--time $(M,g_{ab})$ with a Killing vector field\n$X$ with complete orbits. Suppose that $M$ contains an asymptotically \nflat three--end $\\Sigma_{\\mathrm ext}$ with time-like ADM \nfour--momentum, and with \n $X(p)$ --- time-like\nfor $p\\in\\Sigma_{\\mathrm ext}$. (Here asymptotic flatness is defined\nin the sense of eq.\\ \\eq{K.99} with $\\alpha > 1\/2$ and\n$k\\ge 3$.)\n Let $\\langle\\langle\nM_{ext}\\rangle\\rangle$ denote \nthe domain of outer communications associated with \n$\\Sigma_{\\mathrm ext}$ as defined below, assume that $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ is globally hyperbolic and\nsimply connected. If there exists a Killing vector field $Y$, which is not\na constant multiple of $X$, defined\non an open subset $\\cal O$ of $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$, then the isometry group\nof $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ (with the metric obtained from $(M,g_{ab})$ by restriction)\ncontains ${\\Bbb R} \\times U(1)$.\n\\end{Theorem}\n\n\n{\\bf Remarks} \n\\begin{enumerate}\n\n\\item It should be noted that no field equations or energy\n inequalities are assumed.\n \n\\item Simple connectedness of the domain of outer communications\n necessarily holds when a positivity condition is imposed on the\n Einstein tensor of $g_{ab}$ \\cite{galloway-topology}.\\footnote{{\\em\n cf.\\\/} also \\cite{ChWald,Jacobson:venkatarami,HE} for similar\n but weaker results. Note that in the stationary black hole\n context, under suitable hypotheses one can use Theorem~\\ref{T2}\n below to obtain completeness of orbits of $X$ in $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$, and then\n use \\cite{ChWald} to obtain simple--connectedness of $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$.}\n \n \n\\item When a positivity condition is imposed on the Einstein tensor of\n $g_{ab}$, the hypothesis of time-likeness of the ADM momentum can be\n replaced by that of existence of an appropriately regular Cauchy\n surface in $(M,g_{ab})$. See, {\\em e.g.}, \\cite{Horowitz} and\n references therein; {\\em cf.\\\/} also \\cite{ChBeig1} for a recent\n discussion.\n \n\\item It should be emphasized that no claims about isometries of\n $M\\setminus \\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ (with the obvious metric) are made.\n\\end{enumerate}\n\nTheorem \\ref{T1} allows one to give a corrected version of the rigidity\ntheorem, the reader is referred to \\cite{ChAscona} for a precise\nstatement together with a proof.\n\nIt seems of interest to remove the condition of completeness of the\nKilling orbits of $X$ above. 
Recall that completeness of those\nnecessarily holds \\cite{Chorbits} in maximal globally hyperbolic, say\nvacuum, space--times under various conditions on the Cauchy data. (It\nwas mentioned in \\cite{Chnohair} that the results of \\cite{Chorbits}\ngeneralize to the electro--vacuum case.) Those conditions are,\nhowever, somewhat unsatisfactory in the black hole context for the\nfollowing reasons: recall that the existing theory of uniqueness of\nblack holes gives only a classification of domains of outer\ncommunication $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$. Thus in this context one would like to have\nresults which do not make any hypotheses about the global properties\nof the complement of $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ in $M$.\n Moreover the hypotheses of those results of \\cite{Chorbits} which\n apply when degenerate Killing horizons are present require further\n justification. Here we wish to raise the question, whether or not it\n makes sense to talk about a stationary black hole space--time for\n space--times for which the Killing orbits are not complete in the\n asymptotic region. We do not know an answer to that question. It\n is nevertheless tempting to decree that in ``physically reasonable\"\n stationary black hole space--times the orbits of the Killing vector\n field $X$ which is time-like in the asymptotically flat three--end\n $\\Sigma_{\\mathrm ext}$ are complete through points in the asymptotic region\n $\\Sigma_{\\mathrm ext}$. One would then like to be able to derive various desirable\n global properties of $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ using this assumption. Our second\n result in this paper is the proof that in globally hyperbolic\n domains of outer communication the orbits of those Killing vector\n fields which are time-like in $\\Sigma_{\\mathrm ext}$ are complete ``if and only if\"\n they are so\\footnote{The quotation marks here are due to the fact\n that in our approach the asymptotic four--end $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ is not even\n defined when the orbits of $X$ through $\\Sigma_{\\mathrm ext}$ are not complete.\n In that last case one could make sense of this sentence using\n Carter's definition of the domain of outer communication\n \\cite{CarterlesHouches}, \n involving Scri.} for points $p\\in\\Sigma_{\\mathrm ext}$ (it should be emphasized that,\n in contradistinction to \\cite{Chorbits}, no maximality hypotheses\n are made and no field equations are assumed below; similarly no\n analyticity or simple connectedness conditions are made here):\n \n\\begin{Theorem}\\label{T2}\n Consider a space--time $(M,g_{ab})$ with a Killing vector field $X$\n and suppose that $M$ contains an asymptotically flat three--end\n $\\Sigma_{\\mathrm ext}$, with $X$ time-like in $\\Sigma_{\\mathrm\n ext}$. (Here the metric is assumed to be twice differentiable,\n while asymptotic flatness is defined in the sense of eq.\\ \\eq{K.99}\n with $\\alpha > 0$ and $k\\ge 0$.) Suppose that the orbits of $X$\n are complete through all points $p\\in \\Sigma_{\\mathrm ext}$. Let $\\langle\\langle\n M_{\\mathrm ext}\\rangle\\rangle$ denote the domain of outer communications\n associated with $\\Sigma_{\\mathrm ext}$ as defined below. 
If $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ is globally\n hyperbolic, then the orbits of $X$ through points $p\\in \\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ are\n complete.\n\\end{Theorem}\n\n\nIn view of the recent classification of orbits of Killing vector fields\nin asymptotically flat space--times of \\cite{ChBeig2} it is of\ninterest to prove the equivalent of Theorem~\\ref{T2} for\n``stationary--rotating'' Killing vectors $X$, as defined in\n\\cite{ChBeig2}.\nIn Theorem~\\ref{Tnowy} below we prove that generalization.\n\n \n \\section{Definitions, proof of Theorem {\\protect\\ref{T1}}.}\n \\label{proof1}\n \n Throughout this work all objects under consideration are assumed to be smooth.\n For a vector field $W$ we denote by $\\phi_t[W]$ the (perhaps \n defined only locally) flow generated by $W$. Consider a Killing\n vector field $X$ which is time-like for $p\\in\\Sigma_{\\mathrm ext}$. If the orbits \n $\\gamma_p$ of $X$ are complete through points $p\\in\\Sigma_{\\mathrm ext}$, then\nwe define the asymptotically flat four--end $M_{\\mathrm ext} $ by\n\\begin{equation}\n\\label{mext}\n M_{\\mathrm ext} = \\cup_{t\\in {\\Bbb R}} \\phi_t[X](\\Sigma_{\\mathrm ext}),\n\\end{equation}\nand the {\\em domain of outer communications\\\/} $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ by\n$$ \\langle\\langle M_{\\mathrm ext}\\rangle\\rangle = J^-(M_{\\mathrm ext})\\cap J^+(M_{\\mathrm ext}).$$\n\n Let $R > 0$ and let $(g_{ij},K_{ij})$ be \ninitial data on $\\Sigma_{\\mathrm ext}\\equiv\\Sigma_R \\equiv {\\Bbb R}^3 \\setminus B(R)$\nsatisfying \n\\begin{equation}\ng_{ij} - \\delta_{ij} = O_{k} (r^{-\\alpha}), \\qquad\n K_{ij} = O_{k-1} (r^{-1-\\alpha}), \n\\label{K.99}\n\\end{equation}\nwith some $\nk\\ge 1$ and some $0<\\alpha< 1$. A set $(\\Sigma_{\\mathrm ext},g_{ij},K_{ij})$ satisfying\nthe above will be called {\\em an asymptotically flat three--end}. \nHere a function $f$, defined on $\\Sigma_{R}$,\nis said to be $ O_k(r^\\beta)$ if there exists a constant $C$ such that\nwe have\n$$\n\\forall\\ 0 \\leq i \\leq k \\qquad |\\partial^i f| \\leq C r^{\\beta - i}.\n$$\n\nWe shall need the following result, which is a straightforward\nconsequence\\footnote{Actually in \\cite{Nomizu} it is assumed that \n $(M,g_{ab})$ is Riemannian. The reader will note that all\n the assertions and proofs of \\cite{Nomizu} remain valid word for word\n when ``Riemannian\" is replaced by ``pseudo--Riemannian\".}\n of what has been proved in \\cite{Nomizu}:\n \n\\begin{Theorem}[Nomizu]\\label{TNomizu}\nLet $(M,g_{ab})$ be a (connected) simply connected analytic pseudo--Riemannian \nmanifold, and suppose that there exists a Killing vector field $Y$ defined\non an open connected subset $\\cal O$ of $M$. Then there exists a Killing \nvector field $\\hat Y$ defined on $M$ which\ncoincides with $Y$ on $\\cal O$.\n\\end{Theorem}\n\nLet us pass to the proof of Theorem~\\ref{T1}. Without loss of generality\nwe may assume that $X$ is future oriented for $p\\in\\Sigma_{\\mathrm ext}$.\nSimple connectedness and analyticity of $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ together with Theorem\n\\ref{TNomizu} allow us to conclude that \nthe Killing vector $Y$ can be globally extended to a Killing vector field\n$\\hat Y$ defined on $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$.
The time-likeness of the ADM four--momentum \n$p^\\mu$ allows us to use the results in \\cite{ChBeig2} to assert that\nthere exists a linear combination $Z$ (with constant coefficients) \nof $X$ and $\\hat Y$ which has complete periodic orbits through all\npoints $p$ in $M_{\\mathrm ext}$ satisfying $r(p)\\ge R$, for \nsome $R$. (Moreover $Z$ and $X$ commute.) To prove Theorem~\\ref{T1} we need \nto show that the orbits of $Z$ are complete (and periodic) for\nall $p\\in \\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$.\n\nConsider, thus, a point $p\\in \\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$. There exist $q_\\pm \\in M_{\\mathrm ext}$, with\n$r(q_\\pm)\\ge R$, such that $p\\in J^- ( q_+)\\cap J^+ ( q_-)$. Completeness\n and periodicity of the orbits\n$\\gamma_{q_\\pm}[Z]\\equiv \\cup_{t\\in{\\Bbb R}}\\phi_t[Z](q_\\pm)$\nof $Z$ through $q_\\pm$ implies that the sets $\\gamma_{q_\\pm}[Z]$\n are compact.\nGlobal hyperbolicity of $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ implies then that \n$$ K\\equiv J^- (\\gamma_{q_+}[Z])\\cap J^+ (\\gamma_{q_-}[Z])$$\n is compact.\n \nFor $q\\in \\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ let $t_\\pm(q)\\in {\\Bbb R} \\cup \\{\\pm \\infty\\}$ be \nthe forward and backward life time of orbits of $Z$ \nthrough $q$, defined by the requirement that $(t_-(q),t_+(q))$ is the \nlargest\nconnected interval containing $0$ such that the solution $\\phi_t[Z](q)$ \nof the equation $d\\phi_t[Z](q)\/dt = Z\\circ \\phi_t[Z](q)$ is defined for\nall $t\\in (t_-(q),t_+(q))$. From continuous dependence of solutions of \nODEs upon initial values it follows that $t_+$ is a lower \nsemi--continuous function and $t_-$ is an upper\nsemi--continuous function.\n\nLet $\\gamma:[0,1]\\rightarrow M$ be any future oriented causal curve \nsuch that $\\gamma(0)=q_-$, \n$\\gamma(1)=q_+$,\nand $p\\in \\gamma$. Set\n\\begin{equation}\n\\label{Tdef}\nT_+ = \\inf _{q\\in \\gamma} t_+(q), \\qquad \nT_- = \\sup _{q\\in \\gamma} t_-(q).\n\\end{equation}\nHere and elsewhere \n$\\inf$ and $\\sup$ are taken in ${\\Bbb R} \\cup \\{\\pm \\infty\\}$. If $T_\\pm = \\pm \n\\infty$ we are done; suppose thus that $T_+\\ne \\infty$; the case \n$T_-\\ne -\\infty$ is analyzed in a similar way.\nBy lower semi--continuity of $ t_+$ and compactness of $\\gamma$\nthere exists $\\tilde p\\in \\gamma$ such that $t_+(\\tilde p)= T_+$. By\nglobal hyperbolicity the\nfamily of causal curves $\\phi_t[Z](\\gamma)$, $t\\in [0,T_+)$,\naccumulates at a causal curve $\\tilde\\gamma\\subset K$. Consequently the \norbit $ \\phi_t[Z](\\tilde p)$, $t\\in [0,T_+)$, has an accumulation point\nin $K$. It follows that $\\phi_t[Z](\\tilde p)$ can be extended beyond\n$T_+$, which gives a contradiction unless\n$T_+=\\infty$, and the result follows.\\hfill $\\Box$\n\n\\section{Proof of Theorem {\\protect\\ref{T2}}.}\n\\label{proof2}\n\n{\\bf Proof of Theorem {\\protect\\ref{T2}:}} Without loss of generality\nwe may suppose \nthat $X$ is future oriented for $p\\in \\Sigma_{\\mathrm ext}$. \n Consider a point $p\\in\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$; there exist\n$p_\\pm\\in M_{\\mathrm ext}$ such that $p\\in J^+ (p_-)\\cap J^-(p_+)$. Let $ \\Sigma$ be \na Cauchy surface for $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$; without loss of generality we may assume\nthat $p_-\\in I^- (\\Sigma)$ and $p_+\\in I^+ (\\Sigma)$.\nLet $t_\\pm$ be defined as in the proof of Theorem~\\ref{T1}; we have\n$t_-(p_\\pm)=-\\infty$, $t_+(p_\\pm)=\\infty$.
Let $\\gamma:[0,1]\\rightarrow \n\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ be any causal curve such that $\\gamma (0)=p_-$, $\\gamma (1)=p_+$,\nand $p\\in\\gamma$. Define $T_\\pm$ by eq.\\ \\eq{Tdef}. By lower \nsemi--continuity of $t_+$ there exists $\\tilde p\\in \\gamma$ such\nthat $t_+(\\tilde p)=T_+$. Define\n$$\n\\tilde \\Omega = \\{ s \\in [0, T_+): \\phi_s[X](\\tilde p)\\in \nI^-(\\Sigma)\\}\\ .\n$$\nConsider any $s\\in\\tilde\\Omega$. Then the curve obtained \nby concatenating $\\phi_t[X]( p_-)$, $t\\in [0,s]$, with\n $\\phi_s[X](\\gamma)$ is a future directed causal curve which starts at\n $p_-$ and passes through $\\phi_s[X](\\tilde p)$, hence\n \\begin{equation}\n s\\in\\tilde\\Omega \\quad \\Rightarrow \\quad \\phi_s[X](\\tilde p) \\in K \\equiv\n J^+(p_-)\\cap J^-(\\Sigma)\\ .\n \\label{compact}\n \\end{equation}\n By global hyperbolicity of $\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ the set $K$ is compact. If\n $\\tilde\\Omega= \\emptyset$ set $\\omega =0$, otherwise set\n $$\n \\omega = \\sup \\tilde \\Omega\\ .\n $$\n Consider any sequence $\\omega_i\\in \\tilde \\Omega$ such that\n $\\omega_i\\rightarrow \\omega$. By \\eq{compact} and by compactness of $K$ \n the sequence $\\phi_{\\omega_i}[X](\\tilde p)$ has an accumulation point in \n $K$. It follows that $\\omega < T_+$. \n \n By definition of $\\omega$ we have $ \\phi_s[X](\\tilde p)\\in \nJ^+(\\Sigma)$ for all $\\omega \\le s < T_+$. By Lemma 2.5 of \\cite{Chorbits}\nit follows that $T_+=\\infty$. As $t_+(p)\\ge t_+(\\tilde p) =T_+$ we obtain\n $t_+(p)=\\infty$. The equality $t_-(p)=-\\infty$ for all $p\\in\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$ is \n obtained\n similarly by using the time--dual version of Lemma 2.5 of \\cite{Chorbits}.\n \\hfill $\\Box$\n\nBefore presenting a generalization of Theorem~\\ref{T2} which covers\nthe case of ``stationary--rotating'' Killing vectors, as defined\nin \\cite{ChWald,ChBeig2}, we need to introduce some\nterminology. Following \\cite{ChWald} we shall say that the orbit\nthrough $p$ of a Killing vector field $Z$ is time--oriented if there\nexists $t_p>0$ such that $\\phi_{t_p}[Z](p)\\in I^+(p)$. It then follows\nthat for all $\\alpha \\in {\\Bbb R}$ and all $z\\in{\\Bbb N}$ we have\n$\\phi_{\\alpha+zt_p}[Z](p)\\in I^+(\\phi_{\\alpha}[Z](p))$: if \n$\\gamma$ is a timelike curve from $p$ to $\\phi_{t_p}[Z](p)$, one\nobtains a timelike curve from $\\phi_{\\alpha}[Z](p)$ to \n$\\phi_{\\alpha+zt_p}[Z](p)$ by concatenating $\\phi_{\\alpha}[Z](\\gamma)$\nwith $\\phi_{\\alpha+t_p}[Z](\\gamma)$ with\n$\\phi_{\\alpha+2t_p}[Z](\\gamma)$, etc.\n\nA trivial example of a Killing vector field with time--oriented orbits\nis given by a timelike Killing vector field. A more interesting example\nis that of ``stationary--rotating'' Killing vector fields, as\nconsidered in \\cite{ChWald,ChBeig2} --- loosely speaking, those\nare Killing vectors which behave like $\\alpha \\partial\/\\partial t +\n\\beta \\partial\/\\partial \\phi$ in the asymptotic region, with $\\alpha$\nand $\\beta$ non--vanishing, where $\\phi$ is an angular\ncoordinate.
Thus the theorem that follows \napplies in the ``stationary--rotating'' case.\n\n\\begin{Theorem}\\label{Tnowy}\n The conclusion of Theorem~\\ref{T2} will hold if to its hypotheses\n one adds the requirement that $k$ in \\eq{K.99} is\n larger than or equal to 2, and if the hypothesis that $X$ is\n timelike is replaced by the assumption that the orbits of $X$\nare\n{\\em \\\/time--oriented} through all $p\\in\\Sigma_{\\mathrm ext}$.\n\n\\end{Theorem}\n\n{\\bf Proof:} \nThe proof\nis achieved by a minor modification of the proof of Theorem~\\ref{T2},\nas follows: Let $p_\\pm$ be as in that proof; from the asymptotic\nbehavior of Killing vector fields in asymptotically flat space--times\n({\\em cf.\\\/ e.g.\\\/} Section 2 of \\cite{ChBeig1}) it follows that we\ncan without loss of generality assume that\n\\begin{eqnarray*}\n&\\phi_{2\\pi}[X](p_+)\\in I^+(p_+),\\qquad \\phi_{2\\pi}[X](p_-)\\in\nI^+(p_-)\\ ,&\n\\\\\n& \\forall s\\in [0,2\\pi]\\quad \\phi_{s}[X](p_-)\\in\nI^-(\\Si)\\ ,\\quad\n \\phi_{s}[X](p_+)\\in\nI^+(\\Si)\\ .\n&\n\\end{eqnarray*}\nThe proof proceeds then as before, up to the definition of the set\n$K$, eq. \\eq{compact}. In the present case that definition is replaced\nby\n$$\n K \\equiv\n J^+(\\cup_{s\\in[0,2\\pi]}\\phi_s[X](p_-))\\cap J^-(\\Sigma)\\ .\n$$\nThis set is again compact, in view of global hyperbolicity of\n$\\langle\\langle M_{\\mathrm ext}\\rangle\\rangle$. The fact that for $s\\in\\tilde\\Omega$ we have\n$\\phi_s[X](\\tilde p)\\in K$ follows by considering the causal curve\nobtained by concatenating a causal curve $\\gamma_1$ from\n$\\phi_{s-\\lfloor s\/2\\pi\\rfloor2\\pi}[X](\\tilde p)$ to\n$\\phi_s[X](\\tilde p)$ with $\\phi_s[X](\\gamma)$. Here $\\lfloor \\alpha\n\\rfloor$ denotes the largest \ninteger smaller than or equal to $\\alpha$; the existence of $\\gamma_1$\nis guaranteed by our discussion above. \\hfill $\\Box$\n\n{\\bf Acknowledgments:} The author is grateful to I. R\\'acz for\ncomments about a previous version of this paper.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\n\nThe past decade has witnessed great progress in crowd counting. The counting strategies have evolved gradually from detection-based methods~\\cite{li2008estimating,zeng2010robust,lin2010shape,shami2018people} to density regression-based methods~\\cite{lempitsky2010learning,zhang2016single,li2018csrnet} since wild crowd scenes often bring challenges like heavy congestion and large scale variation. Nowadays, convolutional neural network (CNN) based counting methods~\\cite{KONG2020105927,LI2020106485,GUO2021106691,onoro2016towards,sam2017switching,liu2019adcrowdnet,liu2020crowd} have attracted a lot of attention due to their ability to extract rich hierarchical features. CNN based density estimation methods project the features of an input image onto a density map, on which the total people count can be determined by summing up the densities. To handle the scale variation problem, these methods generally adopt several techniques such as exploring multi-scale features~\\cite{zhang2016single,KONG2020105927,guo2019dadnet,sindagi2017generating}, embedding dilation convolution~\\cite{li2018csrnet}, utilizing attention mechanism~\\cite{hossain2019crowd,zhang2019relational,zhang2019attentional,liu2019recurrent}, and so on.
However, the counting performance is still far from satisfactory.\n\n\\begin{figure}[hbt]\n\t\\centering\n\t\t\\includegraphics[scale=0.35]{fig1.png}\n\t\\caption{The crowd scenes can be coarsely divided into three regions according to the distances from the camera: far (f), middle (m) and near (n). The counting network usually performs differently in the three regions, with larger count estimation errors in the regions far away from the camera. The f and m regions are typically characterized by heavy people occlusion and high densities.}\n\t\\label{fig1}\n\\end{figure}\n\nThe data-driven CNN counting network usually suffers from non-uniform performance in different regions, especially resulting in relatively larger estimation errors in high-density regions where people are small and are heavily occluded.\nFigure~\\ref{fig1} shows that the count estimation errors of the given input crowd scenes mainly come from the high-density regions far away from the camera. It is known that the deep convolutional neural network has a hierarchy of features. The layers close to the input of the network produce low-level features with local receptive fields, such as edge information, texture information, etc. The layers close to the output of the network produce high-level semantic features with large or global receptive fields. The high-level features contain context information that is important to scene content analysis. \nHowever, in the crowd counting task, the heads in the high-density regions far from the camera are usually very small and the corresponding local low-level features are also important to the density estimation. When the network depth increases, the detailed information contained in the local low-level features gradually decreases. Hence, the traditional CNN based counting network does not work well in regions with congested small heads. \n\nIn this paper, we aim to ensure that the features projected to the density map contain rich multi-scale information about the crowd scenes, especially the high-density regions. To this end, we present a multi-scale feature aggregation convolutional network named MSFANet. \nThe core idea of MSFANet is to enhance the feature representation ability by aggregating low-level features with local receptive fields and high-level semantic features with large receptive fields.\nMSFANet densely aggregates the features of various scales at multiple stages by utilizing two feature aggregation modules.\nThe first module is the short aggregation module, named ShortAgg. ShortAgg aggregates the features of adjacent scales and sends them into the next convolution block. A dense insertion of the ShortAgg modules helps the local low-level features flow gradually from the bottom to the top of the network.\nThe second module is the skip aggregation module, named SkipAgg. SkipAgg directly combines the local low-level features with the high-level features that have much larger receptive fields. Inspired by the recent great success that the vision transformers have made in tasks such as object classification~\\cite{2020An} and object detection~\\cite{2020CarionEnd}, we introduce Swin-Transformer blocks~\\cite{2021Swin} to extract low-level global features and propagate them to high-level CNN features in the SkipAgg module; a minimal sketch of both aggregation modules is given below. \n\nFurthermore, we present a loss function based on the combination of the global and local density estimation losses.
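\nTo fix ideas before describing the loss, the following PyTorch-style sketch illustrates the two aggregation paths. It is an illustration rather than the exact MSFANet configuration: the channel counts, the $1\\times 1$ projections and the bilinear resizing are our own assumptions.\n\\begin{verbatim}\n# Hypothetical sketch of the two aggregation modules; channel counts,\n# 1x1 projections and resizing choices are illustrative assumptions.\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass ShortAgg(nn.Module):\n    # Fuse features of two adjacent scales, feeding the next block.\n    def __init__(self, c_low, c_high):\n        super().__init__()\n        self.proj = nn.Conv2d(c_low, c_high, kernel_size=1)\n    def forward(self, f_low, f_high):\n        f_low = F.interpolate(self.proj(f_low), size=f_high.shape[-2:],\n                              mode='bilinear', align_corners=False)\n        return f_high + f_low      # aggregated input for the next block\n\nclass SkipAgg(nn.Module):\n    # Send low-level (e.g. Swin-Transformer) features directly to a\n    # high-level CNN stage, skipping the intermediate blocks.\n    def __init__(self, c_low, c_high):\n        super().__init__()\n        self.proj = nn.Conv2d(c_low, c_high, kernel_size=1)\n    def forward(self, f_low, f_deep):\n        f_low = F.interpolate(self.proj(f_low), size=f_deep.shape[-2:],\n                              mode='bilinear', align_corners=False)\n        return torch.cat([f_deep, f_low], dim=1)\n\\end{verbatim}\nWith the two modules in place, we can now specify the loss.\n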
The global loss part is the standard Euclidean loss between the whole estimated density map and the corresponding ground truth of an input image. The local loss part is called the pooling loss (PLoss), which utilizes a locality-aware loss (LA-Loss) kernel to aggregate all the local losses in the pooling manner. The PLoss helps attenuate the loss differences between regions of different crowd densities and benefits the generalization of the learned counting network.\n\nThe major contributions of this work can be summarized as follows:\n\\begin{enumerate}\n\\itemsep=0pt\n\\item We introduce a multi-scale feature aggregation convolutional network (MSFANet) that utilizes two feature aggregation paths to propagate local low-level features from the bottom to the top of the network both gradually and directly, which helps strengthen the multi-scale representation ability of the output features.\n\\item We present a short aggregation module (ShortAgg) that merges features of adjacent scales and a skip aggregation module (SkipAgg) that merges features with large scale differences.\n\\item We present a combined density estimation loss that considers both the global and local density estimation losses, which alleviates the bias caused by the non-uniform crowd distributions of low and high densities.\n\\item Extensive experimental results on four challenging crowd datasets demonstrate that our proposed approach performs favorably against state-of-the-art crowd counting methods.\n\n\n\\end{enumerate}\n\n\n\\section{Related Works}\n\\begin{figure*}[tbh]\n\t\\centering\n\t\\includegraphics[scale=0.36]{fig2.png}\n\t\\caption{The architecture of the proposed multi-scale feature aggregation network (MSFANet). It consists of the VGG-16 backbone, the short feature aggregation (ShortAgg) path, and the skip feature aggregation (SkipAgg) path. ShortAgg fuses features of adjacent scales, and SkipAgg directly propagates the local low-level transformer features from the bottom to the top of the network. ShortAgg and SkipAgg together make the output features rich in multi-scale information, which helps promote the performance of the counting network.}\n\t\\label{fig2}\n\\end{figure*}\n\n\\subsection{Multi-scale Feature Fusion}\nThese kinds of approaches utilize multi-scale features of the input image to address the problem of object scale variation.\nEarly on, Zhang \\textit{et al.}~\\cite{zhang2016single} proposed the multi-column convolutional neural network (MCNN) to capture multi-scale features by utilizing convolution kernels of different sizes in three branches.\nOnoro \\textit{et al.}~\\cite{onoro2016towards} developed the Hydra CNN, which captures multi-scale information by using image patches from an image pyramid. As a result, the Hydra network can robustly estimate crowd density in various crowded scenarios.\nThe Switch-CNN, proposed by Sam \\textit{et al.}~\\cite{sam2017switching}, consists of three CNN regressors that have the same structure as MCNN. However, it uses a switch classifier to select the best regressor for different image patches.\nSindagi \\textit{et al.}~\\cite{sindagi2017generating} presented a contextual pyramid CNN (CP-CNN) that incorporates the global and local context to produce high-quality crowd density maps.\nThe TDF-CNN, proposed by Sam \\textit{et al.}~\\cite{sam2018top}, delivers top-down feedback information as a correction signal to the bottom-up network to correct the density prediction.
The bottom-up CNN has two columns with different receptive fields to regress the density maps.\nThe DADNet, proposed by Guo \\textit{et al.}~\\cite{guo2019dadnet}, generates high-quality density maps by using multi-scale dilated attention to learn context cues of multi-scale features.\n\n\\subsection{Attention Mechanism}\nThese kinds of methods usually utilize the attention mechanism to highlight the regions containing the crowd and filter out the noise of the background.\nLiu \\textit{et al.}~\\cite{liu2019recurrent} utilized attention models to detect ambiguous image regions recurrently and zoom them into high resolution for re-inspection.\nHossain \\textit{et al.}~\\cite{hossain2019crowd} proposed a scale-aware attention network that uses the global and local attention mechanism to automatically select features of the appropriate scale.\nZhang \\textit{et al.}~\\cite{zhang2019relational} proposed a relational attention network that is composed of an attention module and a relation module. The attention module utilizes a local self-attention (LSA) and a global self-attention (GSA) mechanism to capture long- and short-range interdependence information. The relation module fuses the information obtained by LSA and GSA to obtain robust features.\nThe attentional neural field network (ANF), proposed by Zhang \\textit{et al.}~\\cite{zhang2019attentional}, utilizes a non-local attention mechanism to expand the receptive field, capture long-range dependencies, and enhance the representation of multi-scale features in the network.\n\n\\subsection{Vision Transformer}\nInspired by the success that transformers have achieved in the field of natural language processing, researchers began to explore the self-attention mechanism for visual tasks such as image classification~\\cite{2020An,2021Learning,2021Swin}, object detection~\\cite{2020CarionEnd,2020Deformable,2020zhangEnd,2020SunRethinking}, and object segmentation~\\cite{2020WangEnd,2020MaX}.\nSpecifically, ViT~\\cite{2020An} directly applies the transformer encoder to accomplish the classification task. It interprets an image as a series of patches and takes the linear embedding sequence of these image patches as input of the transformer network. \nDETR~\\cite{2020CarionEnd} is an end-to-end CNN- and transformer-based object detection network.
It first utilizes a CNN backbone to extract features and then utilizes a stack of transformer blocks to generate features for object detection.\nBased on DETR, Deformable DETR~\\cite{2020Deformable} uses deformable convolution to sparsely sample the features and focuses only on the key positions, which reduces the computational complexity and improves the convergence speed.\nSimilarly, SETR~\\cite{2020ZhangRethinking} proposes to replace the CNN encoder with a pure transformer encoder, which can fully explore the segmentation capabilities of ViT~\\cite{2020An}.\nWang \\textit{et al.} proposed a video instance segmentation framework named VisTR~\\cite{2020WangEnd}, which treats the video instance segmentation task as an end-to-end parallel sequence decoding and prediction problem.\nLiu \\textit{et al.} proposed a hierarchical transformer, named Swin Transformer~\\cite{2021Swin}, which improves the efficiency by restricting the self-attention computation to non-overlapping local windows while allowing cross-window connections.\n\nIn this paper, we utilize Swin Transformer blocks to capture low-level feature information and propagate it to top CNN layers to accomplish the fusion of transformer features and CNN features, which helps promote the multi-scale feature representation ability of the counting network.\n\n\\section{Our Approach}\nThe overall architecture of the proposed MSFANet method is illustrated in Figure~\\ref{fig2}. The VGG-16 model is chosen as the backbone due to its simple and powerful architecture and its success in many vision tasks. Two additional feature propagation paths go from the bottom of the backbone to the top of it via two feature aggregation modules. They are called the short aggregation module (ShortAgg) and the skip aggregation module (SkipAgg), respectively.\nIn this section, we first introduce the ShortAgg module and the SkipAgg module and then present the pooling loss (PLoss).\n\n\\subsection{ShortAgg module}\n\\begin{figure}[tbh]\n\t\\centering\n\t\\includegraphics[scale=0.38]{fig3.png}\n\t\\caption{The configuration of the proposed MSFANet method. The middle part is the VGG-16 backbone, the left part is the SkipAgg, and the right part is the ShortAgg. The density regressor is at the end of the fused output features.}\n\t\\label{fig3}\n\\end{figure}\nThe purpose of ShortAgg is to aggregate the features that have adjacent receptive fields (i.e., features of adjacent scales). ShortAgg combines the features of the current convolution block and the preceding one and takes them as input of the subsequent convolution block. The structure of ShortAgg is similar to the residual connection proposed by He \\textit{et al.}~\\cite{he2016deep}. However, the residual connection aims to deal with the problem of vanishing gradients, whereas ShortAgg aims to promote feature fusion. The ShortAgg module is defined as: \n\\begin{equation}Y_{SH}^{i}\\;=\\;P\\{F(X_i,w)\\}+ w_s\\ast X_i, \\end{equation}\nwhere $ X_i $ and $ Y_{SH}^{i} $ represent the input and the output features of the $i$-th convolution block, respectively. The function $F$ represents the feature mapping, which consists of two or three convolutional layers in a block, and $P$ denotes the maximum pooling operation. For example, $ F=\\omega_2\\ast \\sigma(\\omega_1\\ast X+b_1)+b_2 $ indicates a two-layer convolution block, where $\\sigma$ denotes the ReLU activation function, and $\\omega_1$, $b_1$ and $\\omega_2$, $b_2$ represent the weights and biases of the corresponding convolutional layers, respectively.\nThe $'+'$ operation indicates the ShortAgg module that accomplishes the feature aggregation between $X$ and $F$. A simple convolution $w_s$ is applied to $X$ to ensure that the dimensions of $X$ and $F$ are the same. \n\n
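To make the aggregation concrete, the following is a minimal PyTorch-style sketch of one ShortAgg step; the class name, channel sizes, and input resolution are illustrative assumptions rather than the released implementation, and the sketch only mirrors the equation above by summing the pooled block output $P\\{F(X_i,w)\\}$ with a strided $1\\times1$ projection of the block input.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\n# One ShortAgg step: y = maxpool(F(x)) + conv1x1_stride2(x).\n# `block` plays the role of F (two or three 3x3 conv layers); the 1x1\n# convolution matches channels and resolution so both terms can be summed.\nclass ShortAgg(nn.Module):\n    def __init__(self, block, in_ch, out_ch):\n        super().__init__()\n        self.block = block                     # F(., w)\n        self.pool = nn.MaxPool2d(2)            # P{.}\n        self.proj = nn.Conv2d(in_ch, out_ch, 1, stride=2)  # w_s * x\n\n    def forward(self, x):\n        return self.pool(self.block(x)) + self.proj(x)\n\n# Aggregation across a VGG-style block with 128 -> 256 channels:\nf = nn.Sequential(\n    nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),\n    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),\n)\nagg = ShortAgg(f, in_ch=128, out_ch=256)\ny = agg(torch.randn(1, 128, 56, 56))           # shape (1, 256, 28, 28)\n\\end{verbatim}\n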
The configuration of the ShortAgg is presented in Figure~\\ref{fig3}. The right part with blue connection lines represents the established ShortAgg, and the gray rectangles are the $1\\times1$ convolutional layers introduced by ShortAgg; $s$ represents the stride of the convolution. We utilize the convolution with a kernel of $1\\times1$ and $s=2$ in the ShortAgg to ensure that the dimensions of $X$ and $F$ are the same. \nWe divide the first thirteen layers of the VGG-16 model into five blocks, and each block except the last one performs a maximum pooling operation at the end. There are two $3\\times3$ convolution layers in each of the first two blocks and three $3\\times3$ convolution layers in each of the last three blocks. Starting from the second block, each block adopts a ShortAgg module.\n\n\\subsection{SkipAgg module}\nLocal low-level CNN features contain detailed information about objects and play an important role in detecting small objects. In the crowd counting task, the human heads in the far-away regions are usually very small, and the corresponding features are easily lost at the top of the network.\nTherefore, we further propose the skip aggregation module (SkipAgg) to directly propagate the local low-level features to the top layers of the CNN. Specifically, SkipAgg aggregates the input and the output of multiple stacked convolutional blocks to achieve cross-scale feature fusion.\n\nAs shown in Figure~\\ref{fig3}, we introduce the Swin Transformer block to extract local low-level features and directly propagate them to high-level features to obtain fused Transformer-CNN features. The Swin Transformer is adopted because it can effectively model the local information via the multi-head self-attention module within the patch windows.\nThen, $1\\times1$ convolutions with different strides are used to adapt the resolutions of the transformer features to the high-level CNN features. The SkipAgg module is defined as:\n\\begin{equation}\n\tY_{SK}^{i+n-1}=(P\\circ F)^{n}(X_i,w)+w_s\\ast X_i,\n\\end{equation}\nwhere $(P\\circ F)^{n}$ denotes $n$ successive applications of a convolution block followed by pooling, $n$ represents the number of blocks that are skipped, $X_i $ represents the input of the $i$-th convolution block, and $Y_{SK}^{i+n-1}$ represents the output of the $(i+n-1)$-th convolution block.\nIn our MSFANet, the values of $n$ are set to 2, 3, and 4. When $n = 2$, the output of the first block connects to the output of the third block (i.e., skipping two blocks). Similarly, $ n=3$ denotes that the output of the first block connects to the output of the fourth block (i.e., skipping three blocks).\nAs a result, the low-level transformer features generated in the first block can be directly propagated to the upper CNN layers and fused with the high-level CNN features that have large receptive fields. This can further make the output features of the network contain more information about small objects.\n\n
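The wiring of one SkipAgg connection can be sketched as follows, under simplifying assumptions: the plain \\texttt{TransformerEncoderLayer} below is only a stand-in for the two Swin Transformer blocks used in MSFANet (window partitioning is omitted), and the channel sizes and stride are illustrative.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\n# SkipAgg wiring (simplified): low-level features from an early block are\n# processed by a small transformer branch, projected by a strided 1x1\n# convolution, and added element-wise to a high-level CNN feature map.\nclass SkipAgg(nn.Module):\n    def __init__(self, low_ch, high_ch, stride):\n        super().__init__()\n        self.attn = nn.TransformerEncoderLayer(\n            d_model=low_ch, nhead=4, dim_feedforward=2 * low_ch,\n            batch_first=True)\n        self.proj = nn.Conv2d(low_ch, high_ch, 1, stride=stride)\n\n    def forward(self, low, high):\n        b, c, h, w = low.shape\n        tokens = low.flatten(2).transpose(1, 2)   # (B, H*W, C)\n        tokens = self.attn(tokens)                # self-attention branch\n        low = tokens.transpose(1, 2).reshape(b, c, h, w)\n        return high + self.proj(low)              # element-wise fusion\n\nlow = torch.randn(1, 96, 56, 56)    # early-block features\nhigh = torch.randn(1, 512, 7, 7)    # late-block features\nfused = SkipAgg(low_ch=96, high_ch=512, stride=8)(low, high)\n\\end{verbatim}\nIn MSFANet itself, the transformer branch is attached to the first block and its output is adapted to three different CNN blocks via three differently strided $1\\times1$ convolutions, as detailed below.\n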
\\begin{figure}[tbh]\n\t\\centering\n\t\\includegraphics[scale=0.34]{fig4.png}\n\t\\caption{The demonstration of the proposed pooling loss. The locality-aware losses are gathered in the pooling manner to obtain the whole density estimation loss.}\n\t\\label{fig4}\n\\end{figure}\n\nThe configuration of SkipAgg is presented in Figure~\\ref{fig3}. The left part shows the SkipAgg path. The transformer module contains a stack of layers, including one $1\\times1$ convolutional layer with 96 channels, one average pooling layer with a stride of 2, two Swin Transformer blocks, one convolutional layer with 128 channels, and one interpolation layer. \nThen, three $1\\times1$ convolutional layers with strides of 4, 8, and 8 are utilized to adapt the transformer features to three different CNN blocks, respectively. The element-wise addition operation is adopted to merge the corresponding features.\n\n\nIt is noted that each SkipAgg module also fuses with the ShortAgg module, which strengthens the multi-scale feature aggregation and promotes feature propagation from the bottom of the network to the top layers. The whole process can be defined as:\n\\begin{equation}\n\tY^{i+n-1}=Y_{SH}^{i+n-1} + Y_{SK}^{i+n-1},\n\\end{equation}\nwhere the output features $Y^{i+n-1}$ of the $(i+n-1)$-th convolution block contain features from the main backbone, the ShortAgg path, and the SkipAgg path.\n\n\\subsection{Loss Function}\nMost of the previous CNN-based crowd counting works adopt the standard Euclidean distance between the whole estimated density map and the whole ground truth density map as the training loss. The traditional Euclidean distance loss is defined as follows:\n\\begin{equation}\n\tL_E = \\frac{1}{K}\\sum_{i=1}^{K}||D(X^i;\\Theta) -D^i||_{2}^{2},\n\\end{equation}\n\nwhere $ K $ is the number of samples used for training, $D(X^i;\\Theta) $ and $ D^i $ represent the estimated density map and the ground truth density map for an input sample $ X^i $, respectively, and $ \\Theta $ denotes the trainable parameters of the counting network. This kind of loss averages the errors caused by different local density distributions. In other words, it ignores the individual contribution of each local density distribution.\n\n\nTo overcome this issue, we present a pooling loss (PLoss) that uses a new loss kernel named locality-aware loss (LA-Loss) to calculate each loss value in the pooling manner. The pooling loss is presented in Figure~\\ref{fig4}. The LA-Loss is defined as follows:\n\\begin{equation}\n\tl_{mn}^{i}( \\Theta) = \\frac{||D_{mn}(X^i;\\Theta) -D_{mn}^i||_{2}^{2}}{SUM(D_{mn}^i)+1},\n\\end{equation}\nwhere \\textit{m} and \\textit{n} are the indices of the pooling window in the vertical and horizontal directions, respectively, and $ D_{mn}(X^i;\\Theta) $ and $ D_{mn}^i $ are the corresponding windows on the predicted density map and the ground truth density map for sample $ X^i $, respectively. $ SUM(D_{mn}^i) $ sums the densities of the pooling window $ D_{mn}^i $ to get the number of people in it.
The $+1$ term in the denominator prevents division by zero when a window contains no people.\n\n\n\nThe total loss $ l^{i}( \\Theta) $ of the whole density map for sample $ X^i $ is then obtained by gathering all the individual window losses of the map.\nFinally, the proposed PLoss is used for back-propagation during training and is computed as follows:\n\\begin{equation}\n\tL_P = \\frac{1}{K}\\sum_{i=1}^{K}l^{i}( \\Theta).\n\\end{equation}\nThe final density estimation loss function is defined as follows:\n\\begin{equation}\n\tL_T = \\alpha L_E + L_P,\n\\end{equation}\nwhere $\\alpha $ is a weighting coefficient that balances the standard Euclidean distance loss and the PLoss. In our experiments, we set $ \\alpha $ to 0.1.\n\n
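Under the non-overlapping window setting adopted in our experiments (the window size equals the stride), the combined loss $L_T$ can be written compactly with average pooling. The following PyTorch sketch is illustrative; the tensor names and the helper function are our own notation rather than the released code:\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\n# Combined loss L_T = alpha * L_E + L_P for a batch of density maps.\n# pred, gt: tensors of shape (B, 1, H, W); win: pooling window size.\ndef combined_loss(pred, gt, win=4, alpha=0.1):\n    # Global term L_E: squared error per sample, averaged over the batch.\n    l_e = ((pred - gt) ** 2).flatten(1).sum(dim=1).mean()\n    # Local terms: squared error and people count per window.\n    err = F.avg_pool2d((pred - gt) ** 2, win) * win * win  # window SSE\n    cnt = F.avg_pool2d(gt, win) * win * win                # window count\n    # LA-Loss per window, gathered over windows, averaged over samples.\n    l_p = (err \/ (cnt + 1)).flatten(1).sum(dim=1).mean()\n    return alpha * l_e + l_p\n\nloss = combined_loss(torch.rand(8, 1, 96, 128), torch.rand(8, 1, 96, 128))\n\\end{verbatim}\nHere $H$ and $W$ are assumed to be divisible by the window size, which holds for the cropped training patches.\n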
\\section{Experiments}\nIn this section, we demonstrate the effectiveness of the proposed method on four challenging crowd datasets, including the ShanghaiTech Part A dataset~\\cite{zhang2016single}, the UCF\\_CC\\_50 dataset~\\cite{idrees2013multi}, the UCF-QNRF dataset~\\cite{idrees2018composition}, and the WorldExpo'10 dataset~\\cite{zhang2015cross}. We first introduce the four datasets and then present the implementation details and the evaluation metrics used in the experiments. Finally, we present the experimental results and analysis. The implementation of our method is based on the PyTorch framework.\n\n\\subsection{Datasets}\n\\textbf{ShanghaiTech Part A Dataset.} This dataset is introduced by Zhang \\textit{et al.}~\\cite{zhang2016single} and contains 482 images randomly collected from the web. It is divided into the training and test subsets, with 300 images for training and 182 images for testing. In addition, the dataset contains many complex scenes with the challenge of large variations in crowd scale.\n\n\\textbf{UCF\\_CC\\_50 Dataset.} This dataset, introduced by Idrees \\textit{et al.}~\\cite{idrees2013multi}, presents many challenges. It contains only 50 images randomly collected from the web. The scenes are very crowded, with 1280 individuals on average in each image and 63974 annotations in total. It also covers a wide range of densities, with the head count per image varying from 94 to 4543. \n\n\\textbf{UCF-QNRF Dataset.} This is a large dataset introduced for crowd counting~\\cite{idrees2018composition}. It is collected from the web and has a large variation in perspective, image resolution, and crowd density. It contains 1535 images with a total of 1,251,642 annotations. It is divided into the training and test subsets, with 1,201 images for training and 334 images for testing. \n\n\\textbf{WorldExpo'10 Dataset.} This dataset is collected and published by Zhang \\textit{et al.}~\\cite{zhang2015cross} to study the problem of crowd counting in cross-domain scenes. It comes from the video sequences captured by 108 surveillance cameras at the 2010 Shanghai World Expo. There are 1132 annotated video sequences with 199,923 pedestrians annotated at the centers of their heads. The dataset is divided into a training set with 103 scenes and a test set with 5 scenes. \n\n\n\\subsection{Implementation details}\n\\textbf{Ground truth generation.}\nWe generate the ground truth density maps by using a Gaussian kernel to blur the head annotations. The Gaussian kernel is normalized to sum to one so that summing the density map gives the crowd count. In our experiments, we use a fixed-spread Gaussian to generate the density maps. Since the output density map is $ 1\/8 $ the size of the input image, we downsample the ground truth density map to one-eighth of the input resolution for the loss computation. \n\n\\textbf{Data Augmentation.}\nThe training data is augmented by randomly cropping image patches of $224\\times 224$ pixels from each image. On the UCF-QNRF dataset, the patch size is set to $512\\times 512$. We also scale the original image with scales of 0.75 and 1.25 and mirror each image patch horizontally.\n\n\\textbf{Training.}\nWe initialize the front part (the first 13 convolutional layers) of MSFANet by adopting a well-trained VGG-16 model. For the remaining layers of the network, we initialize the weights using a Gaussian distribution with a standard deviation of 0.01. The whole network is trained in an end-to-end manner. We train the MSFANet with $L_T $ by adopting the Adam optimization algorithm.\n\n\n\\subsection{Evaluation Metrics}\n\\label{secMetric}\nFollowing the common practice in crowd counting, we use the Mean Absolute Error (MAE) and the Mean Squared Error (MSE) to evaluate our method. MAE and MSE are defined as follows:\n\\begin{equation}\n\tMAE = \\frac{1}{K}\\sum_{i=1}^{K}|C_i-\\hat{C}_i|,\n\\end{equation}\n\\begin{equation}\n\tMSE =\\sqrt{\\frac{1}{K}\\sum_{i=1}^{K}(C_i-\\hat{C}_i)^{2}},\n\\end{equation}\nwhere $ K $ is the number of images in the test set, $ C_i $ is the ground truth crowd count of the \\textit{i}-th image, and $ \\hat{C}_i $ is the predicted crowd count obtained by integrating over the corresponding estimated density map. \n\n
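For clarity, the evaluation protocol can be summarized by the following small NumPy sketch (the function and variable names are illustrative); the predicted count of an image is simply the sum of its estimated density map:\n\\begin{verbatim}\nimport numpy as np\n\n# MAE and MSE over a test set, given per-image ground-truth counts and\n# predicted density maps (counts are obtained by summing each map).\ndef evaluate(gt_counts, pred_density_maps):\n    pred_counts = np.array([d.sum() for d in pred_density_maps])\n    gt_counts = np.asarray(gt_counts, dtype=np.float64)\n    mae = np.abs(gt_counts - pred_counts).mean()\n    mse = np.sqrt(((gt_counts - pred_counts) ** 2).mean())\n    return mae, mse\n\nmaps = [np.random.rand(96, 128) \/ 100 for _ in range(5)]\nmae, mse = evaluate([120, 80, 45, 300, 60], maps)\n\\end{verbatim}\n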
\\subsection{Ablation study}\n\\label{secAblation}\nWe conduct an ablation study on the ShanghaiTech Part A dataset~\\cite{zhang2016single} to evaluate the effectiveness of the proposed ShortAgg module, SkipAgg module, and PLoss. The experimental results are shown in Table~\\ref{tableAblation}.\n\n\n\\textbf{Baseline.} The baseline model is built from a VGG-16 model by removing the fifth max-pooling layer and the fully connected layers, and adding three additional convolution layers and one transposed convolution layer to regress the final density map. The model is trained in an end-to-end way with the standard Euclidean distance loss. The MAE is 65.22 and the MSE is 102.18 on the ShanghaiTech Part A dataset~\\cite{zhang2016single}.\n\n\\textbf{ShortAgg.} As shown in Table~\\ref{tableAblation}, the network with the ShortAgg (SH) module (denoted by Base$+$SH) improves the counting performance, which shows that ShortAgg effectively promotes the learned CNN features. Specifically, ShortAgg improves the counting accuracy by 8.7$\\%$ and 6.4$\\%$ in terms of MAE and MSE, respectively. Furthermore, we present a comparison of the feature visualization of the same convolutional layer between the baseline and the Base$+$SH network in (a) and (b) of Figure~\\ref{fig5}. We mark the high-density crowd region far away from the camera with a red rectangle. It is noted that in the red rectangle region, there is almost no feature response in the baseline model. In contrast, the Base$+$SH model has a stronger feature response.\n\n\\begin{table}[tbh]\n\t\\centering\n\t\\caption{Ablation study of the proposed ShortAgg (SH), SkipAgg (SK), and PLoss on the ShanghaiTech Part A dataset~\\cite{zhang2016single}.}\n\t\\label{tableAblation}\n\t\\scalebox{1.0}{\n\t\t\\begin{tabular}{|c|c|c|}\n\t\t\t\\hline\n\t\t\t& \\multicolumn{2}{c|}{Part A}\\\\\n\t\t\t\\hline\n\t\t\t\n\t\t\tMethod & MAE & MSE\\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\tBaseline & 65.22 & 102.18\\\\\n\t\t\n\t\t\tBase$+$SH & 59.54 & 95.61\\\\\n\t\t\n\t\t\tBase$+$SH$+$SK & 56.18 & 92.19\\\\\n\t\t\t\n\t\t\tBase$+$SH$+$SK$+$PLoss & \\textbf{54.67} & \\textbf{87.66}\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\\medskip{}}\n\t\\vspace{-0.3cm}\n\n\\end{table}\n\n\\begin{figure}[tbh]\n\t\\centering\n\t\\includegraphics[scale=0.26]{fig5.png}\n\t\\caption{Feature visualization results of a sample. (a), (b), (c), and (d) represent the visualization results of the first convolution layer of the sixth convolution block by adopting the Baseline, Base$+$SH, Base$+$SH$+$SK, and Base$+$SH$+$SK$+$PLoss models, respectively.}\n\t\\label{fig5}\n\\end{figure}\n\\begin{figure}[tbh]\n\t\\centering\n\t\\includegraphics[scale=0.35]{fig6.png}\n\t\\caption{Counting errors (MAE) of the baseline and MSFANet in three different distance regions of the ShanghaiTech Part A test set.} \n\t\\label{fig6}\n\\end{figure}\n\n\\textbf{SkipAgg.} It is seen in Table~\\ref{tableAblation} that the SkipAgg (SK) module further improves the Base$+$SH network. Compared with the Base$+$SH network, the obtained Base$+$SH$+$SK network decreases the MAE from 59.54 to 56.18 and the MSE from 95.61 to 92.19. This means that SkipAgg effectively merges the local low-level transformer features with the high-level CNN features and makes the output features better represent the small heads far away from the camera. This is also confirmed by the results of the feature visualization presented in (b) and (c) of Figure~\\ref{fig5}. It is seen that the SkipAgg module gives the network much stronger feature responses in the red rectangle region.\n\n\n\n\\textbf{PLoss.} In the above ablation study, the Baseline, Base$+$SH, and Base$+$SH$+$SK models only adopt the Euclidean distance loss for training. \nNow we use the combination of the PLoss and the standard Euclidean distance loss, $L_T$, to supervise the MSFANet training. It can be seen from Table~\\ref{tableAblation} that the combined loss decreases the MAE from 56.18 to 54.67 on the ShanghaiTech Part A dataset. In our experiments, for simplicity, we adopt the non-overlapping strategy when computing the PLoss. That is, the pooling window size equals the sliding stride, and we set the window size to $4\\times 4$ in all the experiments.\n\nAll in all, it can be seen from the red rectangle region in Figure~\\ref{fig5} that the feature response of the Base$+$SH$+$SK$+$PLoss model is closer to the distribution of the ground truth, which demonstrates that the proposed method has better feature representation ability for high-density regions far away from the camera. \n\nMoreover, we present a statistical analysis and a visualization analysis to further verify the effectiveness of the proposed MSFANet method. We divide an image scene into three regions according to the perspective, which are denoted by far\\_view, mid\\_view, and near\\_view, respectively. Typically, the people densities in the far\\_view and mid\\_view regions (regions far away from the camera) are larger than those in the near\\_view region. The region-wise errors reported below can be computed as shown in the sketch that follows. \n\n
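The following NumPy sketch shows one way to obtain such region-wise MAEs, assuming each test image comes with a mask that assigns every pixel to one of the three regions; the masks and names here are our own illustrative assumptions rather than artifacts shipped with the dataset:\n\\begin{verbatim}\nimport numpy as np\n\n# Region-wise MAE: mask pixels are 0 (far_view), 1 (mid_view), or\n# 2 (near_view); per-region counts are sums of the density maps\n# restricted to each region.\ndef region_mae(gt_maps, pred_maps, masks, n_regions=3):\n    errs = np.zeros(n_regions)\n    for gt, pred, mask in zip(gt_maps, pred_maps, masks):\n        for r in range(n_regions):\n            m = (mask == r)\n            errs[r] += abs(gt[m].sum() - pred[m].sum())\n    return errs \/ len(gt_maps)\n\nh, w = 96, 128\nmask = np.zeros((h, w), dtype=int)\nmask[32:64] = 1    # middle horizontal band\nmask[64:] = 2      # bottom band, closest to the camera\nmaes = region_mae([np.random.rand(h, w)], [np.random.rand(h, w)], [mask])\n\\end{verbatim}\n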
\\begin{figure}[tbh]\n\t\\centering\n\t\\includegraphics[scale=0.26]{fig7.png}\n\t\\caption{Visualization analysis on the ShanghaiTech Part A dataset. Compared with the baseline network, MSFANet effectively reduces the density estimation errors in different scenes, especially the high-density regions far away from the camera.}\n\t\\label{fig7}\n\\end{figure}\nAs presented in Figure~\\ref{fig6}, compared with the baseline network, the MSFANet method indeed reduces the counting errors in the different distance regions, especially in the far\\_view and mid\\_view regions. Specifically, our MSFANet reduces the MAEs of the far\\_view and mid\\_view regions by 20\\% and 8.4\\%, respectively.\n\nExamples of the visualization analysis between the baseline network and our MSFANet on ShanghaiTech Part A are presented in Figure~\\ref{fig7}. It can be seen that our MSFANet has much more accurate density estimations in all three regions.\n\n\n\n\\subsection{Results and Analysis}\nWe evaluate the proposed MSFANet method against eighteen state-of-the-art methods~\\cite{zhang2019relational,LI2020106485,GUO2021106691,jiang2019crowd,miao2020shallow,liu2019context,yang2020reverse,yan2019perspective,luo2020hybrid,wan2021generalized,xiong2019open,liu2020adaptive,hu2020count,abousamra2021localization,ma2021learning,jiang2020attention,liu2020weighing,bai2020adaptive} on the ShanghaiTech Part A~\\cite{zhang2016single}, UCF\\_CC\\_50~\\cite{idrees2013multi}, UCF-QNRF~\\cite{idrees2018composition}, and WorldExpo'10~\\cite{zhang2015cross} datasets. We use the MAE and MSE metrics to evaluate these methods. In addition, we rank the methods according to MAE. The detailed experimental results are presented in Table~\\ref{tableResult}. \n\n\\textbf{ShanghaiTech Part A Dataset.}\nFor simplicity, SHTech PartA represents the ShanghaiTech Part A dataset in Table~\\ref{tableResult}. On this dataset, our method achieves the best result with an MAE of 54.67, which is 1.3\\% lower than that of ADSCNet~\\cite{bai2020adaptive}.\n\n\\textbf{UCF\\_CC\\_50 Dataset.}\nFollowing the standard protocol adopted in~\\cite{idrees2013multi}, we perform 5-fold cross-validation to evaluate the proposed MSFANet on the UCF\\_CC\\_50 dataset. As shown in Table~\\ref{tableResult}, our MSFANet outperforms all other methods with the lowest MAE of 159.1, which is 9.0\\% lower than that of the second-best ASNet~\\cite{jiang2020attention}. \n\n\\textbf{UCF-QNRF Dataset.}\nOn the UCF-QNRF dataset, since most of the original images are very large, we resize the larger side of each input image to 2048 pixels and keep the aspect ratio of the image unchanged. Our method achieves an MAE of 86.24, which ranks 4th among the state-of-the-art methods. \n\n\n\\begin{table*}[t!]\n\t\\begin{center}\n\t\t\\resizebox{\\textwidth}{!}{\n\t\t\t\\setlength{\\tabcolsep}{3pt}\n\t\t\t\\begin{tabular}{|l|ccc|ccc|ccc|ccccccc|c|}\n\t\t\t\t\\hline\n\t\t\t\t&\\multicolumn{3}{|c|}{SHTech Part A}&\\multicolumn{3}{|c|}{UCF\\_CC\\_50}\n\t\t\t\t&\\multicolumn{3}{|c|}{UCF-QNRF}\n\t\t\t\t&\\multicolumn{7}{|c|}{WorldExpo'10}&\\\\\n\t\t\t\t\\hline\n\t\t\t\tMethod &MAE&MSE&R.\n\t\t\t\t&MAE&MSE&R. \n\t\t\t\t&MAE&MSE&R.\n\t\t\t\t&S1&S2&S3&S4&S5&Avg.\n\t\t\t\t&R.&avg.
R.\\\\ \t\t\t\t\n\t\t\t\t\\hline\n\t\t\t\t\\hline\n\t\t\t\n\t\t\t\n\t\t\t\tTEDnet~\\cite{jiang2019crowd} &64.2 &109.1 &18\n\t\t\t\t&249.4 &354.5 &14\n\t\t\t\t&113 &188 &15\n\t\t\t\t&2.3 &10.1 &11.3 &13.8 &2.6 &8.0 &8 \n\t\t\t\t&13.75 \\\\\n\t\t\t\t\n\t\t\t\tDSA-Net~\\cite{LI2020106485} &67.4 &104.6 &19\n\t\t\t\t&- &- &-\n\t\t\t\t&- &- &-\n\t\t\t\t&2.4 &11.2 &12.7 &9.5 &2.4 &7.6 &7\n\t\t\t\t&13 \\\\\n\t\t\t\t\n\t\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\tSDANet~\\cite{miao2020shallow} &63.6 &101.8 &17 \n\t\t\t\t&227.6 &316.4 &11\n\t\t\t\t&-&-&- \n\t\t\t\t&2.0 &14.3 &12.5 &9.5 &\\textbf{2.5} &8.16 &10 \n\t\t\t\t&12.67 \\\\\n\t\t\t\t\n\t\t\t\tRANet~\\cite{zhang2019relational}&59.4 &102.0 &10 \n\t\t\t\t&239.8 &319.4 &12\n\t\t\t\t&111 &190 &14\n\t\t\t\t&-&-&-&-&-&-&-\n\t\t\t\t&12 \\\\ \n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\tCAN~\\cite{liu2019context}&62.3 &100.0 &16\n\t\t\t\t&212.2 &243.7 & 10\n\t\t\t\t&107 &183 &13\n\t\t\t\t&2.9 &12.0 &10.0 &7.9 &4.3 &7.4 &6\n\t\t\t\t&10.5 \\\\\n\t\t\t\t\n\t\t\t\tRPNet~\\cite{yang2020reverse} &61.2 & 91.9 &12 \n\t\t\t\t&-&-&- \n\t\t\t\t&-&-&- \n\t\t\t\t&2.4 &10.2 &9.7 &11.5 &3.8 &8.2 &11\n\t\t\t\t&11.5 \\\\\n\t\t\t\t\n\t\t\t\t\n\t\t\t\tPGCNet~\\cite{yan2019perspective} &57.0 &\\textbf{86.0} &6\n\t\t\t\t&244.6 &361.2 &13\n\t\t\t\t&-&-&-\n\t\t\t\t&2.5 &12.7 &\\textbf{8.4} &13.7 &3.2 &8.1 &9\n\t\t\t\t&9.33 \\\\\n\t\t\t\t\n\t\t\t\tHyGnn~\\cite{luo2020hybrid} &60.2 &94.5 &11\n\t\t\t\t&184.4 &270.1 &6\n\t\t\t\t&100.8 &185.3 &10\n\t\t\t\t&-&-&-&-&-&-&-\n\t\t\t\t&9 \\\\\n\t\t\t\t\n\t\t\t\tTopoCount~\\cite{abousamra2021localization}&61.2 &104.6 &13\n\t\t\t\t&184.1 & 258.3 &5\n\t\t\t\t&89.0 & 159.0 &7\n\t\t\t\t&-&-&-&-&-&-&- \n\t\t\t\t&8.33 \\\\\n\t\t\t\t\n\t\t\t\tGLoss~\\cite{wan2021generalized}&61.3 &95.4 &14\n\t\t\t\t&-&-&- \n\t\t\t\t&84.3 &147.5 &3\n\t\t\t\t&-&-&-&-&-&-&- \n\t\t\t\t&8.5 \\\\\n\t\t\t\t\n\t\t\t\tAMRNet~\\cite{liu2020adaptive} &61.59 &98.36 &15 \n\t\t\t\t&184.0 &265.8 & 4\n\t\t\t\t&86.6 &152.2 &5\n\t\t\t\t&-&-&-&-&-&-&- \n\t\t\t\t&8\\\\\n\t\t\t\t\n\t\t\t\tS-DCNet~\\cite{xiong2019open}&58.3 &95.0 &9\n\t\t\t\t&204.2 &301.3 &8\n\t\t\t\t&104.4 &176.1 &12 \n\t\t\t\t&\\textbf{1.57} &9.51 &9.46 &10.35 &2.49 &6.67 &3 \n\t\t\t\t&8 \\\\\n\t\t\t\t\n\t\t\t\n\t\t\t\t\n\t\t\t\tAMSNet~\\cite{hu2020count} &56.7 &93.4 &5\n\t\t\t\t&208.4 &297.3 &9\n\t\t\t\t&101.8 &163.2 &11\n\t\t\t\t&1.6 &\\textbf{8.8} &10.8 &10.4 &\\textbf{2.5} &6.8 &5 \n\t\t\t\t&7.5\\\\\n\t\t\t\t\n\t\t\t\tCHANet~\\cite{GUO2021106691} &55.8 &95.6 &3\n\t\t\t\t&- &- &-\n\t\t\t\t&98 &177 &9\n\t\t\t\t&1.4 &11.6 &8.7 &8.9 &2.8 &6.7 &4\n\t\t\t\t&5.33 \\\\\n\t\t\t\n\t\t\t\t\n\t\t\t\tUOT~\\cite{ma2021learning} &58.1 &95.9 &8\n\t\t\t\t&-&-&- \n\t\t\t\t&83.3 &142.3 &2\n\t\t\t\t&-&-&-&-&-&-&- \n\t\t\t\t&5 \\\\\n\t\t\t\t\n\t\t\t\tASNet~\\cite{jiang2020attention} &57.78 &90.13 &7\n\t\t\t\t&174.84 &251.63 & 2\n\t\t\t\t&91.59 &159.71 &8\n\t\t\t\t&2.22 &10.11 &8.89 &\\textbf{7.14} &4.84 &6.64 &2 \n\t\t\t\t&4.75 \\\\\n\t\t\t\t\n\t\t\t\tLibraNet~\\cite{liu2020weighing} &55.9 &97.1 &4 \n\t\t\t\t&181.2 &262.2 &3\n\t\t\t\t&88.1 &143.7 &6\n\t\t\t\t&-&-&-&-&-&-&- \n\t\t\t\t&4.33\\\\\n\t\t\t\t\n\t\t\t\tADSCNet~\\cite{bai2020adaptive} &55.4 &97.7 &2 \n\t\t\t\t&198.4 &267.3 &7\n\t\t\t\t&\\textbf{71.3} &\\textbf{132.5} &\\textbf{1}\n\t\t\t\t&-&-&-&-&-&-&- \n\t\t\t\t&3.33\\\\\n\t\t\t\t\n\t\t\t\tMSFANet & \\textbf{54.67} & 89.89 &\\textbf{1} \n\t\t\t\t& \\textbf{159.1} & \\textbf{230.6}&\\textbf{1} \n\t\t\t\t& 86.24 & 148.20 &4\n\t\t\t\t&1.59 & 11.25 & 8.92 & 8.50 & 2.6 & \\textbf{6.57} &\\textbf{1} 
\n\t\t\t\t&\\textbf{1.75}\\\\\n\t\t\t\t\\hline\n\t\t\t\\end{tabular}\n\t\t}\\vspace{-0.3cm}\n\t\\end{center}\n\t\\caption{Comparison of the proposed MSFANet with other state-of-the-art methods on four challenging datasets. The average ranking (denoted by Avg. R.) is obtained by dividing the sum of all rankings that a method obtains by the number of datasets it is evaluated on. Avg. R. reflects the counting performance of a method across different crowd scenes.}\n\t\\vspace{-0.3cm}\n\t\\label{tableResult}\n\\end{table*}\n\n\\textbf{WorldExpo'10 Dataset.}\nThe WorldExpo'10 dataset \\cite{zhang2015cross} provides Region of Interest (ROI) along with perspective maps for both training and test scenes. To be consistent with previous\nwork~\\cite{zhang2015cross}, we prune the last convolutional layer so that the features out of the ROI regions are set to zero. In addition, only the MAE metric is used to evaluate the results.\nFirst, the MAE of each test scene is calculated, and then all the results are averaged to evaluate the performance of MSFANet across different test scenes. It is observed that the proposed MSFANet outperforms other approaches with the lowest average MAE of 6.57 while achieving competitive performance in each test scene.\n\nFollowing ASNet~\\cite{jiang2020attention}, we also adopt the average ranking metric to comprehensively evaluate the proposed method. Avg. R. in Table~\\ref{tableResult} represents the average ranking, which is obtained by dividing the sum of the rankings of a method on each dataset by the number of datasets; datasets on which a method reports no results do not participate in the calculation. It is noted in Table~\\ref{tableResult} that our method ranks first in the average ranking.\n\n\n\n\n\\section{Conclusion}\n\nIn this paper, we have proposed a novel multi-scale feature aggregation network (MSFANet) that merges features of different receptive fields to reduce the density estimation errors, especially in high-density regions. To this end, we introduce two feature aggregation modules, named short aggregation (ShortAgg) and skip aggregation (SkipAgg), to jointly enhance the feature representation ability of the counting network.\nShortAgg gradually merges features of adjacent scales, and SkipAgg directly propagates the low-level transformer features to the high-level CNN features. The combination of ShortAgg and SkipAgg makes the output features of MSFANet robust to different crowd distributions, especially in high-density scenes. \nFurthermore, we introduce the global-and-local counting loss by combining the standard Euclidean distance loss and the proposed pooling loss (PLoss). PLoss utilizes a locality-aware loss kernel to generate the counting loss in the pooling manner, so as to alleviate the impact of the non-uniform crowd data on the training procedure.\nExtensive experiments on four challenging datasets demonstrate that the proposed method achieves promising results via the two easy-to-implement feature aggregation modules. \n\n\n\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}